Part 1) – The problem description:
I have run into an error that I have been able to bypass to a degree (see Part 2), but not completely. The error message is:
Moving data to: /tmp/clickstream/om_primary
Failed with exception Unable to move source hdfs://<cluster name:ip>/tmp/clickstream/om_primary/.hive-staging_hive_2015-10-21_17-36-55_357_8997113895856879258-1/-ext-10000 to destination /tmp/clickstream/om_primary
The Hive HQL that causes this takes the following form:
insert overwrite directory '/tmp/clickstream/om_primary'
select <field1> [, <field2> ] from <db>.<tablename> where partition_key = <value>;
This problem occurred on Hortonworks 2.3 and is definitely related to the size of the output, not to permissions. I know this because the same query succeeds if I add "limit 1000;" to it, while the desired query produces several million records and fails. In other words, when the "limit N;" clause is removed, the error message above appears.
This indicates to me that there is either a configuration issue in Hortonworks/Hive or an actual bug in Hive or Hadoop that produces this error when the output is "too large". If it were a permissions problem, the error would appear in every case. (Processing large data sets is exactly what Hadoop was designed for, so this error is particularly irksome.)
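For concreteness, here is a minimal sketch of the two variants I tested (the path and the field/table placeholders are the same ones shown above, not my real schema):

-- works: output capped at 1000 rows
insert overwrite directory '/tmp/clickstream/om_primary'
select <field1>, <field2> from <db>.<tablename> where partition_key = <value> limit 1000;

-- fails with the "Unable to move source ... to destination" error: full output, several million rows
insert overwrite directory '/tmp/clickstream/om_primary'
select <field1>, <field2> from <db>.<tablename> where partition_key = <value>;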
Part 2)
Initially, the query failed for anything more than 1000 records. I added the following lines before the query and found that I could increase the number of records written successfully to the target directory, but only up to 30000. A "LIMIT 40000" results in the same error message as above.
(Caveat: some of these settings may not actually take effect when issued as SET commands within a Hive session, even though Hive produced no error message; I am aware of that. My goal was to find a set of configuration changes that would allow more than 1000 rows to be produced, then adjust accordingly to get the entire query to work. A sketch for checking which settings actually took effect follows the list below.)
The settings I changed are below:
SET dfs.stream-buffer-size=131072;                          -- HDFS stream buffer size in bytes
SET mapreduce.input.fileinputformat.split.maxsize=1024000000;  -- maximum input split size in bytes
SET dfs.client.mmap.cache.size=4096;                        -- max mmap entries cached by the HDFS client
SET hive.merge.size.per.task=1024000000;                    -- target size of merged output files per task, in bytes
SET dfs.blocksize=1073741824;                               -- HDFS block size (1 GB)
SET hive.exec.compress.output=false;                        -- do not compress the final query output
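Regarding the caveat above, this is the kind of check I mean; a minimal sketch for verifying the effective values from within the same Hive session (issuing SET with a property name and no value prints its current value):

-- print the effective value of individual properties in the current session
SET hive.merge.size.per.task;
SET hive.exec.compress.output;
SET dfs.blocksize;
-- or dump every property (Hive and Hadoop) and scan the output for the ones of interest
SET -v;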