Thanks for the reply. I did solve the protobuf issue by upgrading to 2.5, but then
Hive 0.12 also started showing the same issue as 0.13 and 0.14.
I was working through the CLI.
It turns out the issue was due to the space available (or rather, not available) to the
data node. Let me elaborate for others on the list.
I had about 2GB
Amit,
I am not sure about your datanode issue, but it is definitely not related to ORC
writing 500 rows of the kv1.txt file.
Also, a stripe size of 4MB is on the low side. The default ORC stripe size of
256MB is chosen for better data read efficiency. Having many small
stripes will also
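As a sketch of how a stripe size is set, ORC exposes it as a table property; the table name, column names, and byte value below are illustrative, not taken from the thread:

```sql
-- Hypothetical table created with the 256MB default stripe size
-- (268435456 bytes) made explicit via TBLPROPERTIES.
CREATE TABLE kv_orc (key INT, value STRING)
STORED AS ORC
TBLPROPERTIES ("orc.stripe.size" = "268435456");
```

Lowering the value to something like 4MB would produce many small stripes per file, which tends to hurt read efficiency.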
Hi All,
I am just trying to run some simple tests to see the speedup in Hive queries
with Hive 0.14 (trunk version as of this morning). I tried a sample
test case to start with. First I wanted to see how much I can speed things up
using the ORC format.
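A minimal sketch of that kind of test, assuming the standard Hive sample file kv1.txt (the path, delimiter, and table names here are assumptions):

```sql
-- Load the sample data into a plain text table, then copy it into
-- an ORC table for comparison. Hypothetical names and path.
CREATE TABLE src (key INT, value STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';

LOAD DATA LOCAL INPATH 'examples/files/kv1.txt' INTO TABLE src;

CREATE TABLE src_orc STORED AS ORC AS SELECT * FROM src;
```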
However for some reason I can't insert data into the
I checked out and built Hive 0.13 and tried again, with the same results, i.e.
File /tmp/hive-hduser/hive_2014-04-04_20-34-43_550_7470522328893486504-1/_task_tmp.-ext-10002/_tmp.00_3
could only be replicated to 0 nodes instead of minReplication (=1).
    at ...eRpcServer.addBlock(NameNodeRpcServer.java:555)
Amit,
Are you executing your SELECT for the conversion to ORC via Beeline or the Hive
CLI? From looking at your logs, it appears that you do not have permission
in HDFS to write the resultant ORC data. Check the permissions in HDFS to
ensure that your user can write to the Hive warehouse directory.
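One way to check, without leaving Hive, is the CLI's built-in `dfs` command; the warehouse path below is the common default and may differ in your configuration:

```sql
-- Run inside the Hive CLI: list the warehouse directory and check
-- the owner/permission columns for your user. Path is an assumption.
dfs -ls /user/hive/warehouse;
```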