Hi,
I am running a multi-way join over 100 GB of TPC-DS data, with a bad join
order, on our cluster. Every run fails with the exception below in the log
file. Has anyone run into this? Could it be caused by the data volume
exceeding the available disk space?

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/tmp/temp-1180529634/tmp-491747926/_temporary/_attempt_201607142217_0115_r_000000_0/part-r-00000
could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
        at sun.reflect.GeneratedMethodAccessor851.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        ...
Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias limit_data

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias limit_data
        at org.apache.pig.PigServer.openIterator(PigServer.java:935)
        at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
        at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
        ...
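
For what it's worth, my next step is to rule out full or unreachable
DataNodes, since "could only be replicated to 0 nodes" means the NameNode
found no DataNode able to accept the block. This is the check I plan to run
with the stock Hadoop CLI (standard commands, nothing specific to our setup):

    # Per-DataNode configured capacity, used, and remaining space;
    # dead or decommissioned nodes also show up in this report.
    hadoop dfsadmin -report

    # Health of the /tmp tree that Pig uses for its temporary output;
    # under-replicated or missing blocks are listed.
    hadoop fsck /tmp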

More detailed information is in the attachment.
