Are you using HCatLoader? If so, you will need the fix from
https://issues.apache.org/jira/browse/PIG-4443, which is in Pig 0.15, to be
released this week. Having the fix from
https://issues.apache.org/jira/browse/HIVE-9845 for HCatLoader will also
avoid the issue.

Regards,
Rohini

On Tue, May 26, 2015 at 2:20 AM, patcharee <[email protected]>
wrote:

> Thanks a lot for the input.
>
> From my debug log below, the problem seems to be that
> ipc.maximum.data.length is too small.
>
> 2015-05-26 10:10:48,376 INFO [Socket Reader #1 for port 52017] ipc.Server:
> Socket Reader #1 for port 52017: readAndProcess from client 10.10.255.241
> threw exception [java.io.IOException: Requested data length 166822274 is
> longer than maximum configured RPC length 67108864.  RPC came from
> 10.10.255.241]
> java.io.IOException: Requested data length 166822274 is longer than
> maximum configured RPC length 67108864.  RPC came from 10.10.255.241
>     at
> org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1459)
>     at
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1521)
>     at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:762)
>     at
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:636)
>     at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:607)
>
> However, I am curious why the requested data size, 166822274, is bigger
> than my HDFS max block size (128 MB). Do you have an idea?
>
> BR,
> Patcharee
>
>
> On 22. mai 2015 19:58, Johannes Zillmann wrote:
>
>> Hey Patcharee,
>>
>> I sometimes faced that when the DAG, or the properties/objects it
>> contains, became quite big. Bumping up ipc.maximum.data.length, e.g.
>> ipc.maximum.data.length=134217728, usually helped!
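>>
>> For example, the setting can be raised in core-site.xml (a sketch; the
>> 134217728 value is the 128 MB figure above, and which nodes need the
>> change depends on your deployment):
>>
>>   <property>
>>     <name>ipc.maximum.data.length</name>
>>     <value>134217728</value>
>>   </property>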
>>
>> best
>> Johannes
>>
>>> On 22 May 2015, at 19:49, Rohini Palaniswamy <[email protected]>
>>> wrote:
>>>
>>> Can you check whether the hadoop version on your cluster and the version
>>> of the hadoop jars on your pig classpath are the same? Also, are the tez
>>> jars on the pig classpath and the tez jars installed in hdfs the same
>>> version?
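>>>
>>> For example, one quick way to compare (a sketch; the paths are
>>> assumptions and will vary with your installation):
>>>
>>>   hadoop version
>>>   ls $PIG_HOME/lib | grep hadoop   # hadoop jars on the pig classpath
>>>   ls $TEZ_HOME | grep tez          # local tez jars
>>>   hdfs dfs -ls /apps/tez           # tez jars in hdfs (tez.lib.uris)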
>>>
>>> -Rohini
>>>
>>> On Fri, May 22, 2015 at 10:26 AM, Hitesh Shah <[email protected]> wrote:
>>> Hello Patcharee
>>>
>>> Could you start by sending a mail to users@pig to see if they have come
>>> across this issue first? Also, can you check the application master logs
>>> to see if there are any errors (it might be useful to enable DEBUG level
>>> logging to get more information)?
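>>>
>>> For example (assuming Tez's tez.am.log.level property; please verify it
>>> against your Tez version's configuration docs):
>>>
>>>   set tez.am.log.level 'DEBUG';   -- at the top of the pig script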
>>>
>>> thanks
>>> — Hitesh
>>>
>>> On May 22, 2015, at 5:50 AM, patcharee <[email protected]>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I ran a pig script on tez and got the EOFException. Even after checking
>>>> http://wiki.apache.org/hadoop/EOFException, I have no idea how to fix
>>>> it. However, I did not get the exception when I executed this pig
>>>> script on MR.
>>>>
>>>> I am using HadoopVersion: 2.6.0.2.2.4.2-2, PigVersion:
>>>> 0.14.0.2.2.4.2-2, TezVersion: 0.5.2.2.2.4.2-2
>>>>
>>>> I will appreciate any suggestions. Thanks.
>>>>
>>>> 2015-05-22 14:44:13,638 [PigTezLauncher-0] ERROR
>>>> org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Cannot submit
>>>> DAG - Application id: application_1432237888868_0133
>>>> org.apache.tez.dag.api.TezException:
>>>> com.google.protobuf.ServiceException: java.io.EOFException: End of File
>>>> Exception between local host is: "compute-10-0.local/10.10.255.241";
>>>> destination host is: "compute-10-3.local":47111; : java.io.EOFException;
>>>> For more details see:  http://wiki.apache.org/hadoop/EOFException
>>>>     at
>>>> org.apache.tez.client.TezClient.submitDAGSession(TezClient.java:415)
>>>>     at org.apache.tez.client.TezClient.submitDAG(TezClient.java:351)
>>>>     at
>>>> org.apache.pig.backend.hadoop.executionengine.tez.TezJob.run(TezJob.java:162)
>>>>     at
>>>> org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher$1.run(TezLauncher.java:167)
>>>>     at
>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>     at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>     at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>     at java.lang.Thread.run(Thread.java:744)
>>>> Caused by: com.google.protobuf.ServiceException: java.io.EOFException:
>>>> End of File Exception between local host is: "compute-10-0.local/
>>>> 10.10.255.241"; destination host is: "compute-10-3.local":47111; :
>>>> java.io.EOFException; For more details see:
>>>> http://wiki.apache.org/hadoop/EOFException
>>>>     at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:246)
>>>>     at com.sun.proxy.$Proxy31.submitDAG(Unknown Source)
>>>>     at
>>>> org.apache.tez.client.TezClient.submitDAGSession(TezClient.java:408)
>>>>     ... 8 more
>>>> Caused by: java.io.EOFException: End of File Exception between local
>>>> host is: "compute-10-0.local/10.10.255.241"; destination host is:
>>>> "compute-10-3.local":47111; : java.io.EOFException; For more details see:
>>>> http://wiki.apache.org/hadoop/EOFException
>>>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>>> Method)
>>>>     at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>>>     at
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>>     at
>>>> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
>>>>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1473)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1400)
>>>>     at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>>>     ... 10 more
>>>> Caused by: java.io.EOFException
>>>>     at java.io.DataInputStream.readInt(DataInputStream.java:392)
>>>>     at
>>>> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1072)
>>>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:967)
>>>>
>>>
>>>
>
