[ https://issues.apache.org/jira/browse/HDFS-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243736#comment-17243736 ]

Mithun Antony commented on HDFS-12852:
--------------------------------------

Is there any update on this issue, or is there a related ticket that resolves it?

The issue occurs intermittently while the jars are being copied to the 
distributed cache with the execution mode set to mapreduce batch 
(Hadoop 2.8.5, Pig 0.17.0). Below is the error we are seeing in the grunt log:

2020-11-26 07:10:42,174 INFO mapReduceLayer.JobControlCompiler: This job cannot be converted run in-process
601706 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file://xxx/piggybank-0.17.0.jar to DistributedCache through /tmp/temp1245025900/tmp-574781587/piggybank-0.17.0.jar
2020-11-26 07:10:42,318 WARN hdfs.DataStreamer: DataStreamer Exception
java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at java.io.DataOutputStream.flush(DataOutputStream.java:123)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:766)
601826 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2017: Internal error creating job configuration.
2020-11-26 07:10:42,336 ERROR grunt.Grunt: ERROR 2017: Internal error creating job configuration.
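
For context on the exception itself: java.nio.channels.ClosedByInterruptException is thrown when a thread is interrupted while blocked in an interruptible channel operation, and the channel is closed as a side effect. So the DataStreamer thread writing the jar to HDFS is being interrupted (presumably during client-side job setup/teardown), rather than hitting an actual network fault. A minimal, self-contained sketch of the mechanism, using plain JDK sockets rather than anything Hadoop-specific:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        // Local server that accepts the connection but never reads,
        // so the writer's socket buffers fill and write() blocks.
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));

        Thread writer = new Thread(() -> {
            try (SocketChannel ch = SocketChannel.open(server.getLocalAddress())) {
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                while (true) {       // keep writing until write() blocks
                    ch.write(buf);
                    buf.rewind();
                }
            } catch (ClosedByInterruptException e) {
                // Interrupting a thread blocked in a channel operation
                // closes the channel and raises this exception -- the
                // same class the DataStreamer WARN above reports.
                System.out.println("writer got ClosedByInterruptException");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });

        writer.start();
        server.accept();        // accept, but never read from the peer
        Thread.sleep(1000);     // let the writer block inside write()
        writer.interrupt();     // analogous to the DataStreamer thread being interrupted
        writer.join();
    }
}

This would also explain why a retry succeeds: the retried upload runs on a fresh DataStreamer thread that nothing interrupts.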
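
Since the failure is intermittent and a retry succeeds, one client-side workaround is to retry the upload. This is a sketch only: RetryingCopy is a hypothetical helper, not part of Pig or Hadoop, and Pig normally performs this copy internally in JobControlCompiler, so the sketch assumes you stage the jars yourself with the standard FileSystem API before running the script.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: retry the jar upload a few times, since in our
// case a second attempt (on a fresh DataStreamer thread) succeeds.
public final class RetryingCopy {
    public static void copyWithRetry(FileSystem fs, Path src, Path dst,
                                     int maxAttempts) throws Exception {
        Exception last = new IllegalArgumentException("maxAttempts must be >= 1");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // delSrc=false, overwrite=true
                fs.copyFromLocalFile(false, true, src, dst);
                return;
            } catch (Exception e) {
                last = e;  // e.g. an IOException wrapping ClosedByInterruptException
                if (attempt < maxAttempts) {
                    Thread.sleep(1000L * attempt);  // simple backoff between attempts
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Placeholder paths for illustration only.
        copyWithRetry(fs,
                new Path("file:///local/path/piggybank-0.17.0.jar"),
                new Path("/tmp/staging/piggybank-0.17.0.jar"),
                3);
    }
}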

> pig script failed with 'java.nio.channels.ClosedByInterruptException'
> ---------------------------------------------------------------------
>
>                 Key: HDFS-12852
>                 URL: https://issues.apache.org/jira/browse/HDFS-12852
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.3
>         Environment: SUSE Linux version 3.0.101-0.47.55-xen
> hadoop2.7.3
> pig1.0.4
>            Reporter: Pengfei Yang
>            Priority: Major
>
> My pig script failed while processing some data.
> There is no problem with the data, because it succeeds when I retry.
> [DataStreamer for file /tmp/temp-1118553932/tmp-1389566871/joda-time-2.1.jar block BP-1417581856-XX.XX.XX.XX-1487248328370:blk_1077913318_4172501] WARN org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
> java.nio.channels.ClosedByInterruptException
>         at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
>         at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:508)
> Pig Stack Trace
> ---------------
> ERROR 2017: Internal error creating job configuration.
> org.apache.pig.backend.hadoop.executionengine.JobCreationException: ERROR 2017: Internal error creating job configuration.
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:998)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:323)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:196)
>         at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
>         at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
>         at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
>         at org.apache.pig.PigServer.execute(PigServer.java:1364)
>         at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
>         at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
>         at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
>         at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>         at org.apache.pig.Main.run(Main.java:624)
>         at org.apache.pig.Main.main(Main.java:170)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.nio.channels.ClosedByInterruptException
>         at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
>         at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:508)


