[ https://issues.apache.org/jira/browse/FLUME-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16353280#comment-16353280 ]

jifei_yang commented on FLUME-3054:
-----------------------------------

Hi [~panyuxuan],

I also encountered this problem. How did you solve it? Thank you!

> hflushOrSync method in HDFS Sink should not treat ClosedChannelException as 
> an error 
> -------------------------------------------------------------------------------------
>
>                 Key: FLUME-3054
>                 URL: https://issues.apache.org/jira/browse/FLUME-3054
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: 1.6.0
>            Reporter: Pan Yuxuan
>            Priority: Major
>             Fix For: 1.9.0
>
>         Attachments: FLUME-3054.0001.patch
>
>
> When using the HDFS Sink with multiple threads, we see the following error in the log:
> {code}
> 09 Feb 2017 13:44:14,721 ERROR [hdfs-hsProt6-call-runner-4] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.hflushOrSync:267)  - Error 
> while trying to hflushOrSync!
> 09 Feb 2017 14:54:48,271 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.isUnderReplicated:98)  - 
> Unexpected error while checking replication factor
> java.lang.reflect.InvocationTargetException
>       at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at 
> org.apache.flume.sink.hdfs.AbstractHDFSWriter.getNumCurrentReplicas(AbstractHDFSWriter.java:165)
>       at 
> org.apache.flume.sink.hdfs.AbstractHDFSWriter.isUnderReplicated(AbstractHDFSWriter.java:84)
>       at 
> org.apache.flume.sink.hdfs.BucketWriter.shouldRotate(BucketWriter.java:583)
>       at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:518)
>       at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
>       at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>       at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1665)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.getCurrentBlockReplication(DFSOutputStream.java:2151)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.getNumCurrentReplicas(DFSOutputStream.java:2140)
>       ... 11 more
> 09 Feb 2017 14:54:48,277 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:459)  - process failed
> org.apache.flume.auth.SecurityException: Privileged action failed
>       at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:49)
>       at 
> org.apache.flume.auth.KerberosAuthenticator.execute(KerberosAuthenticator.java:63)
>       at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1665)
>       at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
>       at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>       at java.io.DataOutputStream.write(DataOutputStream.java:107)
>       at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>       at 
> org.apache.flume.serialization.BodyTextEventSerializer.write(BodyTextEventSerializer.java:71)
>       at 
> org.apache.flume.sink.hdfs.HDFSDataStream.append(HDFSDataStream.java:124)
>       at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:550)
>       at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:547)
>       at 
> org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1858)
>       at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:47)
>       ... 6 more
> 09 Feb 2017 14:54:48,280 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver 
> event. Exception follows.
> org.apache.flume.EventDeliveryException: 
> org.apache.flume.auth.SecurityException: Privileged action failed
>       at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:463)
>       at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>       at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.flume.auth.SecurityException: Privileged action failed
>       at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:49)
>       at 
> org.apache.flume.auth.KerberosAuthenticator.execute(KerberosAuthenticator.java:63)
>       at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       ... 1 more
> Caused by: java.nio.channels.ClosedChannelException
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1665)
>       at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
>       at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>       at java.io.DataOutputStream.write(DataOutputStream.java:107)
>       at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>       at 
> org.apache.flume.serialization.BodyTextEventSerializer.write(BodyTextEventSerializer.java:71)
>       at 
> org.apache.flume.sink.hdfs.HDFSDataStream.append(HDFSDataStream.java:124)
>       at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:550)
>       at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:547)
>       at 
> org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1858)
>       at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:47)
>       ... 6 more
> {code}
> The stack traces show that when hflushOrSync() is called, the 
> DataOutputStream has already been closed by another thread, so it throws 
> ClosedChannelException. The HDFS sink itself is not harmed by this: the 
> exception is only a reminder that the stream is already closed. So 
> hflushOrSync() should ignore the ClosedChannelException (or log it at a 
> lower level) instead of reporting it as an error.
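> 
> A minimal sketch of that handling, for illustration only (this is not the 
> attached FLUME-3054.0001.patch): the class name and the direct hflush() call 
> are simplifications, since the real AbstractHDFSWriter flushes via 
> reflection, but the catch-and-demote pattern is the point.
> {code}
> import java.io.IOException;
> import java.nio.channels.ClosedChannelException;
> 
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> /** Illustrative sketch only, not the committed FLUME-3054 change. */
> public class HflushOrSyncSketch {
> 
>   private static final Logger logger =
>       LoggerFactory.getLogger(HflushOrSyncSketch.class);
> 
>   // A ClosedChannelException means another thread already closed the
>   // stream, which is harmless for the sink, so it is logged at INFO and
>   // swallowed instead of being reported as an ERROR.
>   static void hflushOrSync(FSDataOutputStream os) throws IOException {
>     try {
>       os.hflush();
>     } catch (ClosedChannelException e) {
>       logger.info("Stream already closed while trying to hflushOrSync; "
>           + "ignoring", e);
>     } catch (IOException e) {
>       logger.error("Error while trying to hflushOrSync!", e);
>       throw e;
>     }
>   }
> }
> {code}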


