[ https://issues.apache.org/jira/browse/FLUME-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13261791#comment-13261791 ]

[email protected] commented on FLUME-1112:
------------------------------------------------------


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/4683/#review7220
-----------------------------------------------------------

Ship it!


lgtm

- Brock


On 2012-04-09 12:24:59, Inder Singh wrote:
bq.  
bq.  -----------------------------------------------------------
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/4683/
bq.  -----------------------------------------------------------
bq.  
bq.  (Updated 2012-04-09 12:24:59)
bq.  
bq.  
bq.  Review request for Flume and Arvind Prabhakar.
bq.  
bq.  
bq.  Summary
bq.  -------
bq.  
bq.  HDFSCompressedDataStream append will fail when append=true and it tries to append to a nonexistent file.
bq.  
bq.  
bq.  This addresses bug FLUME-1112.
bq.      https://issues.apache.org/jira/browse/FLUME-1112
bq.  
bq.  
bq.  Diffs
bq.  -----
bq.  
bq.    trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSCompressedDataStream.java 1311206
bq.  
bq.  Diff: https://reviews.apache.org/r/4683/diff
bq.  
bq.  
bq.  Testing
bq.  -------
bq.  
bq.  Ran through the Flume test cases.
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Inder
bq.  
bq.


                
> CLONE - Issue with HDFSEventSink for append support for HDFSCompressedDataStream
> --------------------------------------------------------------------------------
>
>                 Key: FLUME-1112
>                 URL: https://issues.apache.org/jira/browse/FLUME-1112
>             Project: Flume
>          Issue Type: Bug
>            Reporter: Inder Singh
>            Assignee: Inder Singh
>             Fix For: v1.2.0
>
>         Attachments: FLUME-1112-1.patch
>
>
> Enabled the append feature in HDFS and in the Flume conf, and hit the following exception:
> 2012-03-29 13:27:03,284 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:370)] HDFS IO error
> java.io.FileNotFoundException: java.io.FileNotFoundException: failed to append to non-existent file /flume.0.tmp on client 127.0.0.1
>         at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:586)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:209)
>         at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:630)
>         at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:42)
>         at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:82)
>         at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:102)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:342)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:65)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:148)
>         at java.lang.Thread.run(Thread.java:680)
> Caused by: org.apache.hadoop.ipc.RemoteException: java.io.FileNotFoundException: failed to append to non-existent file /flume.0.tmp on client 127.0.0.1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1187)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1357)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:600)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Looking at the code and debugging further, it looks like the case of the first write when append="true" isn't handled.
> Here is the code in question, in HDFSDataStream.open:
>     if (conf.getBoolean("hdfs.append.support", false) == true) {
>       outStream = hdfs.append(dstPath);
>     } else {
>       outStream = hdfs.create(dstPath);
>     }
> It should be something like this:
>     if (conf.getBoolean("hdfs.append.support", false) && hdfs.isFile(dstPath)) {
>       outStream = hdfs.append(dstPath);
>     } else {
>       outStream = hdfs.create(dstPath);
>     }
>  
> Refer to http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html - append only works on an existing file.
> Pardon my ignorance and correct me if I am missing something here.
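
For reference, here is a minimal, self-contained sketch of the append-or-create guard proposed above, written against the public Hadoop FileSystem API (FileSystem.isFile, append, create). The class and method names (AppendOrCreateExample, openForWrite) are illustrative only and are not Flume code; the same check would apply wherever HDFSDataStream or HDFSCompressedDataStream opens its output stream.

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class AppendOrCreateExample {

    // Open dstPath for writing. Append only when append support is enabled
    // AND the file already exists; otherwise fall back to create(). This is
    // the guard the proposed change adds.
    static FSDataOutputStream openForWrite(FileSystem hdfs, Configuration conf,
        Path dstPath) throws IOException {
      boolean appendEnabled = conf.getBoolean("hdfs.append.support", false);
      if (appendEnabled && hdfs.isFile(dstPath)) {
        return hdfs.append(dstPath);   // safe: the file exists
      }
      return hdfs.create(dstPath);     // first write, or append disabled
    }

    public static void main(String[] args) throws IOException {
      Configuration conf = new Configuration();
      FileSystem hdfs = FileSystem.get(conf);
      Path dstPath = new Path("/flume.0.tmp");
      FSDataOutputStream out = openForWrite(hdfs, conf, dstPath);
      try {
        out.writeBytes("event payload\n");
      } finally {
        out.close();
      }
    }
  }

Falling back to create() on the first write avoids the FileNotFoundException in the stack trace above, while later opens of the same file can still take the append path when hdfs.append.support is enabled.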

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
