[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread amith (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245119#comment-13245119 ]

amith commented on HDFS-3179:
------------------------------

Hi Zhanwei Wang,

I don't know exactly what your test script does, but this looks similar to 
HDFS-3091.

Can you check this once?
https://issues.apache.org/jira/browse/HDFS-3091

Please correct me if I am wrong :)

failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster
-----------------------------------------------------------------------------------------------------------------------

                Key: HDFS-3179
                URL: https://issues.apache.org/jira/browse/HDFS-3179
            Project: Hadoop HDFS
         Issue Type: Bug
         Components: data-node
   Affects Versions: 0.23.2
           Reporter: Zhanwei.Wang
           Priority: Critical

Create a single datanode cluster
disable permissions
enable webhdfs
start hdfs
run the test script

expected result:
a file named "test" is created and its content is "testtest"

the result I got:
HDFS throws an exception on the second append operation.
{code}
./test.sh
{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
{code}
Log in datanode:
{code}
2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
{code}
test.sh
{code}
#!/bin/sh
echo test > test.txt
curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE"
curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
{code}





[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245140#comment-13245140 ]

Uma Maheswara Rao G commented on HDFS-3179:
-------------------------------------------

@Zhanwei, how many DNs are running in your test cluster?





[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Zhanwei.Wang (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245476#comment-13245476 ]

Zhanwei.Wang commented on HDFS-3179:
------------------------------------

@Uma and amith
It seems to be the same issue as HDFS-3091.

I configured only one datanode and created a file with the default number of 
replicas (3). The condition existings(1) <= replication/2 (3/2 == 1) is 
satisfied, so the client tries to add a datanode to the pipeline, but it 
cannot, because no extra nodes exist in the cluster.

HDFS-3091 should be patched to the 0.23.2 branch.
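
For reference, a rough sketch of that condition as described above (simplified, not the actual Hadoop source):
{code}
// Rough sketch of the DEFAULT replace-datanode-on-failure condition
// described above (simplified; not the actual Hadoop source).
static boolean shouldTryToAddDatanode(int existings, int replication,
                                      boolean isAppend, boolean isHflushed) {
  if (existings == 0 || existings >= replication) {
    return false;                 // nothing to do, or the pipeline is full
  }
  if (replication < 3) {
    return false;                 // the policy only applies for >= 3 replicas
  }
  // With one datanode and replication 3: existings(1) <= replication/2
  // (3/2 == 1 in integer division), so the client tries to add a node,
  // and fails because the cluster has no extra datanode to offer.
  return existings <= replication / 2 || isAppend || isHflushed;
}
{code}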






[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Zhanwei.Wang (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245501#comment-13245501 ]

Zhanwei.Wang commented on HDFS-3179:
------------------------------------

@Uma and amith
Another question: in this test script, I first create a new EMPTY file and 
then append to it twice.
The first append succeeds because the file is empty: to create a pipeline, the 
stage is PIPELINE_SETUP_CREATE and the policy is not checked.
The second append fails because the stage is PIPELINE_SETUP_APPEND and the 
policy is checked.

So from the user's point of view, the first append succeeds while the second 
fails. Is that a good idea?

{code}
// get new block from namenode
if (stage == BlockConstructionStage.PIPELINE_SETUP_CREATE) {
  if (DFSClient.LOG.isDebugEnabled()) {
    DFSClient.LOG.debug("Allocating new block");
  }
  nodes = nextBlockOutputStream(src);
  initDataStreaming();
} else if (stage == BlockConstructionStage.PIPELINE_SETUP_APPEND) {
  if (DFSClient.LOG.isDebugEnabled()) {
    DFSClient.LOG.debug("Append to block " + block);
  }
  setupPipelineForAppendOrRecovery();  // check the policy here
  initDataStreaming();
}
{code}
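
Judging from the stack trace in the description, the check that actually throws is in DFSOutputStream$DataStreamer.findNewDatanode(..). A minimal sketch, reconstructed from the error message rather than copied from the source:
{code}
// Minimal sketch of the check behind the error in the log above,
// reconstructed from the exception message (not copied from the actual
// DFSOutputStream source). Assumes: import java.io.IOException;
// import java.util.Arrays; and the HDFS DatanodeInfo type.
static void checkAddedDatanode(DatanodeInfo[] nodes, DatanodeInfo[] original)
    throws IOException {
  // After asking the namenode for one extra datanode, the new pipeline
  // must be exactly one node longer than the old one. On a single-datanode
  // cluster the namenode has nothing more to offer, so this always fails.
  if (nodes.length != original.length + 1) {
    throw new IOException("Failed to add a datanode: "
        + "nodes.length != original.length + 1, nodes="
        + Arrays.asList(nodes) + ", original=" + Arrays.asList(original));
  }
}
{code}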





[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245601#comment-13245601 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3179:
----------------------------------------------

I think the problem is one datanode with replication 3.  What should the user 
expect?  It seems that users won't be happy if we do not allow append.  
However, if we allow appending to a single replica and the replica becomes 
corrupted, then data loss is possible - I can imagine, in some extreme cases, 
that a user is appending to a single replica slowly, an admin adds more 
datanodes later on, but the block won't be replicated since the file is not 
closed, and then the datanode with the single replica fails.  Is this case 
acceptable to you?

> So from the user's point of view, the first append succeeds while the second 
> fails. Is that a good idea?

The distinction is whether there is pre-append data.  In the second append 
there is pre-append data in the replica.  The pre-append data was in a closed 
file, and if the datanode fails during the append, that data could be lost.  
In the first append, however, there is no pre-append data.  If the append 
fails and the new replica is lost, that is sort of okay, since only the new 
data is lost.

The add-datanode feature is there to prevent data loss on pre-append data.  
Users (or admins) could turn it off, as mentioned in HDFS-3091.  I think we 
may improve the error message.  Is that good enough?  Or any suggestions?
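
For reference, a minimal sketch of turning the feature off on the client, assuming the dfs.client.block.write.replace-datanode-on-failure.* properties discussed in HDFS-3091:
{code}
// Sketch only: disables the add-datanode check on the client, assuming the
// dfs.client.block.write.replace-datanode-on-failure.* properties discussed
// in HDFS-3091. Trade-off: append works on tiny clusters, but the
// pre-append data guarantee described above is lost.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AppendOnSmallCluster {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", false);
    // With the check disabled, the client keeps writing to the surviving
    // pipeline instead of trying to add a replacement datanode.
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Using filesystem: " + fs.getUri());
  }
}
{code}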





[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Zhanwei.Wang (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245755#comment-13245755 ]

Zhanwei.Wang commented on HDFS-3179:
------------------------------------

I totally agree with you about the problem of one datanode with replication 3; 
I think this kind of operation should fail, or at least produce a warning.

My opinion is that the purpose of the policy check is to make sure there is no 
potential data loss. In this one-datanode, three-replica case, although a 
failure of the first append would not cause data loss, the data written by the 
first successful append is in danger, because there is only one replica 
instead of the three the user expected. And there is no warning to tell the 
user the truth.

My suggestion is to make the first write to the empty file fail if there are 
not enough datanodes; in other words, make the policy check stricter. And make 
the error message friendlier than "nodes.length != original.length + 1".








[jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, nodes.length != original.length + 1 on single datanode cluster

2012-04-03 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245920#comment-13245920 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3179:
----------------------------------------------

> ..., the appended data after the first successful append is in danger ...

You are right, but it is the same as creating a new file.  We should not make 
any change unless we also want to change the behavior of create(..).

> ... And make the error message friendlier than "nodes.length != 
> original.length + 1".

Agreed.  I will change the error message.
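
A hypothetical sketch of what a friendlier message could look like, e.g. naming the knob that controls the policy (illustrative only; not the wording actually committed for this issue, though the property name is the one discussed in HDFS-3091):
{code}
// Hypothetical friendlier wording (illustrative only; not the text that
// was actually committed for this issue).
throw new IOException("Failed to replace a datanode in the write pipeline:"
    + " no more good datanodes are available to try (nodes="
    + Arrays.asList(nodes) + ", original=" + Arrays.asList(original) + ")."
    + " A client may adjust this behavior via the"
    + " dfs.client.block.write.replace-datanode-on-failure configuration"
    + " properties (see HDFS-3091).");
{code}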


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira