[jira] [Commented] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-16 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659355#comment-13659355
 ] 

Tian Hong Wang commented on HDFS-4787:
--

Thanks, Aaron, for the commit.

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.5-beta

 Attachments: HDFS-4787-trunk.patch, HDFS-4787-trunk-v1.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.
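
As an illustration only, here is a minimal sketch of that per-test setup under 
JUnit 4 (assuming org.junit.Before and org.apache.hadoop.hdfs.HdfsConfiguration 
are imported; the field name conf is illustrative, not taken from the patch):

private Configuration conf;

@Before
public void setUp() {
  // Fresh configuration per testcase, so a parameter set by one test
  // cannot leak into the next one.
  conf = new HdfsConfiguration();
}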

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4815) TestRBWBlockInvalidation#testBlockInvalidationWhenRBWReplicaMissedInDN: Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-15 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659057#comment-13659057
 ] 

Tian Hong Wang commented on HDFS-4815:
--

Thanks, Chris, for your comment.

 TestRBWBlockInvalidation#testBlockInvalidationWhenRBWReplicaMissedInDN: 
 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the corrupt block during this sleep time.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas in assertEquals()
 3) I sometimes hit testcase timeouts, so I speed up the block report interval
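
To make the race concrete, here is a minimal sketch of the reworked wait loop 
from points 1) and 2): sample countReplicas() once per iteration and assert on 
the sampled value rather than re-querying the namesystem afterwards 
(countReplicas, namesystem and blk are the test helpers quoted above; the 
actual patch may differ):

int corruptReplicas = 0;
while (corruptReplicas == 0) {
  // Sample once; a later re-query could observe the block already recovered.
  corruptReplicas = countReplicas(namesystem, blk).corruptReplicas();
  if (corruptReplicas == 0) {
    Thread.sleep(100);  // sleep only while nothing has been reported yet
  }
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    corruptReplicas);

Point 3) amounts to a configuration tweak along these lines, where conf is the 
test's Configuration and the interval value is purely illustrative:

conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 300L);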

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-13 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Description: 
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the corrupt block during this sleep time.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas in assertEquals()
3) I sometimes hit testcase timeouts, so I speed up the block report interval


  was:
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the corrupt block during this sleep time.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas in assertEquals()
3) I sometimes hit testcase timeouts, so I speed up the block report interval



 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the corrupt block during this sleep time.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas in assertEquals()
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-12 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Description: 
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the corrupt block during this sleep time.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas in assertEquals()
3) I sometimes hit testcase timeouts, so I speed up the block report interval


  was:
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the corrupt block during this time.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas
3) I sometimes hit testcase timeouts, so I speed up the block report interval



 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the corrupt block during this sleep time.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas in assertEquals()
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-10 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4815:


 Summary: Double call countReplicas() to fetch corruptReplicas and 
liveReplicas is not needed
 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang


In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the block.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas
3) I sometimes hit testcase timeouts, so I speed up the block report interval


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Attachment: HDFS-4815.patch

 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the block.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Status: Patch Available  (was: Open)

 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the block.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Labels: patch  (was: )

 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the block.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4815) Double call countReplicas() to fetch corruptReplicas and liveReplicas is not needed

2013-05-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4815:
-

Description: 
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the corrupt block during this time.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas
3) I sometimes hit testcase timeouts, so I speed up the block report interval


  was:
In TestRBWBlockInvalidation, the original code is:
while (!isCorruptReported) {
  if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
    isCorruptReported = true;
  }
  Thread.sleep(100);
}
assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
    countReplicas(namesystem, blk).corruptReplicas());

Once the program detects that one corruptReplica exists, it breaks out of the 
while loop. After that, it calls countReplicas() again in assertEquals(). But 
sometimes I hit the following error:
java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
expected:<1> but was:<0>

Obviously, by the time of the second countReplicas() call in assertEquals(), 
the corruptReplicas value has already changed: the program went to sleep and 
the BlockManager recovered the block.

So what I do is:
1) once one corruptReplica is detected, break the loop without calling 
sleep(); the same for liveReplicas
2) don't double-check the countReplicas & liveReplicas
3) I sometimes hit testcase timeouts, so I speed up the block report interval



 Double call countReplicas() to fetch corruptReplicas and liveReplicas is not 
 needed
 ---

 Key: HDFS-4815
 URL: https://issues.apache.org/jira/browse/HDFS-4815
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4815.patch


 In TestRBWBlockInvalidation, the original code is:
 while (!isCorruptReported) {
   if (countReplicas(namesystem, blk).corruptReplicas() > 0) {
     isCorruptReported = true;
   }
   Thread.sleep(100);
 }
 assertEquals("There should be 1 replica in the corruptReplicasMap", 1,
     countReplicas(namesystem, blk).corruptReplicas());
 Once the program detects that one corruptReplica exists, it breaks out of the 
 while loop. After that, it calls countReplicas() again in assertEquals(). But 
 sometimes I hit the following error:
 java.lang.AssertionError: There should be 1 replica in the corruptReplicasMap 
 expected:<1> but was:<0>
 Obviously, by the time of the second countReplicas() call in assertEquals(), 
 the corruptReplicas value has already changed: the program went to sleep and 
 the BlockManager recovered the corrupt block during this time.
 So what I do is:
 1) once one corruptReplica is detected, break the loop without calling 
 sleep(); the same for liveReplicas
 2) don't double-check the countReplicas & liveReplicas
 3) I sometimes hit testcase timeouts, so I speed up the block report interval

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-09 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Open  (was: Patch Available)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-09 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787-trunk-v1.patch

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-trunk-v1.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-09 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Patch Available  (was: Open)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-trunk-v1.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-09 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13653493#comment-13653493
 ] 

Tian Hong Wang commented on HDFS-4787:
--

Yes, Aaron, I didn't notice the spaces within the method. Thanks for your comment.

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-trunk-v1.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: (was: HDFS-4787.patch)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-08 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652732#comment-13652732
 ] 

Tian Hong Wang commented on HDFS-4787:
--

Aaron, thanks for your comment. You can have a look at my final submitted patch, 
HDFS-4787-trunk.patch; it shows that the indentation is 2 spaces. 

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787-trunk.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4709) TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6

2013-05-07 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651494#comment-13651494
 ] 

Tian Hong Wang commented on HDFS-4709:
--

Yes, Chris, it's a duplicate of HDFS-4787, so I'm resolving it as a duplicate.

 TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6
 -

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch, HDFS-4709-trunk.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec  <<< ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6
 -

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch, HDFS-4709-trunk.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec  <<< ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Open  (was: Patch Available)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787-v2.patch

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Patch Available  (was: Open)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651541#comment-13651541
 ] 

Tian Hong Wang commented on HDFS-4787:
--

Sure, Chris, I have changed the indentation. Thanks for your comment.

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch, 
 HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Patch Available  (was: Open)

patch against trunk.

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-trunk.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787-trunk.patch

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-trunk.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Fix Version/s: (was: 2.0.4-alpha)
   Status: Open  (was: Patch Available)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-trunk.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-07 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: (was: HDFS-4787.java)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4787.patch, HDFS-4787-trunk.patch, 
 HDFS-4787-v1.patch, HDFS-4787-v2.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 & ibmX509 in HsftpFileSystem.java

2013-05-05 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649496#comment-13649496
 ] 

Tian Hong Wang commented on HDFS-4730:
--

Any update on this patch?

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4730_trunk.patch, HDFS-4730_trunk.patch, 
 HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.
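
For illustration, the portable pattern is roughly the following, using the 
SSLFactory.SSLCERTIFICATE constant named above (it resolves to ibmX509 on IBM 
JDKs and SunX509 elsewhere) instead of a hard-coded algorithm name:

import javax.net.ssl.KeyManagerFactory;
import org.apache.hadoop.security.ssl.SSLFactory;

// Hard-coding "SunX509" fails on IBM Java, which expects "ibmX509";
// resolving the name at runtime works on both vendors' JDKs.
KeyManagerFactory kmf =
    KeyManagerFactory.getInstance(SSLFactory.SSLCERTIFICATE);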

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4787:


 Summary: Create a new HdfsConfiguration before each 
TestDFSClientRetries testcases
 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang


It's better to create a new HdfsConfiguration before each testcase in 
TestDFSClientRetries, in case a configuration parameter set by one testcase 
impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787.patch

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4787.java, HDFS-4787.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787.java

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4787.java, HDFS-4787.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

   Fix Version/s: 2.0.4-alpha
Target Version/s: 2.0.5-beta
  Status: Patch Available  (was: Open)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Labels: patch  (was: )

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Patch Available  (was: Open)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Status: Open  (was: Patch Available)

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4787:
-

Attachment: HDFS-4787-v1.patch

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4787) Create a new HdfsConfiguration before each TestDFSClientRetries testcases

2013-05-02 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648133#comment-13648133
 ] 

Tian Hong Wang commented on HDFS-4787:
--

Thank you, Andrew, for your comment.

 Create a new HdfsConfiguration before each TestDFSClientRetries testcases
 -

 Key: HDFS-4787
 URL: https://issues.apache.org/jira/browse/HDFS-4787
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4787.java, HDFS-4787.patch, HDFS-4787-v1.patch


 It's better to create a new HdfsConfiguration before each testcase in 
 TestDFSClientRetries, in case a configuration parameter set by one testcase 
 impacts another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4709) TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6

2013-05-01 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13647224#comment-13647224
 ] 

Tian Hong Wang commented on HDFS-4709:
--

Any update on this patch?

 TestDFSClientRetries-testGetFileChecksum fails on IBM JAVA 6
 -

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch, HDFS-4709-trunk.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec  <<< ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-23 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Labels: patch  (was: )

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-23 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Status: Open  (was: Patch Available)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-23 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Fix Version/s: (was: 2.0.3-alpha)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-23 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Status: Patch Available  (was: Open)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4730_trunk.patch, HDFS-4730_trunk.patch, 
 HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-23 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Attachment: HDFS-4730_trunk.patch

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4730_trunk.patch, HDFS-4730_trunk.patch, 
 HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4730:


 Summary: KeyManagerFactory.getInstance supports SunX509 ibmX509 
in HsftpFileSystem.java
 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang


In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
load the algorithm name dynamically.
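
As a hedged illustration of that idea (the class and field names below are 
assumptions made for the sketch; the actual patch exposes the constant as 
SSLFactory.SSLCERTIFICATE inside Hadoop's ssl package):

{noformat}
import javax.net.ssl.KeyManagerFactory;

public class KeyManagerAlgorithmSketch {
  // Assumed vendor check: IBM JDKs report "IBM" in the java.vendor property.
  private static final boolean IBM_JAVA =
      System.getProperty("java.vendor", "").contains("IBM");

  // "SunX509" exists only on Oracle/OpenJDK; IBM JDKs ship "ibmX509".
  public static final String SSLCERTIFICATE =
      IBM_JAVA ? "ibmX509" : "SunX509";

  public static KeyManagerFactory newKeyManagerFactory() throws Exception {
    // Selects the right X.509 algorithm name for the running JVM.
    return KeyManagerFactory.getInstance(SSLCERTIFICATE);
  }
}
{noformat}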

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Summary: KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
HsftpFileSystem.java  (was: KeyManagerFactory.getInstance supports SunX509 
ibmX509 in HsftpFileSystem.java)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4730_trunk.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Attachment: HDFS-4730_trunk.patch

 KeyManagerFactory.getInstance supports SunX509 ibmX509 in 
 HsftpFileSystem.java
 ---

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4730_trunk.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Status: Patch Available  (was: Open)

 KeyManagerFactory.getInstance supports SunX509 ibmX509 in 
 HsftpFileSystem.java
 ---

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4730_trunk.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Attachment: HDFS-4730-v1.patch

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Fix Version/s: 2.0.3-alpha
   Status: Patch Available  (was: Open)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4730:
-

Status: Open  (was: Patch Available)

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Attachments: HDFS-4730_trunk.patch, HDFS-4730-v1.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load the algorithm name dynamically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4723) Occasional failure in TestDFSClientRetries#testGetFileChecksum because the number of available xcievers is set too low

2013-04-21 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13637776#comment-13637776
 ] 

Tian Hong Wang commented on HDFS-4723:
--

Hi, Andrew. I met the same problem as you did in HDFS-4709, but once I construct 
a clean Configuration object before each unit test begins, it runs well.

 Occasional failure in TestDFSClientRetries#testGetFileChecksum because the 
 number of available xcievers is set too low
 --

 Key: HDFS-4723
 URL: https://issues.apache.org/jira/browse/HDFS-4723
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 4723-branch-2.patch, 4723.patch


 Occasional failure in TestDFSClientRetries#testGetFileChecksum because the 
 number of available xcievers is set too low. 
 {noformat}
 2013-04-21 18:48:28,273 WARN  datanode.DataNode 
 (DataXceiverServer.java:run(161)) - 127.0.0.1:37608:DataXceiverServer: 
 java.io.IOException: Xceiver count 3 exceeds the limit of concurrent 
 xcievers: 2
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:143)
   at java.lang.Thread.run(Thread.java:662)
 2013-04-21 18:48:28,274 INFO  datanode.DataNode 
 (DataXceiver.java:writeBlock(453)) - Datanode 2 got response for connect ack  
 from downstream datanode with firstbadlink as 127.0.0.1:37608
 2013-04-21 18:48:28,276 INFO  datanode.DataNode 
 (DataXceiver.java:writeBlock(491)) - Datanode 2 forwarding connect ack to 
 upstream firstbadlink is 127.0.0.1:37608
 2013-04-21 18:48:28,276 ERROR datanode.DataNode 
 (DataXceiver.java:writeBlock(477)) - 
 DataNode{data=FSDataset{dirpath='[/home/ec2-user/jenkins/workspace/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current,
  
 /home/ec2-user/jenkins/workspace/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current]'},
  localName='127.0.0.1:33298', 
 storageID='DS-1506063529-10.174.86.97-33298-1366570107286', 
 xmitsInProgress=0}:Exception transfering block 
 BP-2121022065-10.174.86.97-1366570107029:blk_6876843860808656778_1071 to 
 mirror 127.0.0.1:37608: java.io.EOFException: Premature EOF: no length prefix 
 available
 2013-04-21 18:48:28,276 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:createBlockOutputStream(1105)) - Exception in 
 createBlockOutputStream
 java.io.IOException: Bad connect ack with firstBadLink as 127.0.0.1:37608
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1096)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1019)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
 2013-04-21 18:48:28,276 INFO  datanode.DataNode 
 (DataXceiver.java:writeBlock(537)) - opWriteBlock 
 BP-2121022065-10.174.86.97-1366570107029:blk_6876843860808656778_1071 
 received exception java.io.EOFException: Premature EOF: no length prefix 
 available
 2013-04-21 18:48:28,277 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(674)) - Exception for 
 BP-2121022065-10.174.86.97-1366570107029:blk_6876843860808656778_1071
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:644)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:65)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
   at java.lang.Thread.run(Thread.java:662)
 2013-04-21 18:48:28,277 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:nextBlockOutputStream(1022)) - Abandoning 
 BP-2121022065-10.174.86.97-1366570107029:blk_6876843860808656778_1071
 2013-04-21 18:48:28,277 ERROR datanode.DataNode (DataXceiver.java:run(223)) - 
 127.0.0.1:33298:DataXceiver error processing WRITE_BLOCK operation  src: 
 /127.0.0.1:55182 dest: /127.0.0.1:33298
 java.io.EOFException: Premature EOF: no length prefix available

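The quoted log is truncated above. As a hedged sketch of the kind of fix the 
attached patches imply (the config key and value here are assumptions, not 
read from 4723.patch), the test can raise the DataNode transfer-thread 
(xciever) limit before the mini-cluster starts:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class XcieverLimitSketch {
  static MiniDFSCluster startCluster() throws Exception {
    // Give the DataNodes enough concurrent xcievers for a 3-node pipeline.
    Configuration conf = new HdfsConfiguration();
    conf.setInt(DFSConfigKeys.DFS_DATANODE_MAX_RECEIVER_THREADS_KEY, 4096);
    return new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
  }
}
{noformat}
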
[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails

2013-04-20 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Fix Version/s: (was: 2.0.3-alpha)
Affects Version/s: (was: 2.0.3-alpha)
   Status: Open  (was: Patch Available)

 TestDFSClientRetries#testGetFileChecksum fails
 --

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails

2013-04-20 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Attachment: HDFS-4709-trunk.patch

Added a patch against trunk.

 TestDFSClientRetries#testGetFileChecksum fails
 --

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch, HDFS-4709-trunk.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails

2013-04-20 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Status: Patch Available  (was: Open)

 TestDFSClientRetries#testGetFileChecksum fails
 --

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4709.patch, HDFS-4709-trunk.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails

2013-04-19 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Target Version/s: 2.0.5-beta  (was: 2.0.3-alpha)

 TestDFSClientRetries#testGetFileChecksum fails
 --

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum

2013-04-18 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Summary: TestDFSClientRetries#testGetFileChecksum  (was: 
TestDFSClientRetries#testGetFileChecksum fails using IBM java 6)

 TestDFSClientRetries#testGetFileChecksum
 

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails

2013-04-18 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Summary: TestDFSClientRetries#testGetFileChecksum fails  (was: 
TestDFSClientRetries#testGetFileChecksum)

 TestDFSClientRetries#testGetFileChecksum fails
 --

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails using IBM java 6

2013-04-17 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4709:


 Summary: TestDFSClientRetries#testGetFileChecksum fails using IBM 
java 6
 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
 Fix For: 2.0.3-alpha


testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time elapsed: 
3993 sec   ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/testGetFileChecksum could only be replicated to 0 nodes instead of 
minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
at 
java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)

at org.apache.hadoop.ipc.Client.call(Client.java:1235)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails using IBM java 6

2013-04-17 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Affects Version/s: 2.0.3-alpha

 TestDFSClientRetries#testGetFileChecksum fails using IBM java 6
 ---

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails using IBM java 6

2013-04-17 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Attachment: HDFS-4709.patch

The main reason is that the whole unit test class uses a single Configuration 
object, so configuration set by an earlier unit test can leak into and affect 
the later ones. testGetFileChecksum() appears to be affected by 
DFSConfigKeys.DFS_CLIENT_BLOCK_WRITE_LOCATEFOLLOWINGBLOCK_RETRIES_KEY, so the 
test class should construct a clean Configuration object before each unit test 
begins.
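
A minimal sketch of that approach (JUnit 4; illustrative only, not lifted 
verbatim from the attached patch):

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.junit.Before;

public class TestDFSClientRetriesSketch {
  private Configuration conf;

  @Before
  public void setUp() {
    // A fresh HdfsConfiguration per test: a key such as
    // DFS_CLIENT_BLOCK_WRITE_LOCATEFOLLOWINGBLOCK_RETRIES_KEY set by one
    // test can no longer leak into the tests that run after it.
    conf = new HdfsConfiguration();
  }
}
{noformat}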

 TestDFSClientRetries#testGetFileChecksum fails using IBM java 6
 ---

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4709) TestDFSClientRetries#testGetFileChecksum fails using IBM java 6

2013-04-17 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4709:
-

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

 TestDFSClientRetries#testGetFileChecksum fails using IBM java 6
 ---

 Key: HDFS-4709
 URL: https://issues.apache.org/jira/browse/HDFS-4709
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4709.patch


 testGetFileChecksum(org.apache.hadoop.hdfs.TestDFSClientRetries)  Time 
 elapsed: 3993 sec   ERROR!
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /testGetFileChecksum could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are 
 excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2186)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 at org.apache.hadoop.ipc.Client.call(Client.java:1235)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
 at $Proxy10.addBlock(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
 at $Proxy10.addBlock(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4681:


 Summary: 
TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
using IBM java
 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
 Fix For: 2.0.3-alpha


TestBlocksWithNotEnoughRacks unit test fails with the following error message:

testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
  Time elapsed: 8997 sec   FAILURE!
org.junit.ComparisonFailure: Corrupt replica 
expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
 �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
 �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
   ��6b�S�G�^?��m4FW#^@
D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
at org.junit.Assert.assertEquals(Assert.java:123)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)


The root cause is that the unit test code uses the in.read() method to read 
the block content character by character, which drops the LF characters. The 
better approach is to read the block content into a buffer.
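
For illustration, a hedged sketch of the buffered approach (the stream and 
length variables are assumptions, not lifted from HDFS-4681.patch):

{noformat}
import java.io.InputStream;
import java.util.Arrays;
import org.apache.hadoop.io.IOUtils;
import static org.junit.Assert.assertTrue;

public class BlockCompareSketch {
  // Read each replica fully into a byte[] instead of char-by-char in.read(),
  // so no bytes (e.g. line feeds) are dropped, then compare the raw bytes.
  static void assertSameBlock(InputStream expectedIn, InputStream actualIn,
      int blockLen) throws Exception {
    byte[] expected = new byte[blockLen];
    byte[] actual = new byte[blockLen];
    IOUtils.readFully(expectedIn, expected, 0, blockLen);
    IOUtils.readFully(actualIn, actual, 0, blockLen);
    assertTrue("Corrupt replica", Arrays.equals(expected, actual));
  }
}
{noformat}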



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Description: 
TestBlocksWithNotEnoughRacks unit test fails with the following error message:

testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
  Time elapsed: 8997 sec   FAILURE!
org.junit.ComparisonFailure: Corrupt replica 
expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
 �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
 �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
   ��6b�S�G�^?��m4FW#^@
D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
at org.junit.Assert.assertEquals(Assert.java:123)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)




  was:
TestBlocksWithNotEnoughRacks unit test fails with the following error message:

testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
  Time elapsed: 8997 sec   FAILURE!
org.junit.ComparisonFailure: Corrupt replica 
expected:<...[raw binary block bytes; elided]> but 
was:<...[raw binary block bytes with several bytes missing; elided]>
at org.junit.Assert.assertEquals(Assert.java:123)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)


The root cause is that the unit test code uses the in.read() method to read 
the block content character by character, which drops the LF characters. So 
the best fix is to read the block content into a byte buffer.




 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent 

[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Attachment: HDFS-4681.patch

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13627580#comment-13627580
 ] 

Tian Hong Wang commented on HDFS-4681:
--

The root cause is that the unit test code uses the in.read() method to read 
the block content character by character, which drops the LF characters. So 
the best fix is to read the block content into a byte buffer.

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Status: Patch Available  (was: Open)

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Assignee: Tian Hong Wang
  Status: Open  (was: Patch Available)

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch, HDFS-4681-v1.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Attachment: HDFS-4681-v1.patch

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch, HDFS-4681-v1.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4681:
-

Status: Patch Available  (was: Open)

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch, HDFS-4681-v1.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628663#comment-13628663
 ] 

Tian Hong Wang commented on HDFS-4681:
--

Sure, Todd: added a utility to DFSTestUtil that returns the file contents as 
byte[], and changed the String comparison to a byte[] comparison.

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4681.patch, HDFS-4681-v1.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:<...[raw binary block bytes; elided]> but 
 was:<...[raw binary block bytes with several bytes missing; elided]>
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4669:


 Summary: 
org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java
 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
 Fix For: 2.0.3-alpha


TestBlockPoolManager unit test fails with the following error message using IBM 
java:
testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
  Time elapsed: 27 sec   FAILURE!
org.junit.ComparisonFailure: expected:<stop #[1
refresh #2]
> but was:<stop #[2
refresh #1]
>


The root cause is:
(1) If we want to remove the first nameservice and keep the second, the test 
should call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").

(2) HashMap & HashSet store their entries in no guaranteed order, so IBM java 
& Oracle java can return the keys and values in different orders, which makes 
the relative order of ns1 & ns2 nondeterministic. The code should use 
LinkedHashMap & LinkedHashSet to preserve insertion order.
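
As a hedged illustration of point (2) (a standalone sketch, not the 
TestBlockPoolManager code itself): iteration order over a HashMap is 
unspecified and may differ between JVM implementations, while a LinkedHashMap 
always replays insertion order.

    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class OrderDemo {
      public static void main(String[] args) {
        // HashMap iteration order is unspecified and can differ
        // between JVMs (e.g. IBM java vs. Oracle java).
        Map<String, String> unordered = new HashMap<String, String>();
        unordered.put("ns1", "first nameservice");
        unordered.put("ns2", "second nameservice");
        System.out.println("HashMap order:       " + unordered.keySet());

        // LinkedHashMap replays insertion order on every JVM, which
        // is what an order-sensitive test assertion needs.
        Map<String, String> ordered = new LinkedHashMap<String, String>();
        ordered.put("ns1", "first nameservice");
        ordered.put("ns2", "second nameservice");
        System.out.println("LinkedHashMap order: " + ordered.keySet());
      }
    }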



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 TestBlockPoolManager unit test fails with the following error message using 
 IBM java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]
 > but was:<stop #[2
 refresh #1]
 >
 
 The root cause is:
 (1) If we want to remove the first nameservice and keep the second, the test 
 should call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in no guaranteed order, so IBM java 
 & Oracle java can return the keys and values in different orders, which makes 
 the relative order of ns1 & ns2 nondeterministic. The code should use 
 LinkedHashMap & LinkedHashSet to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Attachment: HADOOP-4669.patch

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 TestBlockPoolManager unit test fails with the following error message using 
 IBM java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]
 > but was:<stop #[2
 refresh #1]
 >
 
 The root cause is:
 (1) If we want to remove the first nameservice and keep the second, the test 
 should call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in no guaranteed order, so IBM java 
 & Oracle java can return the keys and values in different orders, which makes 
 the relative order of ns1 & ns2 nondeterministic. The code should use 
 LinkedHashMap & LinkedHashSet to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Target Version/s: 2.0.5-beta  (was: 2.0.3-alpha)

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 TestBlockPoolManager unit test fails with the following error message using 
 IBM java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]
 > but was:<stop #[2
 refresh #1]
 >
 
 The root cause is:
 (1) If we want to remove the first nameservice and keep the second, the test 
 should call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in no guaranteed order, so IBM java 
 & Oracle java can return the keys and values in different orders, which makes 
 the relative order of ns1 & ns2 nondeterministic. The code should use 
 LinkedHashMap & LinkedHashSet to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira