[jira] [Assigned] (HDFS-5562) TestCacheDirectives fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-5562:
---

Assignee: Akira AJISAKA

 TestCacheDirectives fails on trunk
 --

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831248#comment-13831248
 ] 

Akira AJISAKA commented on HDFS-5562:
-

I'll attach a patch to skip those tests.
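
As a side note, a minimal sketch of one common way to skip tests that need 
native code, assuming JUnit 4 and Hadoop's NativeCodeLoader (the class name 
below is made up for illustration, not the actual patch):
{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;

public class TestCacheDirectivesNativeCheck {
  @Before
  public void requireNativeCode() {
    // Marks every test in this class as skipped (rather than failed)
    // when libhadoop.so cannot be loaded on the build machine.
    assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
  }
}
{code}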

 TestCacheDirectives fails on trunk
 --

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5562:


Summary: TestCacheDirectives and TestFsDatasetCache fails on trunk  (was: 
TestCacheDirectives fails on trunk)

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5562:


Attachment: HDFS-5562.v1.patch

Looks like you already started... anyway, here is my patch, just for reference :)

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-25 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831263#comment-13831263
 ] 

Binglin Chang commented on HDFS-5558:
-

Is this related to HDFS-4882 ? It has been left there for a long time. 

 LeaseManager monitor thread can crash if the last block is complete but 
 another block is not.
 -

 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch


 As mentioned in HDFS-5557, if a file has its last and penultimate block not 
 completed and the file is being closed, the last block may be completed but 
 the penultimate one might not. If this condition lasts long and the file is 
 abandoned, LeaseManager will try to recover the lease and the block. But 
 {{internalReleaseLease()}} will fail with invalid cast exception with this 
 kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5562:


Assignee: Binglin Chang  (was: Akira AJISAKA)

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831278#comment-13831278
 ] 

Akira AJISAKA commented on HDFS-5562:
-

[~decster], thanks for attaching the patch. LGTM, +1.

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5562:


Status: Patch Available  (was: Open)

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5562:


Hadoop Flags: Reviewed

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-25 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831290#comment-13831290
 ] 

Vinay commented on HDFS-5558:
-

HDFS-4882 is more related to {{dfs.namenode.replication.min=2}}, where an 
extra datanode is not added during PIPELINE_CLOSE_RECOVERY.

I agree that not adding extra datanodes can send {{checkLeases()}} into an 
infinite loop, but with the fix given in this jira that situation may not 
arise: the client's close() now has a timeout and fails once it expires.

 LeaseManager monitor thread can crash if the last block is complete but 
 another block is not.
 -

 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch


 As mentioned in HDFS-5557, if a file has its last and penultimate block not 
 completed and the file is being closed, the last block may be completed but 
 the penultimate one might not. If this condition lasts long and the file is 
 abandoned, LeaseManager will try to recover the lease and the block. But 
 {{internalReleaseLease()}} will fail with invalid cast exception with this 
 kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831384#comment-13831384
 ] 

Hadoop QA commented on HDFS-5562:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615554/HDFS-5562.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5559//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5559//console

This message is automatically generated.

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5526) Datanode cannot roll back to previous layout version

2013-11-25 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5526:
-

Fix Version/s: 0.23.10
   2.3.0
   3.0.0

 Datanode cannot roll back to previous layout version
 

 Key: HDFS-5526
 URL: https://issues.apache.org/jira/browse/HDFS-5526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: HDFS-5526.patch, HDFS-5526.patch


 Current trunk layout version is -48.
 Hadoop v2.2.0 layout version is -47.
 If a cluster is upgraded from v2.2.0 (-47) to trunk (-48), the datanodes 
 cannot start with -rollback.  It will fail with IncorrectVersionException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5526) Datanode cannot roll back to previous layout version

2013-11-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831562#comment-13831562
 ] 

Kihwal Lee commented on HDFS-5526:
--

[~szetszwo], I've committed this to branch-2, branch-0.23 and trunk. Please 
pull it into other branches as you see fit and resolve this jira.

 Datanode cannot roll back to previous layout version
 

 Key: HDFS-5526
 URL: https://issues.apache.org/jira/browse/HDFS-5526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: HDFS-5526.patch, HDFS-5526.patch


 Current trunk layout version is -48.
 Hadoop v2.2.0 layout version is -47.
 If a cluster is upgraded from v2.2.0 (-47) to trunk (-48), the datanodes 
 cannot start with -rollback.  It will fail with IncorrectVersionException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5526) Datanode cannot roll back to previous layout version

2013-11-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831565#comment-13831565
 ] 

Kihwal Lee commented on HDFS-5526:
--

I made a typo in the checkin comment, so commit messages are not showing up here.

 Datanode cannot roll back to previous layout version
 

 Key: HDFS-5526
 URL: https://issues.apache.org/jira/browse/HDFS-5526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: HDFS-5526.patch, HDFS-5526.patch


 Current trunk layout version is -48.
 Hadoop v2.2.0 layout version is -47.
 If a cluster is upgraded from v2.2.0 (-47) to trunk (-48), the datanodes 
 cannot start with -rollback.  It will fail with IncorrectVersionException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5526) Datanode cannot roll back to previous layout version

2013-11-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831570#comment-13831570
 ] 

Hudson commented on HDFS-5526:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4789 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4789/])
HDFS-5526. Datanode cannot roll back to previous layout version. Contributed by 
Kihwal Lee. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545322)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java


 Datanode cannot roll back to previous layout version
 

 Key: HDFS-5526
 URL: https://issues.apache.org/jira/browse/HDFS-5526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: HDFS-5526.patch, HDFS-5526.patch


 Current trunk layout version is -48.
 Hadoop v2.2.0 layout version is -47.
 If a cluster is upgraded from v2.2.0 (-47) to trunk (-48), the datanodes 
 cannot start with -rollback.  It will fail with IncorrectVersionException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831589#comment-13831589
 ] 

Kihwal Lee commented on HDFS-5558:
--

bq. I tried to reproduce the problem as mentioned with the help of test changes 
in HDFS-5557, but could not get the Invalid Cast Exception in trunk code. 

The test won't reproduce this issue because the client will lose every time.  
In order to reproduce this, the client has to lose the race in HDFS-5557 for 
the penultimate block and the datanode has to win the race for the last block.  
I.e. produce block layout 
[...][BlockInfoUnderConstruction:COMMITTED][BlockInfo:COMPLETE].
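
To restate the required layout as a sketch (the enum and array here are 
illustrative only, not actual HDFS types):
{code}
// Every earlier block is COMPLETE, the penultimate block is stuck in
// COMMITTED (client lost the HDFS-5557 race), and the last block is
// COMPLETE (datanode won the race).
enum State { COMPLETE, COMMITTED }

class CrashLayout {
  static final State[] BLOCKS = {
      State.COMPLETE,   // [...]
      State.COMMITTED,  // penultimate: BlockInfoUnderConstruction
      State.COMPLETE    // last: BlockInfo
  };
}
{code}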

 LeaseManager monitor thread can crash if the last block is complete but 
 another block is not.
 -

 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch


 As mentioned in HDFS-5557, if a file has its last and penultimate block not 
 completed and the file is being closed, the last block may be completed but 
 the penultimate one might not. If this condition lasts long and the file is 
 abandoned, LeaseManager will try to recover the lease and the block. But 
 {{internalReleaseLease()}} will fail with invalid cast exception with this 
 kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5533) Symlink delete/create should be treated as DELETE/CREATE in snapshot diff report

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831620#comment-13831620
 ] 

Colin Patrick McCabe commented on HDFS-5533:


+1.  Thanks, Binglin.

 Symlink delete/create should be treated as DELETE/CREATE in snapshot diff 
 report
 

 Key: HDFS-5533
 URL: https://issues.apache.org/jira/browse/HDFS-5533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-5533.patch, HDFS-5533.v2.patch, HDFS-5533.v2.patch


 Currently the code treats symlink delete/create as MODIFY, but a symlink is 
 immutable, so these should be CREATE and DELETE.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831624#comment-13831624
 ] 

Colin Patrick McCabe commented on HDFS-5541:


bq. All variables must be declared at the beginning of scope, and no (//) 
comments allowed

I support the idea of declaring all variables at the beginning of the scope.  
You can see that many of my patches have moved variables to this position.  The 
reason is more to avoid initialization errors than to conform to C89.  I simply 
am not aware of any compilers that can't handle this (gcc, LLVM, MSVC, etc. can 
handle it).

However, there are no C compilers still in use that don't recognize '//' 
comments.  So let's not change things that don't need to be changed.

bq. 1) If threads are not used why do a thread attach (when threads are not 
used all the thread attach nonsense is a waste of time and a performance 
killer)

That nonsense you're talking about is essential to how JNI works.  Threads 
need to be registered with the JVM.  You can read more on Oracle's web site 
here 
http://docs.oracle.com/javase/7/docs/technotes/guides/jni/spec/invocation.html :

bq. The JNI interface pointer (JNIEnv) is valid only in the current thread. 
Should another thread need to access the Java VM, it must first call 
AttachCurrentThread() to attach itself to the VM and obtain a JNI interface 
pointer. Once attached to the VM, a native thread works just like an ordinary 
Java thread running inside a native method. The native thread remains attached 
to the VM until it calls DetachCurrentThread() to detach itself.

bq. 2) The JVM init code should not be embedded within the context of every 
function call. The JVM init code should be in a stand-alone LIBINIT function 
that is only invoked once. The JVM * and the JNI * should be global variables 
for use when no threads are utilized.

Well, based on Oracle's documentation, you can see why the {{JNIEnv}} can't be 
a global variable.  Globals are unsuitable for libraries anyway, for reasons 
you should already know.  It would be nice to have an init function for 
libhdfs, but it would break compatibility, something which is very important to 
us.  Maybe someday when we do a complete, compatibility-breaking rethink of the 
API we can think about this.

bq. 4) Hash Table and Locking. Why? When threads are used the hash table 
locking is going to hurt performance. Why not use thread local storage for the 
hash table; that way no locking is required either with or without threads.

If you use thread-local storage, you'll have to store {{num_threads}} times as 
many class references.

bq. 5) FINALLY Windows Compatibility. Do not use posix features if they cannot 
easily be replaced on other platforms!!

Do you have any examples of this?

 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor

 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms, and many different compilers.
 1) The code should be cross platform (no Linux extras)
 2) The code should compile on standard c89 compilers, the
   {least common denominator rule applies here} !!
 C code with a .c extension should follow the rules of the C standard.
 All variables must be declared at the beginning of scope, and no (//) 
 comments allowed.
  I just spent a week white-washing the code back to normal C standards so 
  that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used why do a thread attach (when threads are not used 
 all the thread attach nonsense is a waste of time and a performance killer)
 2) The JVM init code should not be embedded within the context of every 
 function call. The JVM init code should be in a stand-alone LIBINIT 
 function that is only invoked once. The JVM * and the JNI * should be 
 global variables for use when no threads are utilized.
 3) When threads are utilized the attach function can use the GLOBAL jvm * 
 created by the LIBINIT {WHICH IS INVOKED ONLY ONCE} and thus safely 
 outside the scope of any LOOP that is using the functions.
 4) Hash Table and Locking. Why? When threads are used the hash table 
 locking is going to hurt performance. Why not use thread local storage for 
 the hash table; that way no locking is required either with or without 
 threads.
 5) FINALLY Windows Compatibility. Do not use posix features if they cannot 
 easily be replaced on other platforms!!



--
This 

[jira] [Commented] (HDFS-5533) Symlink delete/create should be treated as DELETE/CREATE in snapshot diff report

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831674#comment-13831674
 ] 

Jing Zhao commented on HDFS-5533:
-

+1 as well. I will commit the patch shortly.

 Symlink delete/create should be treated as DELETE/CREATE in snapshot diff 
 report
 

 Key: HDFS-5533
 URL: https://issues.apache.org/jira/browse/HDFS-5533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-5533.patch, HDFS-5533.v2.patch, HDFS-5533.v2.patch


 Currently the code treats symlink delete/create as MODIFY, but a symlink is 
 immutable, so these should be CREATE and DELETE.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5533) Symlink delete/create should be treated as DELETE/CREATE in snapshot diff report

2013-11-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5533:


   Resolution: Fixed
Fix Version/s: 2.3.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks Binglin for the fix! Thanks 
Colin for the review!

 Symlink delete/create should be treated as DELETE/CREATE in snapshot diff 
 report
 

 Key: HDFS-5533
 URL: https://issues.apache.org/jira/browse/HDFS-5533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5533.patch, HDFS-5533.v2.patch, HDFS-5533.v2.patch


 Currently the code treats symlink delete/create as MODIFY, but a symlink is 
 immutable, so these should be CREATE and DELETE.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5533) Symlink delete/create should be treated as DELETE/CREATE in snapshot diff report

2013-11-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5533:


Component/s: snapshots

 Symlink delete/create should be treated as DELETE/CREATE in snapshot diff 
 report
 

 Key: HDFS-5533
 URL: https://issues.apache.org/jira/browse/HDFS-5533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5533.patch, HDFS-5533.v2.patch, HDFS-5533.v2.patch


 Currently the code treats symlink delete/create as MODIFY, but a symlink is 
 immutable, so these should be CREATE and DELETE.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5533) Symlink delete/create should be treated as DELETE/CREATE in snapshot diff report

2013-11-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831687#comment-13831687
 ] 

Hudson commented on HDFS-5533:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4790/])
HDFS-5533. Symlink delete/create should be treated as DELETE/CREATE in snapshot 
diff report. Contributed by Binglin Chang. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545357)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 Symlink delete/create should be treated as DELETE/CREATE in snapshot diff 
 report
 

 Key: HDFS-5533
 URL: https://issues.apache.org/jira/browse/HDFS-5533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5533.patch, HDFS-5533.v2.patch, HDFS-5533.v2.patch


 Currently the code treats symlink delete/create as MODIFY, but a symlink is 
 immutable, so these should be CREATE and DELETE.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831710#comment-13831710
 ] 

Brandon Li commented on HDFS-5548:
--

The patch looks good in general. For the removed log traces, we may want to 
keep them for the sake of debugging. 

 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch


 Portmap uses a HashMap to store the port mapping. It synchronizes the access 
 of the hash map by locking itself. It can be simplified by using a 
 ConcurrentHashMap.
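
 A minimal sketch of the simplification (names are hypothetical, not the 
 actual Portmap code):
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 // A ConcurrentHashMap handles its own synchronization, so the owning
 // object no longer needs synchronized methods or explicit locks.
 class PortmapTable {
   private final ConcurrentMap<String, Integer> map =
       new ConcurrentHashMap<String, Integer>();

   void set(String name, int port) { map.put(name, port); }
   Integer get(String name)        { return map.get(name); }
   void unset(String name)         { map.remove(name); }
 }
 {code}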



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5551) Rename path.based caching configuration options

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831708#comment-13831708
 ] 

Colin Patrick McCabe commented on HDFS-5551:


I'm not sure if we want to rename these.  For example, if we rename 
{{dfs.namenode.path.based.cache.refresh.interval.ms}} to 
{{dfs.namenode.cache.refresh.interval.ms}}, it starts to sound awfully generic.

The rename that removed PathBased from a bunch of class names was just a 
matter of necessity: the names were getting too cumbersome, and we didn't 
think there would be conflicts.  With the config keys, a long name is not such 
a big problem since we only use them once or twice throughout the code.  And it 
does help to disambiguate them from all the other cache configurations we have 
(and there are many).

 Rename path.based caching configuration options
 -

 Key: HDFS-5551
 URL: https://issues.apache.org/jira/browse/HDFS-5551
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor

 Some configuration options still have the path.based moniker, missed during 
 the big rename removing this naming convention.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831745#comment-13831745
 ] 

Colin Patrick McCabe commented on HDFS-5558:


My understanding is that we can only get into this situation if there is 
another bug (such as HDFS-5557) causing an internal inconsistency.  With this 
in mind, I think the log message should be at ERROR level, not INFO, and should 
look different from the standard {{checkFileProgress}} log message.  It looks 
good aside from that.

 LeaseManager monitor thread can crash if the last block is complete but 
 another block is not.
 -

 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch


 As mentioned in HDFS-5557, if a file has its last and penultimate block not 
 completed and the file is being closed, the last block may be completed but 
 the penultimate one might not. If this condition lasts long and the file is 
 abandoned, LeaseManager will try to recover the lease and the block. But 
 {{internalReleaseLease()}} will fail with invalid cast exception with this 
 kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5563:


 Summary: NFS gateway should commit the buffered data when read 
request comes after write to the same file
 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li


HDFS write is asynchronous and data may not be available to read immediately 
after a write.
One of the main reasons is that DFSClient doesn't flush data to the DN until 
its local buffer is full.

To work around this problem, when a read comes after a write to the same file, 
the NFS gateway should sync the data so the read request can get the latest 
content. The drawback is that frequent hsync() calls can slow down data writes.
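
A minimal sketch of the idea, assuming the gateway tracks the open 
FSDataOutputStream per file (the class and method names are invented for 
illustration):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;

class SyncBeforeRead {
  // Called before serving a READ against a file with buffered writes:
  // hsync() pushes the client's buffered data through the pipeline so
  // the reader sees the latest content, at the cost of write latency.
  static void syncBeforeRead(FSDataOutputStream out) throws IOException {
    out.hsync();
  }
}
{code}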




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache fails on trunk

2013-11-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5562:
---

Attachment: HDFS-5562.002.patch

These tests have been fixed to not actually use mlock, to avoid a dependency on 
libhadoop.so.  We don't want to skip them.  Here is a patch that allows the 
tests to run when {{libhadoop.so}} is not present.

 TestCacheDirectives and TestFsDatasetCache fails on trunk
 -

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.002.patch, HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so

2013-11-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5562:
---

Summary: TestCacheDirectives and TestFsDatasetCache should not depend on 
libhadoop.so  (was: TestCacheDirectives and TestFsDatasetCache fails on trunk)

 TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so
 

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.002.patch, HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831796#comment-13831796
 ] 

Jing Zhao commented on HDFS-5538:
-

The patch looks pretty good to me. Some comments:
# nit: In EditLogInputStream#fromUrl, please update the javadoc (add description
  about the new parameter) and fix the indentation.
# Please post some testing results for HA setup (to cover the changes in
  EditLogFileInputStream and QuorumJournalManager) when https is enabled.
# It will be better to update the jira description to provide details about how
  the patch eases the handling of HTTPS connections (e.g., loading the SSL
  factory setup in the beginning, and getting rid of its dependency on the
  global configuration HttpConfig when determining whether we need to set up
  an ssl connection).

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations.
 This is less than ideal for HTTPS because whenever the code needs to make a 
 HTTPS connection, the code is forced to go through the set up.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5564) Refactor tests in TestCacheDirectives

2013-11-25 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5564:
-

 Summary: Refactor tests in TestCacheDirectives
 Key: HDFS-5564
 URL: https://issues.apache.org/jira/browse/HDFS-5564
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang


Some of the tests in TestCacheDirectives start their own MiniDFSCluster to get 
a new config, even though we already start a cluster in the @Before function. 
This contributes to longer test runs and code duplication.
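
A minimal sketch of the shared-cluster pattern the description refers to (the 
class name is invented for illustration):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;

public class SharedClusterTestBase {
  private MiniDFSCluster cluster;

  @Before
  public void setUp() throws Exception {
    // One cluster per test method via @Before/@After; individual tests
    // should reuse it instead of constructing their own MiniDFSCluster.
    Configuration conf = new HdfsConfiguration();
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
  }

  @After
  public void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}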



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5556:
---

Attachment: HDFS-5556.002.patch

* I added some tests for the new DN stats

The cache pool stats aren't hooked up yet (I am planning on doing that in a 
follow-on).  This is just setting up the types and protobuf changes.  Let's 
wait on CacheAdmin support until we get that straightened out.



 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831941#comment-13831941
 ] 

Hadoop QA commented on HDFS-5562:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615645/HDFS-5562.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5560//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5560//console

This message is automatically generated.

 TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so
 

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Binglin Chang
 Attachments: HDFS-5562.002.patch, HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831956#comment-13831956
 ] 

Stephen Bovy commented on HDFS-5541:


Thanks very much for tolerating my inept and sometimes ill-informed questions.

bq. However, there are no C compilers still in use that don't recognize '//' 
comments.  So let's not change things that don't need to be changed.

We support a wide range of platforms (which is why compatibility is a hot 
issue for me).

Some vendors stubbornly adhere strictly to the letter of the law and tightly 
follow standards.

Here is a compiler that does not support (//) (the IBM AIX compiler):

[21:31:27] ====== AIX 32-bit build ======
[21:31:27] clearmake -f makefile.twb -J 1 CLEARCASE_BLD_HOST_TYPE=otbe.aix-power TARGET_ARCH=aix-power.32 pdclibhdfs
[21:31:37] Rebuilding pdclibhdfs_all on host esaix800.td.teradata.com
[21:31:41] Finished pdclibhdfs_all on host esaix800.td.teradata.com
[21:31:41] cd pdclibhdfs/src/ && clearmake -J 1 -f makefile.twb pdclibhdfs_all
[21:31:41] clearmake[1]: Entering directory `/vob/paralx/pdclibhdfs/src'
[21:31:41] Rebuilding ../../pdclibhdfs/aix-power.32/jni_helper.o on host esaix800.td.teradata.com
[21:31:41] Finished ../../pdclibhdfs/aix-power.32/jni_helper.o on host esaix800.td.teradata.com
[21:31:41] LIBPATH=../../../aix-power/usr/vac/lib xlc -qmaxmem=8192 -q32 -qnostdinc -F../../../aix-power/etc/otbe_vac.cfg -w -qhalt=e -c -O2 -qlanglvl=extended -I. -I../inc -I../../../java/aix-power.32/include -I../../../java/aix-power/include -I../../../aix-power/usr/include -DBUILDPRODUCTNAME="Teradata Parallel Transporter" -DBUILDPROJECT="Teradata PT Hdfs Library" -DBUILDVERSION=15d.00.00.00T.D2D -DAIX -DTA_aix_power_32=1 -o../../pdclibhdfs/aix-power.32/jni_helper.o jni_helper.c
[21:31:41] jni_helper.c, line 427.1: 1506-046 (S) Syntax error.
[21:31:41] *** Error code 1
[21:31:41] clearmake: Error: Build script failed for ../../pdclibhdfs/aix-power.32/jni_helper.o
[21:31:41] Aborting...
[21:31:41] clearmake[1]: Leaving directory `/vob/paralx/pdclibhdfs/src'
[21:31:41] *** Error code 1
[21:31:41] clearmake: Error: Build script failed for pdclibhdfs_all

On the thread issue (I did not fully understand the code path):

I thought incorrectly that an application that did not use threads would 
inadvertently be forced to invoke thread-attach.
But now I see that this is not the case, thanks.

On the windows front (there is no support for the posix hash functions) I had 
to scramble to find a replacement.

I chose uthash (for good or bad); it was simple, easy, and had a flexible 
license.

I do have a question:

Are the global class object pointers shared across thread JNI environments?

I would have thought that everything created within the context of one 
jni-env would be local to that (env)??



 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor

 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms, and many different compilers.
 1) The code should be cross platform (no Linux extras)
 2) The code should compile on standard c89 compilers, the
   {least common denominator rule applies here} !!
 C code with a .c extension should follow the rules of the C standard.
 All variables must be declared at the beginning of scope, and no (//) 
 comments allowed.
  I just spent a week white-washing the code back to normal C standards so 
  that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used why do a thread attach (when threads are not used 
 all the thread attach nonsense is a waste of time and a performance killer)
 2) The JVM init code should not be embedded within the context of every 
 function call. The JVM init code should be in a stand-alone LIBINIT 
 function that is only invoked once. The JVM * and the JNI * should be 
 global variables for use when no threads are utilized.
 3) When threads are utilized the attach function can use the GLOBAL jvm * 
 created by the LIBINIT {WHICH IS INVOKED ONLY ONCE} and thus safely 
 outside the scope of any LOOP that is using the functions.
 4) Hash Table and Locking. Why? When threads are used the hash table 
 locking is going to hurt performance. Why not use thread local storage for 
 the hash table; that way no locking is required either with or without 
 threads.
 5) FINALLY Windows Compatibility. Do not use posix features if they cannot 
 easily be replaced on other platforms!!

[jira] [Updated] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Attachment: hdfs-5430-3.patch

New patch attached. I reworked it so CDI has both an absolute and relative 
time. The client can set either, but the NN will translate relative time to 
absolute time for the CacheDirective and for serialization to the edit log and 
fsimage. Also added tests for time unit conversion.
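
A minimal sketch of the relative-to-absolute translation described above (the 
class and method names are invented for illustration):
{code}
// A client-supplied TTL (relative, in ms) becomes an absolute wall-clock
// expiry that the NameNode can persist to the edit log and fsimage.
class ExpirySketch {
  static long toAbsoluteExpiryMs(long relativeTtlMs) {
    return System.currentTimeMillis() + relativeTtlMs;
  }
}
{code}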

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5538:
-

Attachment: HDFS-5538.003.patch

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations.
 This is less than ideal for HTTPS because whenever the code needs to make a 
 HTTPS connection, the code is forced to go through the set up.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831967#comment-13831967
 ] 

Haohui Mai commented on HDFS-5538:
--

I've tested the patch by running a secondary namenode, and it works for both 
http and https.

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations.
 This is less than ideal for HTTPS because whenever the code needs to make a 
 HTTPS connection, the code is forced to go through the set up.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5548:
-

Attachment: HDFS-5548.002.patch

 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch


 Portmap uses a HashMap to store the port mapping. It synchronizes the access 
 of the hash map by locking itself. It can be simplified by using a 
 ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Stephen Bovy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Bovy updated HDFS-5541:
---

Attachment: pdclibhdfs.zip

Windows Porting Project (and other *nix compatibility)

Testing with Hortonworks Windows Dist based on hadoop 1.1.3 and with jdk 
1.6.0_31

These changes are based on the latest GA 2.0.xx release.

Unix/Windows Compatibility Changes 
And Some Performance Enhancements 

Added uthash for the windows hash table:

#ifdef WIN32
#include "uthash.h"
#endif

Added many #def  for windows vs unix  

Added  jvm-mutex macro 

#ifdef WIN32
#define LOCK_JVM_MUTEX() \
dwWaitResult = WaitForSingleObject(hdfs_JvmMutex, INFINITE)
#else
#define LOCK_JVM_MUTEX() \
pthread_mutex_lock(&hdfs_JvmMutex)
#endif

#ifdef WIN32
#define UNLOCK_JVM_MUTEX() \
ReleaseMutex(hdfs_JvmMutex)
#else
#define UNLOCK_JVM_MUTEX() \
pthread_mutex_unlock(&hdfs_JvmMutex)
#endif

Dynamically load the jvm (more flexible) (and easier to build)

Added a simplistic starting point for a lib init function.
When this function is used, locking in getJNIEnv can be avoided:

int hdfsLibInit ( void * parms )
{

JNIEnv* env = getJNIEnv();

if (!env) return 1;

hdfs_InitLib = 1;

return 0;

}


Converted thread local storage init to use (pthread_once) to eliminate some 
locking issues (see below):

JNIEnv* getJNIEnv(void)
{
JNIEnv *env = NULL;
HDFSTLS *tls = NULL;
int ret = 0;
jint rv = 0;

#ifdef WIN32
DWORD dwWaitResult;
tls = TlsGetValue(hdfs_dwTlsIndex1);
if (tls) return tls->env;
#endif

#ifdef HAVE_BETTER_TLS
static __thread HDFSTLS *quickTls = NULL;
if (quickTls) return quickTls->env;
#endif

#ifndef WIN32

pthread_once(&hdfs_threadInit_Once, Make_Thread_Key);

if (!hdfs_gTlsKeyInitialized)
return NULL;

tls = pthread_getspecific(hdfs_gTlsKey);
if (tls) {
return tls->env;
}

#endif

if (!hdfs_InitLib) {
LOCK_JVM_MUTEX();
env = getGlobalJNIEnv();
UNLOCK_JVM_MUTEX();
} else {
rv = (*hdfs_JVM)->AttachCurrentThread(hdfs_JVM, (void**) &env, 0);
if (rv != 0) {
fprintf(stderr, "Call to AttachCurrentThread "
"failed with error: %d\n", rv);
return NULL;
}
}

if (!env) {
fprintf(stderr, "getJNIEnv: getGlobalJNIEnv failed\n");
return NULL;
}

tls = calloc ( 1, sizeof(HDFSTLS) );
if (!tls) {
fprintf(stderr, "getJNIEnv: OOM allocating %zd bytes\n",
sizeof(HDFSTLS) );
return NULL;
}

tls->env = env;

#ifdef WIN32
printf ( "dll: save environment\n" );
if (!TlsSetValue(hdfs_dwTlsIndex1, tls))
 return NULL;
return env;
#endif

#ifdef HAVE_BETTER_TLS
quickTls = tls;
return env;
#endif

#ifndef WIN32
ret = pthread_setspecific(hdfs_gTlsKey, tls);
if (ret) {
fprintf(stderr, "getJNIEnv: pthread_setspecific failed with "
"error code %d\n", ret);
hdfsThreadDestructor(tls);
return NULL;
}
#endif

return env;

}

Also used ( pthread_once ) to init the hash table and simplify hash table locking:

static int insertEntryIntoTable ( const char *key, void *data )
{
    ENTRY e, *ep = NULL;
    if (key == NULL || data == NULL) {
        return 0;
    }

    /* the init function runs exactly once, so no explicit lock is needed */
    pthread_once(&hdfs_hashTable_Once, hashTableInit);
    if (!hdfs_hashTableInited) {
        return -1;
    }



Note:  Some recent enhancements are not backwards compatible.

/* This is not backwards compatible */
/*
jthr = invokeMethod ( env, NULL, STATIC, NULL,
                      "org/apache/hadoop/fs/FileSystem",
                      "loadFileSystems", "()V" );
if (jthr) {
    printExceptionAndFree ( env, jthr, PRINT_EXC_ALL,
                            "loadFileSystems" );
    return NULL;
} */



The newInstance functions are not backwards compatible

and therefore must be avoided.

The new readDirect function produces a method error on the Windows JDK,
64-bit 1.6.0_31:

java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)


could not find method read from class org/apache/hadoop/fs/FSDataInputStream
with signature (Ljava/nio/ByteBuffer;)I
readDirect: FSDataInputStream#read error:
Begin Method Invokation:org/apache/commons/lang/exception/ExceptionUtils ## getStackTrace
End Method Invokation
Method success
java.lang.NoSuchMethodError: read
hdfsOpenFile(/tmp/testfile.txt): WARN: Unexpected error 255 when testing for direct read compatibility



And finally,

dag nab it, I cannot figure this one out: the append does not work.

Begin Method Invokation:org/apache/hadoop/fs/FileSystem ## append

org.apache.hadoop.ipc.RemoteException: 

[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13831992#comment-13831992
 ] 

Stephen Bovy commented on HDFS-5541:


I have attached my project, with some comments.



 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor
 Attachments: pdclibhdfs.zip


 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms and many different compilers.
 1) The code should be cross platform (no Linux extras).
 2) The code should compile on standard C89 compilers; the
   {least common denominator rule applies here} !!
 C code with a .c extension should follow the rules of the C standard: 
 all variables must be declared at the beginning of scope, and no (//) 
 comments allowed.
  I just spent a week white-washing the code back to normal C standards so 
  that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used, why do a thread attach? (When threads are not 
 used, all the thread-attach nonsense is a waste of time and a performance 
 killer.)
 2) The JVM init code should not be embedded within the context of every 
 function call. The JVM init code should be in a stand-alone LIBINIT 
 function that is only invoked once. The JVM * and the JNI * should be 
 global variables for use when no threads are utilized.
 3) When threads are utilized, the attach function can use the GLOBAL jvm * 
 created by the LIBINIT { WHICH IS INVOKED ONLY ONCE } and thus safely 
 outside the scope of any LOOP that is using the functions.
 4) Hash table and locking: why? 
 When threads are used, the hash table locking is going to hurt performance. 
 Why not use thread-local storage for the hash table? That way no locking is 
 required either with or without threads (see the sketch after this list).
 5) FINALLY, Windows compatibility: 
 do not use POSIX features if they cannot easily be replaced on other 
 platforms !!
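
A minimal Java analogue of the thread-local idea in point 4 (the attached C code does it with pthread keys and TlsGetValue; the names here are illustrative only):

{code}
import java.util.HashMap;
import java.util.Map;

public class TlsTableSketch {
  // Each thread gets its own table, so no lock is needed with or
  // without threads -- the point being made in item 4 above.
  private static final ThreadLocal<Map<String, Object>> TABLE =
      ThreadLocal.withInitial(HashMap::new);

  static void put(String key, Object value) {
    TABLE.get().put(key, value);   // touches only this thread's map
  }

  static Object get(String key) {
    return TABLE.get().get(key);   // no synchronization required
  }
}
{code}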



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832005#comment-13832005
 ] 

Andrew Wang commented on HDFS-5556:
---

I missed this before, but are we sure we want to add new stats to 
NameNodeMXBean rather than FSNamesystemMBean? It's a public stable interface, 
so modifications here are not backwards compatible.

I'd also like to see a test a la TestNameNodeMetrics verifying that the new NN 
metrics show up. It should be straightforward to insert a few more checks in an 
existing caching test or check method. We can wait on CachePoolStats tests 
until they're hooked up.

 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5538:


Description: 
The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
not pick up any hadoop-specific, SSL-related configuration. Its customers have 
to set up the ConnectionConfigurator explicitly in order to pick up these 
configurations. This is less than ideal for HTTPS because whenever the code 
needs to make an HTTPS connection, the code is forced to go through the setup.

This jira refactors URLConnectionFactory to ease the handling of HTTPS 
connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). In 
particular, instead of loading the SSL configurator statically in SecurityUtil 
(based on a global SSL configuration) and deciding whether to set up SSL for a 
given connection based on whether the SSL configurator is null, we now load 
the SSL configurator in URLConnectionFactory and decide whether to use the 
configurator to set up an SSL connection based on whether the given 
URL/connection is https.

  was:
The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
not pick up any hadoop-specific, SSL-related configuration. Its customers have 
to set up the ConnectionConfigurator explicitly in order to pick up these 
configurations.

This is less than ideal for HTTPS because whenever the code needs to make an 
HTTPS connection, the code is forced to go through the setup.

This jira refactors URLConnectionFactory to ease the handling of HTTPS 
connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).


 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations. This is less than ideal for HTTPS because whenever the 
 code needs to make an HTTPS connection, the code is forced to go through the 
 setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global SSL configuration) and deciding whether to 
 set up SSL for a given connection based on whether the SSL configurator is 
 null, we now load the SSL configurator in URLConnectionFactory and decide 
 whether to use the configurator to set up an SSL connection based on whether 
 the given URL/connection is https.
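
A minimal sketch of the per-connection decision described above, using only JDK types (hypothetical class name; the real factory wires in the Hadoop SSL configuration):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSocketFactory;

public class ConnectionFactorySketch {
  private final SSLSocketFactory sslSocketFactory;  // loaded once from config
  private final HostnameVerifier hostnameVerifier;

  public ConnectionFactorySketch(SSLSocketFactory f, HostnameVerifier v) {
    this.sslSocketFactory = f;
    this.hostnameVerifier = v;
  }

  public HttpURLConnection openConnection(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Apply the SSL configuration only when the connection is actually
    // https, instead of keying off whether a global configurator is null.
    if (conn instanceof HttpsURLConnection) {
      HttpsURLConnection https = (HttpsURLConnection) conn;
      https.setSSLSocketFactory(sslSocketFactory);
      https.setHostnameVerifier(hostnameVerifier);
    }
    return conn;
  }
}
{code}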



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-25 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: (was: HDFS-5549.patch)

 Support for implementing custom FsDatasetSpi from outside the project
 -

 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: Ignacio Corderi

 Visibility for multiple methods and a few classes got changed to public to 
 allow FsDatasetSpi<T> and all the related classes that need subtyping to be 
 fully implemented from outside the HDFS project.
 Block transfers got abstracted to a factory given that the behavior will be 
 changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
 block transfer functionality got moved to LegacyBlockTransferer, no new 
 configuration is needed to use this class and have the same behavior that is 
 currently present.
 DataNodes have an additional configuration key 
 DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
 transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-25 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: HDFS-5549.patch

 Support for implementing custom FsDatasetSpi from outside the project
 -

 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: Ignacio Corderi
 Attachments: HDFS-5549.patch


 Visibility for multiple methods and a few classes got changed to public to 
 allow FsDatasetSpi<T> and all the related classes that need subtyping to be 
 fully implemented from outside the HDFS project.
 Block transfers got abstracted to a factory given that the behavior will be 
 changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
 block transfer functionality got moved to LegacyBlockTransferer, no new 
 configuration is needed to use this class and have the same behavior that is 
 currently present.
 DataNodes have an additional configuration key 
 DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
 transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832027#comment-13832027
 ] 

Haohui Mai commented on HDFS-5538:
--

We'll defer the system tests of HA / QJM to HDFS-5545 / HDFS-5536.

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations. This is less than ideal for HTTPS because whenever the 
 code needs to make an HTTPS connection, the code is forced to go through the 
 setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global SSL configuration) and deciding whether to 
 set up SSL for a given connection based on whether the SSL configurator is 
 null, we now load the SSL configurator in URLConnectionFactory and decide 
 whether to use the configurator to set up an SSL connection based on whether 
 the given URL/connection is https.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5565:
--

Labels: newbie  (was: )

 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Priority: Minor
  Labels: newbie

 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin, you have to specify `hdfs cacheadmin -help -addDirective` 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, which also error when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832055#comment-13832055
 ] 

Colin Patrick McCabe commented on HDFS-5556:


The cache is logically a per-cluster thing, not a per-namesystem thing, so I 
think it does belong in NameNodeMXBean, just like getPercentUsed, etc.

I will check up on adding this to TestNameNodeMetrics.

 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832059#comment-13832059
 ] 

Jing Zhao commented on HDFS-5538:
-

Sounds good to me. +1 pending Jenkins

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations. This is less than ideal for HTTPS because whenever the 
 code needs to make an HTTPS connection, the code is forced to go through the 
 setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global SSL configuration) and deciding whether to 
 set up SSL for a given connection based on whether the SSL configurator is 
 null, we now load the SSL configurator in URLConnectionFactory and decide 
 whether to use the configurator to set up an SSL connection based on whether 
 the given URL/connection is https.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5545:
-

Attachment: HDFS-5545.001.patch

 Allow specifying endpoints for listeners in HttpServer
 --

 Key: HDFS-5545
 URL: https://issues.apache.org/jira/browse/HDFS-5545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch


 Currently HttpServer listens to an HTTP port and provides a method to allow 
 the users to add an SSL listener after the server starts. This complicates 
 the logic if the client needs to set up HTTP / HTTPS servers.
 This jira proposes to replace these two methods with the concept of listener 
 endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
 the HttpServer should listen to. This concept simplifies the task of managing 
 the HTTP server from HDFS / YARN.
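
A minimal sketch of the endpoint concept (illustrative names only): each listener is described by a single URI, and the scheme alone decides between a plain and an SSL listener.

{code}
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class EndpointSketch {
  public static void main(String[] args) {
    List<URI> endpoints = Arrays.asList(
        URI.create("http://0.0.0.0:50070"),    // plain HTTP listener
        URI.create("https://0.0.0.0:50470"));  // SSL listener
    for (URI e : endpoints) {
      // scheme selects plain vs. SSL; host and port select the socket
      System.out.printf("scheme=%s host=%s port=%d%n",
          e.getScheme(), e.getHost(), e.getPort());
    }
  }
}
{code}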



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832079#comment-13832079
 ] 

Hadoop QA commented on HDFS-5556:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615673/HDFS-5556.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5561//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5561//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5561//console

This message is automatically generated.

 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5565:
--

Attachment: hdfs-5565-1.patch

Small patch attached; twiddled TestCacheAdminCLI's input to use the new style.

 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin, you have to specify `hdfs cacheadmin -help -addDirective` 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, which also error when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5565:
--

Assignee: Andrew Wang
  Labels: caching newbie  (was: newbie)
  Status: Patch Available  (was: Open)

 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: newbie, caching
 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin, you have to specify `hdfs cacheadmin -help -addDirective` 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, which also error when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832103#comment-13832103
 ] 

Colin Patrick McCabe commented on HDFS-5565:


+1 pending jenkins

 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin, you have to specify `hdfs cacheadmin -help -addDirective` 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, which also error when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832108#comment-13832108
 ] 

Jing Zhao commented on HDFS-5286:
-

The patch looks great. The only concern is that maybe we do not need to clone 
the feature list?

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step to add DirectoryWithQuotaFeature for replacing 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Attachment: h5286_20131125.patch

Jing, thanks.  You are right that we don't need to clone the feature list.

h5286_20131125.patch

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step to add DirectoryWithQuotaFeature for replacing 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5548:
-

Component/s: nfs

 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch


 Portmap uses a HashMap to store the port mapping. It synchronizes access to 
 the hash map by locking itself. It can be simplified by using a 
 ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Attachment: h5286_20131125.patch

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step to add DirectoryWithQuotaFeature for replacing 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Attachment: (was: h5286_20131125.patch)

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step to add DirectoryWithQuotaFeature for replacing 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832132#comment-13832132
 ] 

Hadoop QA commented on HDFS-5430:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615681/hdfs-5430-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5563//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5563//console

This message is automatically generated.

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832134#comment-13832134
 ] 

Hadoop QA commented on HDFS-5538:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615686/HDFS-5538.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5562//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5562//console

This message is automatically generated.

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations. This is less than ideal for HTTPS because whenever the 
 code needs to make an HTTPS connection, the code is forced to go through the 
 setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global SSL configuration) and deciding whether to 
 set up SSL for a given connection based on whether the SSL configurator is 
 null, we now load the SSL configurator in URLConnectionFactory and decide 
 whether to use the configurator to set up an SSL connection based on whether 
 the given URL/connection is https.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832139#comment-13832139
 ] 

Hadoop QA commented on HDFS-5548:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615690/HDFS-5548.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5565//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5565//console

This message is automatically generated.

 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch


 Portmap uses a HashMap to store the port mapping. It synchronizes access to 
 the hash map by locking itself. It can be simplified by using a 
 ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832141#comment-13832141
 ] 

Colin Patrick McCabe commented on HDFS-5430:


I think it would be nicer to have something like this:
{code}
public class CacheDirectiveInfo {
  public static class Expiration {
public static Expiration newRelative(long ms) {
  return new Expiration(ms, true);
}

public static Expiration newAbsolute(long ms) {
  return new Expiration(ms, false);
}

public static Expiration newAbsolute(Date date) {
  return new Expiration(date.getTime(), false);
}

private Expiration(long ms, boolean isRelative) {
  this.ms = ms;
  this.isRelative = isRelative;
}

public long toAbsolute() { // using local clock
   ...
}

private final long ms;
private final boolean isRelative;
  }
 ...
}
{code}

That way, we wouldn't have to worry about the awkward situation of having two 
fields in {{CacheDirectiveInfo}} which can't both be used at the same time.  
What do you think?

We can continue making this just a long in CacheDirective, since at that point 
it's always absolute.  It would be nice to have a constant for NEVER, rather 
than hard-coding -1 (the Sun style guide discourages magic numbers).

I also wonder if we can add a {{boolean expired}} or something to 
{{CacheDirectiveStats}}, since otherwise the client can't easily figure out if 
the directive has expired.  After all, the clock on the NN might be different.

Looks good aside from that.
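
For illustration, callers would then express TTLs against that single field (a usage sketch against the Expiration class above, not committed API):

{code}
// One hour from now, interpreted relative to the local clock:
CacheDirectiveInfo.Expiration ttl =
    CacheDirectiveInfo.Expiration.newRelative(60 * 60 * 1000L);

// Or pinned to an absolute wall-clock instant:
CacheDirectiveInfo.Expiration at =
    CacheDirectiveInfo.Expiration.newAbsolute(new java.util.Date());
{code}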

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5538:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk. We can merge it to branch-2 after the remaining 
work is done and well tested.

 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0

 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
 not pick up any hadoop-specific, SSL-related configuration. Its customers 
 have to set up the ConnectionConfigurator explicitly in order to pick up 
 these configurations. This is less than ideal for HTTPS because whenever the 
 code needs to make an HTTPS connection, the code is forced to go through the 
 setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global SSL configuration) and deciding whether to 
 set up SSL for a given connection based on whether the SSL configurator is 
 null, we now load the SSL configurator in URLConnectionFactory and decide 
 whether to use the configurator to set up an SSL connection based on whether 
 the given URL/connection is https.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832167#comment-13832167
 ] 

Jing Zhao commented on HDFS-5286:
-

Thanks for the update Nicholas! +1 pending Jenkins.

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step to add DirectoryWithQuotaFeature for replacing 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

Attachment: HDFS-5563.001.patch

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch


 HDFS write is asynchronous, and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.
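
A minimal sketch of the read path described above (hypothetical names, not the actual gateway code): before serving a READ, commit any buffered writes for that file so the reader observes the latest content.

{code}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReadAfterWriteSketch {
  // Stand-in for the gateway's per-file write context.
  interface OpenFileCtx {
    boolean hasPendingWrites();
    void hsyncBufferedData() throws IOException;  // commit buffered writes
  }

  private final ConcurrentMap<String, OpenFileCtx> openFiles =
      new ConcurrentHashMap<>();

  byte[] read(String path, long offset, int count) throws IOException {
    OpenFileCtx ctx = openFiles.get(path);
    // If this file has uncommitted writes, sync them first so the read
    // sees the latest data; the frequent hsync() is the cost noted above.
    if (ctx != null && ctx.hasPendingWrites()) {
      ctx.hsyncBufferedData();
    }
    return doRead(path, offset, count);
  }

  private byte[] doRead(String path, long offset, int count) {
    return new byte[0];  // placeholder for the actual HDFS read
  }
}
{code}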



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

Status: Patch Available  (was: Open)

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch


 HDFS write is asynchronous, and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832172#comment-13832172
 ] 

Hadoop QA commented on HDFS-5549:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615703/HDFS-5549.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5564//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5564//console

This message is automatically generated.

 Support for implementing custom FsDatasetSpi from outside the project
 -

 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: Ignacio Corderi
 Attachments: HDFS-5549.patch


 Visibility for multiple methods and a few classes got changed to public to 
 allow FsDatasetSpi<T> and all the related classes that need subtyping to be 
 fully implemented from outside the HDFS project.
 Block transfers got abstracted to a factory given that the behavior will be 
 changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
 block transfer functionality got moved to LegacyBlockTransferer, no new 
 configuration is needed to use this class and have the same behavior that is 
 currently present.
 DataNodes have an additional configuration key 
 DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
 transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5561) New Web UI cannot display correctly

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5561:
-

Attachment: HDFS-5561.000.patch

 New Web UI cannot display correctly
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 The new web UI cannot display correctly; I attached a screenshot.
 I've tried Chrome 31.0.1650 and Firefox 25.0.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5561) New Web UI cannot display correctly

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5561:
-

Status: Patch Available  (was: Open)

 New Web UI cannot display correctly
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.2.0, 3.0.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 The new web UI cannot display correctly; I attached a screenshot.
 I've tried Chrome 31.0.1650 and Firefox 25.0.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832179#comment-13832179
 ] 

Stephen Bovy commented on HDFS-5541:


Here are the traces from the OPS test (almost 100%).

I am getting a strange error on writing a file in append mode.

And is getStatus a new method???

Some new methods are missing (and thus cannot be used for backwards 
compatibility).

Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
  balancer             run a cluster balancing utility
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
  oiv                  apply the offline fsimage viewer to an fsimage
  fetchdt              fetch a delegation token from the NameNode
  jobtracker           run the MapReduce job Tracker node
  pipes                run a Pipes job
  tasktracker          run a MapReduce task Tracker node
  historyserver        run job history servers as a standalone daemon
  job                  manipulate MapReduce jobs
  queue                get information regarding JobQueues
  version              print the version
  jar <jar>            run a jar file

  distcp <srcurl> <desturl>  copy file or directories recursively
  distcp2 <srcurl> <desturl> DistCp version 2
  archive -archiveName NAME <src>* <dest>  create a hadoop archive
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
Most commands print help when invoked w/o parameters.

C:\hdp\hadoop\hadoop-1.2.0.1.3.0.0-0380>z:

Z:\>cd d\dclibhdfs

Z:\D\dclibhdfs>dir
 Volume in drive Z is Shared Folders
 Volume Serial Number is -0064

 Directory of Z:\D\dclibhdfs

11/25/2013  05:33 PM    <DIR>          .
11/25/2013  04:29 PM            56,832 dclibhdfs.dll
11/25/2013  05:33 PM            17,408 TstOpsHdfs.exe
11/22/2013  09:28 PM             7,680 TstReadHdfs.exe
11/22/2013  09:29 PM             7,680 TstWriteHdfs.exe
09/09/2013  03:14 PM         4,961,800 vc2008_SP1_redist_x64.exe
09/09/2013  03:16 PM         1,821,192 vc2008_SP1_redist_x86.exe
09/09/2013  03:02 PM         7,185,000 vc2012_Update3_redist_x64.exe
09/09/2013  03:02 PM         6,552,288 vc2012_Update3_redist_x86.exe
               8 File(s)     20,613,976 bytes
               1 Dir(s)  272,641,044,480 bytes free

Z:\D\dclibhdfs>TstOpsHdfs.exe
dll attached
dll: tls1=1
Get Global JNI
load jvm
dll: get proc addresses
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: jvm created
dll: thread attach
dll: save environment
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
Opened /tmp/testfile.txt for writing successfully...
dll: thread attach
Wrote 14 bytes
Current position: 14
Flushed /tmp/testfile.txt successfully!
dll: thread attach
dll: detach thread
dll: detach thread
could not find method read from class org/apache/hadoop/fs/FSDataInputStream
with signature (Ljava/nio/ByteBuffer;)I
readDirect: FSDataInputStream#read error:
java.lang.NoSuchMethodError: read
hdfsOpenFile(/tmp/testfile.txt): WARN: Unexpected error 255 when testing for direct read compatibility
hdfsAvailable: 14
Current position: 1
Direct read support not detected for HDFS filesystem
Read following 13 bytes:
ello, World!
Read following 14 bytes:
Hello, World!
Test Local File System C:\tmp\testfile.txt
13/11/25 17:40:41 WARN util.NativeCodeLoader: Unable to load native-hadoop libra
ry for your platform... using builtin-java classes where applicable
dll: thread attach
dll: detach thread
could not find method read from class org/apache/hadoop/fs/FSDataInputStream
with signature (Ljava/nio/ByteBuffer;)I
readDirect: FSDataInputStream#read error:
java.lang.NoSuchMethodError: read
hdfsOpenFile(C:\tmp\testfile.txt): WARN: Unexpected error 255 when testing for direct read compatibility
dll: thread attach
dll: detach thread
hdfsCopy(remote-local): Success!
dll: thread attach
dll: thread attach
dll: detach thread
dll: detach thread
hdfsCopy(remote-remote): Success!
dll: thread attach
dll: detach thread
hdfsMove(local-local): Success!
dll: thread attach
dll: detach thread
hdfsMove(remote-local): Success!
hdfsRename: Success!
dll: thread attach
dll: thread attach
dll: detach thread
dll: detach 

[jira] [Updated] (HDFS-5561) getNameJournalStatus() in JMX should not return HTML at all

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5561:
-

Summary: getNameJournalStatus() in JMX should not return HTML at all  (was: 
New Web UI cannot display correctly)

 getNameJournalStatus() in JMX should not return HTML at all
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 The new web UI cannot display correctly; I attached a screenshot.
 I've tried Chrome 31.0.1650 and Firefox 25.0.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5561) getNameJournalStatus() in JMX should not return HTML at all

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5561:
-

Description: 
Currently FSNameSystem#getNameJournalStatus() returns the status of the quorum 
stream as an HTML string. This should not happen, since getNameJournalStatus() 
is a JMX call. It will confuse downstream clients (e.g., the web UI) and lead 
to incorrect results.

This jira proposes to change the information to plain text.

  was:
The new web UI cannot display correctly; I attached a screenshot.

I've tried Chrome 31.0.1650 and Firefox 25.0.1.


 getNameJournalStatus() in JMX should not return HTML at all
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream as an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5561:
-

Summary: FSNameSystem#getNameJournalStatus() in JMX should return plain 
text instead of HTML  (was: getNameJournalStatus() in JMX should not return 
HTML at all)

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream as an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so

2013-11-25 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang reassigned HDFS-5562:
---

Assignee: Colin Patrick McCabe  (was: Binglin Chang)

 TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so
 

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5562.002.patch, HDFS-5562.v1.patch


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/
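
One way to remove the dependency (a sketch only, under the assumption that the failures come from the missing native library; the attached patches may take a different approach) is to skip the affected tests when libhadoop.so is not loaded:

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;

public class NativeLibGuardSketch {
  @Before
  public void checkNativeLib() {
    // Skip (rather than fail) every test in the class when libhadoop.so is
    // not on java.library.path; JUnit reports such tests as ignored.
    assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
  }
}
{code}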



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832207#comment-13832207
 ] 

Brandon Li commented on HDFS-5548:
--

+1

 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch


 Portmap uses a HashMap to store the port mapping. It synchronizes the access 
 of the hash map by locking itself. It can be simplified by using a 
 ConcurrentHashMap.
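
A minimal sketch of the simplification (illustrative names, not the real portmap code): the external synchronized block disappears because ConcurrentHashMap synchronizes internally.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PortmapSketch {
  // Before: a plain HashMap guarded by synchronized(this) around every access.
  // After: ConcurrentHashMap provides its own fine-grained synchronization.
  private final ConcurrentMap<Integer, String> mappings =
      new ConcurrentHashMap<Integer, String>();

  void register(int port, String program) {
    mappings.put(port, program); // thread-safe without an external lock
  }

  String lookup(int port) {
    return mappings.get(port);   // reads need no lock either
  }
}
{code}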



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832212#comment-13832212
 ] 

Jing Zhao commented on HDFS-5563:
-

The patch looks good overall. Some minor comments:
# Instead of using fromRead as a parameter, how about using a parameter with 
the opposite meaning, such as toCache? Also, please add javadoc for this new 
parameter.
# It's better to use assertEquals(expected, actual) instead of 
assertTrue(expected == actual) in the unit test (see the sketch below).
# A possible optimization here may be to directly return the locally buffered 
data for the read request without calling hsync. This may be addressed in 
future jiras.
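
For illustration only (JUnit 4, hypothetical variable names), the difference in failure reporting between the two assertion styles:

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertStyleExample {
  // Stand-in for whatever value the real test computes.
  private int computeCommitCount() {
    return 3;
  }

  @Test
  public void prefersAssertEquals() {
    int expected = 3;
    int actual = computeCommitCount();
    // On failure, assertEquals reports both values ("expected:<3> but was:<2>"),
    // whereas assertTrue(expected == actual) only reports that it was false.
    assertEquals(expected, actual);
  }
}
{code}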

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.
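
As a hedged illustration of the read-after-write pattern described above (generic FileSystem API with a made-up path; the gateway's actual bookkeeping is more involved):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadAfterWriteSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/nfs-demo"); // hypothetical path
    FSDataOutputStream out = fs.create(file);
    out.write("hello".getBytes("UTF-8"));
    // Without this call the bytes may still sit in the DFSClient's local
    // buffer, so a concurrent reader would not see them yet.
    out.hsync(); // persist buffered data; a read from here on sees it
    out.close();
  }
}
{code}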



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5496) Make replication queue initialization asynchronous

2013-11-25 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5496:


Status: Patch Available  (was: Open)

 Make replication queue initialization asynchronous
 --

 Key: HDFS-5496
 URL: https://issues.apache.org/jira/browse/HDFS-5496
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Kihwal Lee
 Attachments: HDFS-5496.patch


 Today, initialization of replication queues blocks safe mode exit and certain 
 HA state transitions. For a big name space, this can take hundreds of seconds 
 with the FSNamesystem write lock held. During this time, important requests 
 (e.g. initial block reports, heartbeats, etc.) are blocked.
 The effect of delaying the initialization would be not starting replication 
 right away, but I think the benefit outweighs the cost. If we make it 
 asynchronous, the work per iteration should be limited, so that the lock 
 duration is capped.
 If full/incremental block reports and any other requests that modify block 
 state properly perform replication checks while the blocks are scanned and 
 the queues populated in the background, every block will be processed (some 
 may be done twice). The replication monitor should run even before all blocks 
 are processed.
 This will allow the namenode to exit safe mode and start serving immediately 
 even with a big name space. It will also reduce the HA failover latency.
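
A rough sketch of the bounded-iteration idea (all method names below are hypothetical; the attached patch may be structured differently): scan blocks in fixed-size chunks, taking and releasing the write lock per chunk so block reports and heartbeats can interleave.

{code}
// Hypothetical sketch of chunked, background replication-queue initialization.
public class ReplQueueInitSketch {
  private static final int BLOCKS_PER_ITERATION = 10000; // caps lock hold time

  void initializeAsync(final FSNamesystemFacade ns) {
    new Thread(new Runnable() {
      public void run() {
        while (ns.hasMoreBlocksToScan()) {
          ns.writeLock();
          try {
            // Scan a bounded slice so the write-lock duration stays capped.
            ns.scanAndQueueBlocks(BLOCKS_PER_ITERATION);
          } finally {
            ns.writeUnlock();
          }
          // Lock released here: block reports and heartbeats proceed.
        }
      }
    }, "Replication Queue Initializer").start();
  }

  // Hypothetical facade standing in for the FSNamesystem methods used above.
  interface FSNamesystemFacade {
    boolean hasMoreBlocksToScan();
    void scanAndQueueBlocks(int maxBlocks);
    void writeLock();
    void writeUnlock();
  }
}
{code}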



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5496) Make replication queue initialization asynchronous

2013-11-25 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5496:


Attachment: HDFS-5496.patch

Attaching a patch for the proposal.
Please review and let me know what improvements are required.

 Make replication queue initialization asynchronous
 --

 Key: HDFS-5496
 URL: https://issues.apache.org/jira/browse/HDFS-5496
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Kihwal Lee
 Attachments: HDFS-5496.patch


 Today, initialization of replication queues blocks safe mode exit and certain 
 HA state transitions. For a big name space, this can take hundreds of seconds 
 with the FSNamesystem write lock held. During this time, important requests 
 (e.g. initial block reports, heartbeats, etc.) are blocked.
 The effect of delaying the initialization would be not starting replication 
 right away, but I think the benefit outweighs the cost. If we make it 
 asynchronous, the work per iteration should be limited, so that the lock 
 duration is capped.
 If full/incremental block reports and any other requests that modify block 
 state properly perform replication checks while the blocks are scanned and 
 the queues populated in the background, every block will be processed (some 
 may be done twice). The replication monitor should run even before all blocks 
 are processed.
 This will allow the namenode to exit safe mode and start serving immediately 
 even with a big name space. It will also reduce the HA failover latency.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5526) Datanode cannot roll back to previous layout version

2013-11-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832224#comment-13832224
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-5526:
--

Kihwal, I tested this with HDFS-2832 after the commit.  It worked well.  Thanks!

 Datanode cannot roll back to previous layout version
 

 Key: HDFS-5526
 URL: https://issues.apache.org/jira/browse/HDFS-5526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: HDFS-5526.patch, HDFS-5526.patch


 Current trunk layout version is -48.
 Hadoop v2.2.0 layout version is -47.
 If a cluster is upgraded from v2.2.0 (-47) to trunk (-48), the datanodes 
 cannot start with -rollback.  It will fail with IncorrectVersionException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832237#comment-13832237
 ] 

Fengdong Yu commented on HDFS-5561:
---

Thanks, I'll test the patch today, then leave comments here.

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream, which is an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-25 Thread Ignacio Corderi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832243#comment-13832243
 ] 

Ignacio Corderi commented on HDFS-5549:
---

Any idea what's going on with TestBalancerWithNodeGroup? It keeps timing out 
on me. Looking at the failed-test history, it seems to time out from time to 
time.


 Support for implementing custom FsDatasetSpi from outside the project
 -

 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: Ignacio Corderi
 Attachments: HDFS-5549.patch


 Visibility for multiple methods and a few classes got changed to public to 
 allow FsDatasetSpi<T> and all the related classes that need subtyping to be 
 fully implemented from outside the HDFS project.
 Block transfers got abstracted to a factory, given that the behavior will be 
 changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
 block transfer functionality got moved to LegacyBlockTransferer; no new 
 configuration is needed to use this class and have the same behavior that is 
 currently present.
 DataNodes have an additional configuration key, 
 DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY, to override the default block 
 transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5566) HA namenode with QJM created from org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider should implement Closeable

2013-11-25 Thread Henry Hung (JIRA)
Henry Hung created HDFS-5566:


 Summary: HA namenode with QJM created from 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider 
should implement Closeable
 Key: HDFS-5566
 URL: https://issues.apache.org/jira/browse/HDFS-5566
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: hadoop-2.2.0
hbase-0.96
Reporter: Henry Hung


When using hbase-0.96 with hadoop-2.2.0, stopping master/regionserver node will 
result in {{Cannot close proxy - is not Closeable or does not provide closeable 
invocation}}.

[Mail 
Archive|https://drive.google.com/file/d/0B22pkxoqCdvWSGFIaEpfR3lnT2M/edit?usp=sharing]

My hadoop-2.2.0 configured as HA namenode with QJM, the configuration is like 
this:
{code:xml}
  <property>
    <name>dfs.nameservices</name>
    <value>hadoopdev</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hadoopdev</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
    <value>fphd9.ctpilot1.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoopdev.nn1</name>
    <value>fphd9.ctpilot1.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
    <value>fphd10.ctpilot1.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoopdev.nn2</name>
    <value>fphd10.ctpilot1.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hadoopdev</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoop/hadoop-data-2/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>fphd1.ctpilot1.com:</value>
  </property>
{code}

I traced the code and found out that when stopping the hbase master node, it 
will try to invoke the close method on the namenode, but the instance created 
from {{org.apache.hadoop.hdfs.NameNodeProxies.createProxy}} with 
failoverProxyProviderClass 
{{org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider}} 
does not implement the Closeable interface.

In the non-HA case, the created instance will be 
{{org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB}}, which 
implements Closeable.

TL;DR:
With hbase connecting to a hadoop HA namenode, when stopping the hbase master 
or regionserver, it couldn't find the {{close}} method to gracefully close the 
namenode session.
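
For context, the failing check is conceptually the one below (simplified illustration, not the exact Hadoop source): RPC proxy shutdown only succeeds when the proxy, or its invocation handler, implements Closeable.

{code}
import java.io.Closeable;
import java.io.IOException;

class ProxyCloseSketch {
  static void stopProxy(Object proxy) throws IOException {
    if (proxy instanceof Closeable) {
      // The non-HA ClientNamenodeProtocolTranslatorPB takes this path.
      ((Closeable) proxy).close();
    } else {
      // The HA ConfiguredFailoverProxyProvider-created proxy ends up here.
      throw new IllegalArgumentException(
          "Cannot close proxy - is not Closeable or does not provide "
          + "closeable invocation");
    }
  }
}
{code}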



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832247#comment-13832247
 ] 

Hadoop QA commented on HDFS-5565:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615732/hdfs-5565-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5567//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5567//console

This message is automatically generated.

 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin, you have to specify `hdfs cacheadmin -help -addDirective` 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, which also error when you provide a dash.
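
A minimal sketch of one way to fix this (hypothetical helper; the attached patch may differ): normalize a leading dash before matching the command name, so both forms resolve to the same help entry.

{code}
class CacheAdminHelpSketch {
  // Accept both "-addDirective" and "addDirective" when looking up help text.
  static String normalizeCommandName(String arg) {
    return arg.startsWith("-") ? arg.substring(1) : arg;
  }
}
{code}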



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832248#comment-13832248
 ] 

Colin Patrick McCabe commented on HDFS-5541:


I don't have any objections to using {{uthash}} instead of {{hsearch}}.  If you 
want to create a JIRA for that I will review it.  Its BSD-style license appears 
to be compatible with Apache.  Please include the file as a header rather than 
adding a dependency if you want to go this route.

I do not think we should change all the C++ style comments since it would 
generate a massive delta and inconvenience users of other platforms.  Plus, 
there is a workaround on AIX.  Simply pass the {{\-qcpluscmt}} flag to get it 
to stop complaining about C++-style comments.  This is another change you could 
make to the {{CMakeLists.txt}} file.  I cannot make this change since I do not 
have access to AIX (it is proprietary) but I will review such a change.

JIRA isn't a question & answer forum. You should probably ask these kinds of 
questions on the hdfs-dev mailing list in the future.

thanks

 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor
 Attachments: pdclibhdfs.zip


 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms and with many different compilers.
 1) The code should be cross-platform (no Linux extras).
 2) The code should compile on standard c89 compilers; the least common 
 denominator rule applies here!! C code with a .c extension should follow the 
 rules of the C standard: all variables must be declared at the beginning of 
 scope, and no (//) comments allowed.
 I just spent a week white-washing the code back to normal C standards so 
 that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used, why do a thread attach? (When threads are not 
 used, all the thread-attach nonsense is a waste of time and a performance 
 killer.)
 2) The JVM init code should not be embedded within the context of every 
 function call. The JVM init code should be in a stand-alone LIBINIT function 
 that is only invoked once. The JVM * and the JNI * should be global 
 variables for use when no threads are utilized.
 3) When threads are utilized, the attach function can use the GLOBAL jvm * 
 created by the LIBINIT (WHICH IS INVOKED ONLY ONCE) and thus safely outside 
 the scope of any LOOP that is using the functions.
 4) Hash table and locking: why? When threads are used, the hash table 
 locking is going to hurt performance. Why not use thread-local storage for 
 the hash table? That way no locking is required, either with or without 
 threads.
 5) FINALLY, Windows compatibility: do not use posix features if they cannot 
 easily be replaced on other platforms!!



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832264#comment-13832264
 ] 

Hudson commented on HDFS-5538:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4793 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4793/])
HDFS-5538. URLConnectionFactory should pick up the SSL related configuration by 
default. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1545491)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/HsftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTimeouts.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestDelegationTokenRemoteFetcher.java


 URLConnectionFactory should pick up the SSL related configuration by default
 

 Key: HDFS-5538
 URL: https://issues.apache.org/jira/browse/HDFS-5538
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0

 Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
 HDFS-5538.002.patch, HDFS-5538.003.patch


 The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, 
 does not pick up any hadoop-specific, SSL-related configuration. Its 
 customers have to set up the ConnectionConfigurator explicitly in order to 
 pick up these configurations. This is less than ideal for HTTPS because 
 whenever the code needs to make an HTTPS connection, it is forced to go 
 through this setup.
 This jira refactors URLConnectionFactory to ease the handling of HTTPS 
 connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now). 
 In particular, instead of loading the SSL configurator statically in 
 SecurityUtil (based on a global configuration about SSL) and determining 
 whether we should set up SSL for a given connection based on whether the SSL 
 configurator is null, we now load the SSL configurator in 
 URLConnectionFactory and determine whether we need to use the configurator 
 to set up an SSL connection based on whether the given URL/connection is 
 https.
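
A hedged sketch of the proposed per-connection decision (illustrative names, with a self-contained stand-in for Hadoop's ConnectionConfigurator; not the actual patch):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

class UrlConnectionFactorySketch {
  // Stand-in for the SSL configurator loaded from the SSL client settings.
  interface ConnectionConfigurator {
    HttpURLConnection configure(HttpURLConnection conn) throws IOException;
  }

  private final ConnectionConfigurator sslConfigurator;

  UrlConnectionFactorySketch(ConnectionConfigurator sslConfigurator) {
    this.sslConfigurator = sslConfigurator;
  }

  HttpURLConnection open(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Decide per connection, based on the scheme, instead of relying on
    // whether a statically loaded configurator happens to be null.
    if ("https".equals(url.getProtocol()) && sslConfigurator != null) {
      conn = sslConfigurator.configure(conn); // trust store, hostname verifier, ...
    }
    return conn;
  }
}
{code}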



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832267#comment-13832267
 ] 

Hadoop QA commented on HDFS-5286:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615742/h5286_20131125.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5566//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5566//console

This message is automatically generated.

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step: adding DirectoryWithQuotaFeature to replace 
 INodeDirectoryWithQuota.
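
The flattening replaces a level of inheritance with composition; a rough sketch of the shape (field and method names assumed from the summary, not taken from the patch):

{code}
// Quota state moves from an INodeDirectoryWithQuota subclass into an
// optional feature object attached to a plain INodeDirectory.
class DirectoryWithQuotaFeature {
  long nsQuota; // namespace quota
  long dsQuota; // diskspace quota
}

class INodeDirectory {
  private DirectoryWithQuotaFeature quotaFeature; // null = no quota set

  boolean isQuotaSet() {
    return quotaFeature != null;
  }

  void addQuotaFeature(long nsQuota, long dsQuota) {
    DirectoryWithQuotaFeature f = new DirectoryWithQuotaFeature();
    f.nsQuota = nsQuota;
    f.dsQuota = dsQuota;
    quotaFeature = f; // quota added without changing the inode's class
  }
}
{code}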



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-25 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832269#comment-13832269
 ] 

Stephen Bovy commented on HDFS-5541:


You are correct, I apologize for cheating a little bit.

Thanks for the help.

 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor
 Attachments: pdclibhdfs.zip


 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms and with many different compilers.
 1) The code should be cross-platform (no Linux extras).
 2) The code should compile on standard c89 compilers; the least common 
 denominator rule applies here!! C code with a .c extension should follow the 
 rules of the C standard: all variables must be declared at the beginning of 
 scope, and no (//) comments allowed.
 I just spent a week white-washing the code back to normal C standards so 
 that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used, why do a thread attach? (When threads are not 
 used, all the thread-attach nonsense is a waste of time and a performance 
 killer.)
 2) The JVM init code should not be embedded within the context of every 
 function call. The JVM init code should be in a stand-alone LIBINIT function 
 that is only invoked once. The JVM * and the JNI * should be global 
 variables for use when no threads are utilized.
 3) When threads are utilized, the attach function can use the GLOBAL jvm * 
 created by the LIBINIT (WHICH IS INVOKED ONLY ONCE) and thus safely outside 
 the scope of any LOOP that is using the functions.
 4) Hash table and locking: why? When threads are used, the hash table 
 locking is going to hurt performance. Why not use thread-local storage for 
 the hash table? That way no locking is required, either with or without 
 threads.
 5) FINALLY, Windows compatibility: do not use posix features if they cannot 
 easily be replaced on other platforms!!



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832277#comment-13832277
 ] 

Fengdong Yu commented on HDFS-5561:
---

I tested it. It works well.

but I have one minor comment:

/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java
{code}
+  void appendReport(StringBuilder sb) {
+    for (int i = 0, len = loggers.size(); i < len; ++i) {
{code}

It can be simplified:
{code}
for (int i = 0; i < loggers.size(); ++i) {
{code}

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream, which is an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832284#comment-13832284
 ] 

Hadoop QA commented on HDFS-5563:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615764/HDFS-5563.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5571//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5571//console

This message is automatically generated.

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5567) CacheAdmin operations not supported with viewfs

2013-11-25 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5567:
-

 Summary: CacheAdmin operations not supported with viewfs
 Key: HDFS-5567
 URL: https://issues.apache.org/jira/browse/HDFS-5567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 3.0.0
Reporter: Stephen Chu


On a federated cluster with viewfs configured, we'll run into the following 
error when using CacheAdmin commands:

{code}
bash-4.1$ hdfs cacheadmin -listPools
Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
viewfs://cluster3/ is not an HDFS file system
at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
bash-4.1$
{code}
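
A hedged illustration of a possible workaround (not a fix for the tool itself): resolve the viewfs path to its backing namespace first, then run HDFS-specific operations against that filesystem. The mount point and hostnames below are hypothetical.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewfsResolveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path p = new Path("viewfs://cluster3/some/dir"); // hypothetical mount
    FileSystem viewFs = p.getFileSystem(conf);
    // resolvePath() follows the mount table to the target namespace,
    // e.g. hdfs://nn1/some/dir, whose FileSystem is a DistributedFileSystem.
    Path resolved = viewFs.resolvePath(p);
    FileSystem target = resolved.getFileSystem(conf);
    System.out.println("Backing filesystem: " + target.getUri());
  }
}
{code}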





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832300#comment-13832300
 ] 

Hadoop QA commented on HDFS-5545:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615730/HDFS-5545.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1554 javac 
compiler warnings (more than the trunk's current 1544 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5568//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5568//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5568//console

This message is automatically generated.

 Allow specifying endpoints for listeners in HttpServer
 --

 Key: HDFS-5545
 URL: https://issues.apache.org/jira/browse/HDFS-5545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch


 Currently HttpServer listens to the HTTP port and provides a method to allow 
 users to add SSL listeners after the server starts. This complicates the 
 logic if the client needs to set up HTTP / HTTPS servers.
 This jira proposes to replace these two methods with the concept of listener 
 endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
 the HttpServer should listen to. This concept simplifies the task of 
 managing the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-25 Thread Vinay (JIRA)
Vinay created HDFS-5568:
---

 Summary: Support inclusion of snapshot paths in Namenode fsck
 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay


Support Fsck to check the snapshot paths also for inconsistency.

Currently Fsck supports snapshot paths only if the given path explicitly refers 
to a snapshot path.

We have seen safemode problems in our clusters which were due to missing blocks 
that were only present inside snapshots, even though hdfs fsck / shows HEALTHY.

So supporting snapshot paths during fsck as well (maybe by default or on 
demand) would be helpful in these cases, instead of specifying each and every 
snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-25 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5568:


Attachment: HDFS-5568.patch

Here is a patch to include snapshot paths as well during the fsck check.

This is on demand: -includeSnapshots should be specified to include snapshot 
paths. If this should be the default, the option can be removed.

Fsck includes only those snapshottable directories which are owned by the 
user; for the super user it includes all of them.

Please review and let me know what improvements/suggestions are required.
Thanks
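
Usage with the proposed option would presumably look like this (illustrative invocation based on the option name above):

{code}
# check the whole namespace, including files that exist only in snapshots
hdfs fsck / -includeSnapshots
{code}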


 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568.patch


 Support Fsck to check the snapshot paths also for inconsistency.
 Currently Fsck supports snapshot paths only if the given path explicitly 
 refers to a snapshot path.
 We have seen safemode problems in our clusters which were due to missing 
 blocks that were only present inside snapshots, even though hdfs fsck / 
 shows HEALTHY.
 So supporting snapshot paths during fsck as well (maybe by default or on 
 demand) would be helpful in these cases, instead of specifying each and 
 every snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-25 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5568:


Status: Patch Available  (was: Open)

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568.patch


 Support Fsck to check the snapshot paths also for inconsistency.
 Currently Fsck supports snapshot paths only if the given path explicitly 
 refers to a snapshot path.
 We have seen safemode problems in our clusters which were due to missing 
 blocks that were only present inside snapshots, even though hdfs fsck / 
 shows HEALTHY.
 So supporting snapshot paths during fsck as well (maybe by default or on 
 demand) would be helpful in these cases, instead of specifying each and 
 every snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832323#comment-13832323
 ] 

Hadoop QA commented on HDFS-2832:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12615762/20131125-HeterogeneousStorage-TestPlan.pdf
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5572//console

This message is automatically generated.

 Enable support for heterogeneous storages in HDFS
 -

 Key: HDFS-2832
 URL: https://issues.apache.org/jira/browse/HDFS-2832
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: 20130813-HeterogeneousStorage.pdf, 
 20131125-HeterogeneousStorage-TestPlan.pdf, 
 20131125-HeterogeneousStorage.pdf, H2832_20131107.patch, editsStored, 
 h2832_20131023.patch, h2832_20131023b.patch, h2832_20131025.patch, 
 h2832_20131028.patch, h2832_20131028b.patch, h2832_20131029.patch, 
 h2832_20131103.patch, h2832_20131104.patch, h2832_20131105.patch, 
 h2832_20131107b.patch, h2832_20131108.patch, h2832_20131110.patch, 
 h2832_20131110b.patch, h2832_2013.patch, h2832_20131112.patch, 
 h2832_20131112b.patch, h2832_20131114.patch, h2832_20131118.patch, 
 h2832_20131119.patch, h2832_20131119b.patch, h2832_20131121.patch, 
 h2832_20131122.patch, h2832_20131122b.patch, h2832_20131123.patch, 
 h2832_20131124.patch


 HDFS currently supports a configuration where storages are a list of 
 directories. Typically each of these directories corresponds to a volume 
 with its own file system. All these directories are homogeneous and 
 therefore identified as a single storage at the namenode. I propose a change 
 from the current model, where a Datanode *is a* storage, to one where a 
 Datanode *is a collection of* storages.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5566) HA namenode with QJM created from org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider should implement Closeable

2013-11-25 Thread Henry Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Hung resolved HDFS-5566.
--

Resolution: Duplicate

Duplicate of 
[HBASE-10029|https://issues.apache.org/jira/browse/HBASE-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]

 HA namenode with QJM created from 
 org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider 
 should implement Closeable
 --

 Key: HDFS-5566
 URL: https://issues.apache.org/jira/browse/HDFS-5566
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: hadoop-2.2.0
 hbase-0.96
Reporter: Henry Hung

 When using hbase-0.96 with hadoop-2.2.0, stopping master/regionserver node 
 will result in {{Cannot close proxy - is not Closeable or does not provide 
 closeable invocation}}.
 [Mail 
 Archive|https://drive.google.com/file/d/0B22pkxoqCdvWSGFIaEpfR3lnT2M/edit?usp=sharing]
 My hadoop-2.2.0 configured as HA namenode with QJM, the configuration is like 
 this:
 {code:xml}
   <property>
     <name>dfs.nameservices</name>
     <value>hadoopdev</value>
   </property>
   <property>
     <name>dfs.ha.namenodes.hadoopdev</name>
     <value>nn1,nn2</value>
   </property>
   <property>
     <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
     <value>fphd9.ctpilot1.com:9000</value>
   </property>
   <property>
     <name>dfs.namenode.http-address.hadoopdev.nn1</name>
     <value>fphd9.ctpilot1.com:50070</value>
   </property>
   <property>
     <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
     <value>fphd10.ctpilot1.com:9000</value>
   </property>
   <property>
     <name>dfs.namenode.http-address.hadoopdev.nn2</name>
     <value>fphd10.ctpilot1.com:50070</value>
   </property>
   <property>
     <name>dfs.namenode.shared.edits.dir</name>
     <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
   </property>
   <property>
     <name>dfs.client.failover.proxy.provider.hadoopdev</name>
     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
   </property>
   <property>
     <name>dfs.ha.fencing.methods</name>
     <value>shell(/bin/true)</value>
   </property>
   <property>
     <name>dfs.journalnode.edits.dir</name>
     <value>/data/hadoop/hadoop-data-2/journal</value>
   </property>
   <property>
     <name>dfs.ha.automatic-failover.enabled</name>
     <value>true</value>
   </property>
   <property>
     <name>ha.zookeeper.quorum</name>
     <value>fphd1.ctpilot1.com:</value>
   </property>
 {code}
 I traced the code and found out that when stopping the hbase master node, it 
 will try to invoke the close method on the namenode, but the instance 
 created from {{org.apache.hadoop.hdfs.NameNodeProxies.createProxy}} with 
 failoverProxyProviderClass 
 {{org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider}} 
 does not implement the Closeable interface.
 In the non-HA case, the created instance will be 
 {{org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB}}, 
 which implements Closeable.
 TL;DR:
 With hbase connecting to a hadoop HA namenode, when stopping the hbase 
 master or regionserver, it couldn't find the {{close}} method to gracefully 
 close the namenode session.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832337#comment-13832337
 ] 

Hadoop QA commented on HDFS-5561:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615767/HDFS-5561.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5569//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5569//console

This message is automatically generated.

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream, which is an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832343#comment-13832343
 ] 

Haohui Mai commented on HDFS-5561:
--

I'm okay with either way. The original patch, however, only calls 
loggers.size() once instead of on every iteration.

You might argue that a trace-based JIT can inline the call and optimize it 
away, but that depends on which JRE / JDK you're using.

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream, which is an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Attachment: h5286_20131125b.patch

h5286_20131125b.patch: fixes the 5 javadoc warnings -- all are about a typo.

 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h5286_20131122.patch, h5286_20131125.patch, 
 h5286_20131125b.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step: adding DirectoryWithQuotaFeature to replace 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5496) Make replication queue initialization asynchronous

2013-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832346#comment-13832346
 ] 

Hadoop QA commented on HDFS-5496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615774/HDFS-5496.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5570//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5570//console

This message is automatically generated.

 Make replication queue initialization asynchronous
 --

 Key: HDFS-5496
 URL: https://issues.apache.org/jira/browse/HDFS-5496
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Kihwal Lee
 Attachments: HDFS-5496.patch


 Today, initialization of replication queues blocks safe mode exit and certain 
 HA state transitions. For a big name space, this can take hundreds of seconds 
 with the FSNamesystem write lock held. During this time, important requests 
 (e.g. initial block reports, heartbeats, etc.) are blocked.
 The effect of delaying the initialization would be not starting replication 
 right away, but I think the benefit outweighs the cost. If we make it 
 asynchronous, the work per iteration should be limited, so that the lock 
 duration is capped.
 If full/incremental block reports and any other requests that modify block 
 state properly perform replication checks while the blocks are scanned and 
 the queues populated in the background, every block will be processed (some 
 may be done twice). The replication monitor should run even before all blocks 
 are processed.
 This will allow the namenode to exit safe mode and start serving immediately 
 even with a big name space. It will also reduce the HA failover latency.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832348#comment-13832348
 ] 

Jing Zhao commented on HDFS-5561:
-

The patch looks good to me. +1.

 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream, which is an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


  1   2   >