[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829762#comment-13829762
 ] 

Kousuke Saruta commented on HDFS-5552:
--

[~wheat9] addressed this issue while I was commenting.

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829760#comment-13829760
 ] 

Kousuke Saruta commented on HDFS-5552:
--

In FSNamesystem#metaSave, there is a similar expression.

{code}
  private void metaSave(PrintWriter out) {
    assert hasWriteLock();
    long totalInodes = this.dir.totalInodes();
    long totalBlocks = this.getBlocksTotal();
    out.println(totalInodes + " files and directories, " + totalBlocks
        + " blocks = " + (totalInodes + totalBlocks)
        + " total filesystem objects");

    blockManager.metaSave(out);
  }
{code}

As you can see, "files and directories, blocks = total filesystem objects" 
means "totalInodes + totalBlocks = (totalInodes + totalBlocks)".
On the other hand, dfshealth.dust.html defines "files and directories, blocks = 
total filesystem object(s)" to mean "{TotalLoad} (FSNamesystem#getTotalLoad) + 
{BlocksTotal} (FSNamesystem#getBlocksTotal) = 
{FilesTotal} (FSNamesystem#getFilesTotal)".

TotalLoad is the number of active xceivers and FilesTotal is the number of 
inodes, so these are different from the values metaSave uses.
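
For illustration only, here is one way the template line could be corrected, 
assuming the {@math} helper from dustjs-helpers is available; this is a hedged 
sketch, not necessarily what the attached patch does:

{code}
{FilesTotal} files and directories, {BlocksTotal} blocks = {@math key="{FilesTotal}" method="add" operand="{BlocksTotal}"/} total filesystem object(s).
{code}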

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5552:
-

Attachment: HDFS-5552.000.patch

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5552:
-

Status: Patch Available  (was: Open)

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HDFS-5552:


Assignee: Haohui Mai

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5552:
-

Attachment: dfshealth-html.png

I have attached a screenshot. It displays "3 files and directories, 0 blocks = 
1 total filesystem object(s)." even though 3 + 0 should give 3.

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
> Attachments: dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5552:


 Summary: Fix wrong information of "Cluster summay" in 
dfshealth.html
 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita


"files and directories" + "blocks" = total filesystem object(s). But wrong 
value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829689#comment-13829689
 ] 

Hadoop QA commented on HDFS-5549:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615254/HDFS-5549.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5535//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5535//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5535//console

This message is automatically generated.

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Blocks transfers got abstracted to a factory given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
> block transfer functionality got moved to LegacyBlockTransferer, no new 
> configuration is needed to use this class and have the same behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5540) Fix TestBlocksWithNotEnoughRacks

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829687#comment-13829687
 ] 

Hadoop QA commented on HDFS-5540:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615263/HDFS-5540.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5536//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5536//console

This message is automatically generated.

> Fix TestBlocksWithNotEnoughRacks
> 
>
> Key: HDFS-5540
> URL: https://issues.apache.org/jira/browse/HDFS-5540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5540.v1.patch
>
>
> TestBlocksWithNotEnoughRacks fails with timed out waiting for corrupt replicas
> java.util.concurrent.TimeoutException: Timed out waiting for corrupt 
> replicas. Waiting for 1, but only found 0
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:351)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:219)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5538:
-

Attachment: HDFS-5538.002.patch

> URLConnectionFactory should pick up the SSL related configuration by default
> 
>
> Key: HDFS-5538
> URL: https://issues.apache.org/jira/browse/HDFS-5538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
> HDFS-5538.002.patch
>
>
> The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY does 
> not pick up any hadoop-specific, SSL-related configuration. Its customers 
> have to set up the ConnectionConfigurator explicitly in order to pick up 
> these configurations.
> This is less than ideal for HTTPS because whenever the code needs to make a 
> HTTPS connection, the code is forced to go through the set up.
> This jira refactors URLConnectionFactory to ease the handling of HTTPS 
> connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5551) Rename "path.based" caching configuration options

2013-11-21 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5551:
-

 Summary: Rename "path.based" caching configuration options
 Key: HDFS-5551
 URL: https://issues.apache.org/jira/browse/HDFS-5551
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


Some configuration options still have the "path.based" moniker, missed during 
the big rename removing this naming convention.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829663#comment-13829663
 ] 

Andrew Wang commented on HDFS-5430:
---

A couple of further notes: I did a couple of missed PATH_BASED renames as 
well, and changed the edit log loading for addDirective, since I think it 
could otherwise allow users to pass in their own directive IDs.

> Support TTL on CacheBasedPathDirectives
> ---
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-block time for the convenience of system 
> administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Status: Patch Available  (was: Open)

> Support TTL on CacheBasedPathDirectives
> ---
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-block time for the convenience of system 
> administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829661#comment-13829661
 ] 

Hadoop QA commented on HDFS-5538:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615270/HDFS-5538.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ipc.TestRPC
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.conf.TestConfiguration
  org.apache.hadoop.ipc.TestRPCCallBenchmark
  org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
  org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
  org.apache.hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps
  org.apache.hadoop.hdfs.TestListFilesInFileContext
  org.apache.hadoop.security.TestRefreshUserMappings
  org.apache.hadoop.hdfs.server.namenode.ha.TestHAWebUI
  org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
  org.apache.hadoop.fs.TestSymlinkHdfsFileContext
  org.apache.hadoop.hdfs.TestDFSClientFailover
  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSZKFailoverController
  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  org.apache.hadoop.hdfs.TestParallelRead
  org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.hdfs.TestBlockMissingException
  org.apache.hadoop.security.TestPermissionSymlinks
  org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
  org.apache.hadoop.hdfs.TestParallelUnixDomainRead
  org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
  org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
  org.apache.hadoop.hdfs.TestLargeBlock
  org.apache.hadoop.fs.TestGlobPaths
  org.apache.hadoop.hdfs.TestCrcCorruption
  org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
  org.apache.hadoop.hdfs.TestFileAppend
  org.apache.hadoop.hdfs.TestPread
  org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
  org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
  org.apache.hadoop.hdfs.TestReplication
  org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
  org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
  org.apache.hadoop.hdfs.TestMissingBlocksAlert
  org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
  org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
  org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
  org.apache.hadoop.hdfs.TestDFSAddressConfig
  org.apache.hadoop.hdfs.web.TestTokenAspect
  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
  org.apache.hadoop.hdfs.TestSmallBlock
  org.apache.hadoop.hdfs.web.TestWebHDFS
  org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite
  org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
  org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
  org.apache.hadoop.hdfs.server.namenode.TestFsck
  org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
  org.apache.hadoop.hdfs.ser

[jira] [Updated] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5538:
-

Attachment: HDFS-5538.001.patch

> URLConnectionFactory should pick up the SSL related configuration by default
> 
>
> Key: HDFS-5538
> URL: https://issues.apache.org/jira/browse/HDFS-5538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch
>
>
> The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY does 
> not pick up any hadoop-specific, SSL-related configuration. Its customers 
> have to set up the ConnectionConfigurator explicitly in order to pick up 
> these configurations.
> This is less than ideal for HTTPS because whenever the code needs to make a 
> HTTPS connection, the code is forced to go through the set up.
> This jira refactors URLConnectionFactory to ease the handling of HTTPS 
> connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-21 Thread sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829644#comment-13829644
 ] 

sathish commented on HDFS-5544:
---

Thanks, Uma, for reviewing the patch.

> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Assignee: sathish
>Priority: Minor
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksumtype as NULL,it is better  to add one unit test 
> case



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5550) Journal Node is not upgrade during HDFS upgrade

2013-11-21 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829643#comment-13829643
 ] 

Fengdong Yu commented on HDFS-5550:
---

{code}
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: 
Fetch of 
http://test1.com:8480/getJournal?jid=test-cluster&segmentTxId=17&storageInfo=-48%3A435001357%3A1385094520024%3ACID-aa689dac-da2f-4ef2-bd71-76e9b4856cd1
 failed with status code 403
Response message:
This node has storage info 
'-43:435001357:0:CID-aa689dac-da2f-4ef2-bd71-76e9b4856cd1' but the requesting 
node expected 
'-48:435001357:1385094520024:CID-aa689dac-da2f-4ef2-bd71-76e9b4856cd1'
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:376)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
at 
org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:489)
at 
org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:483)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:375)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:130)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:170)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:221)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:163)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:116)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:730)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:227)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
{code}

> Journal Node is not upgrade during HDFS upgrade
> ---
>
> Key: HDFS-5550
> URL: https://issues.apache.org/jira/browse/HDFS-5550
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0, 2.2.1
>Reporter: Fengdong Yu
>Priority: Blocker
>
> HDFS upgrade from 2.0.3 to 2.2.0,  but Journal node doesn't upgrade, a 
> directly problem is VERSION file is old.
> so SNN relay edit log through http://hostname:8480/getJournal?*** get 403, 
> because VERSION is mismatched.
> I marked this as Blocker, does that OK?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Attachment: hdfs-5430-1.patch

Patch attached. I tried making a binary one this time for the stored-edits 
test.

This adds a new {{expiryTime}} field to {{CacheDirective}} and {{Info}}. 
Adding and removing via CacheAdmin uses relative times for ease of use, but 
otherwise it's just an absolute {{expiryTime}} timestamp. Once expired, 
directives are no longer cached, but they are not automatically removed; the 
user needs to go in and clean them up, or modify them so they are valid again.

I tested manually by adding directives with expirations and watching them 
time out and be uncached. A unit test is also included.

One further idea for an improvement is putting a max TTL on a CachePool; that 
way, all directives added to the pool eventually expire.
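
A minimal sketch of the semantics described above; every name here is 
illustrative rather than taken from the patch. A relative TTL is converted to 
an absolute timestamp at creation time, and expiry merely suppresses caching 
instead of deleting the directive:

{code}
// Hedged sketch, not the attached patch: an absolute expiry timestamp that
// suppresses caching but leaves the directive in place.
public class CacheDirectiveExpirySketch {
  static final class Directive {
    final String path;
    final long expiryTimeMs;  // absolute wall-clock time
    Directive(String path, long expiryTimeMs) {
      this.path = path;
      this.expiryTimeMs = expiryTimeMs;
    }
    boolean isExpired(long nowMs) { return nowMs >= expiryTimeMs; }
  }

  // CacheAdmin-style convenience: callers pass a relative TTL,
  // which is stored as an absolute timestamp.
  static Directive withRelativeTtl(String path, long ttlMs) {
    return new Directive(path, System.currentTimeMillis() + ttlMs);
  }

  public static void main(String[] args) throws InterruptedException {
    Directive d = withRelativeTtl("/testFile", 100);
    System.out.println("expired? " + d.isExpired(System.currentTimeMillis()));
    Thread.sleep(150);
    // Once expired, the directive is skipped (uncached) but kept until the
    // user removes or modifies it.
    System.out.println("expired? " + d.isExpired(System.currentTimeMillis()));
  }
}
{code}

Run standalone, the second check prints true once the 100ms TTL has elapsed.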

> Support TTL on CacheBasedPathDirectives
> ---
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-block time for the convenience of system 
> administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5550) Journal Node is not upgrade during HDFS upgrade

2013-11-21 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-5550:
-

 Summary: Journal Node is not upgrade during HDFS upgrade
 Key: HDFS-5550
 URL: https://issues.apache.org/jira/browse/HDFS-5550
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, hdfs-client
Affects Versions: 2.2.0, 3.0.0, 2.2.1
Reporter: Fengdong Yu
Priority: Blocker


HDFS upgrade from 2.0.3 to 2.2.0,  but Journal node doesn't upgrade, a directly 
problem is VERSION file is old.

so SNN relay edit log through http://hostname:8480/getJournal?*** get 403, 
because VERSION is mismatched.

If marked this as Blocker, does that OK?





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5550) Journal Node is not upgrade during HDFS upgrade

2013-11-21 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5550:
--

Description: 
HDFS upgrade from 2.0.3 to 2.2.0,  but Journal node doesn't upgrade, a directly 
problem is VERSION file is old.

so SNN relay edit log through http://hostname:8480/getJournal?*** get 403, 
because VERSION is mismatched.

I marked this as Blocker, does that OK?



  was:
HDFS upgrade from 2.0.3 to 2.2.0,  but Journal node doesn't upgrade, a directly 
problem is VERSION file is old.

so SNN relay edit log through http://hostname:8480/getJournal?*** get 403, 
because VERSION is mismatched.

If marked this as Blocker, does that OK?




> Journal Node is not upgrade during HDFS upgrade
> ---
>
> Key: HDFS-5550
> URL: https://issues.apache.org/jira/browse/HDFS-5550
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0, 2.2.1
>Reporter: Fengdong Yu
>Priority: Blocker
>
> HDFS upgrade from 2.0.3 to 2.2.0,  but Journal node doesn't upgrade, a 
> directly problem is VERSION file is old.
> so SNN relay edit log through http://hostname:8480/getJournal?*** get 403, 
> because VERSION is mismatched.
> I marked this as Blocker, does that OK?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829635#comment-13829635
 ] 

Hadoop QA commented on HDFS-5548:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615253/HDFS-5548.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5537//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5537//console

This message is automatically generated.

> Use ConcurrentHashMap in portmap
> 
>
> Key: HDFS-5548
> URL: https://issues.apache.org/jira/browse/HDFS-5548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch
>
>
> Portmap uses a HashMap to store the port mapping. It synchronizes the access 
> of the hash map by locking itself. It can be simplified by using a 
> ConcurrentHashMap.
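
For illustration, a minimal sketch of the simplification described in the 
quoted text; the field and value types are assumed rather than copied from 
Portmap:

{code}
import java.util.concurrent.ConcurrentHashMap;

class PortmapTableSketch {
  // Before (as described): a plain HashMap with every access guarded by
  // synchronized(this). After: ConcurrentHashMap makes put/get/remove
  // thread-safe without external locking.
  private final ConcurrentHashMap<Integer, String> map =
      new ConcurrentHashMap<>();

  void set(int prog, String mapping) { map.put(prog, mapping); }
  String get(int prog) { return map.get(prog); }
  void unset(int prog) { map.remove(prog); }
}
{code}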



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5540) Fix TestBlocksWithNotEnoughRacks

2013-11-21 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5540:


Attachment: HDFS-5540.v1.patch

Attached initial patch; it makes the check more frequent, reducing the poll 
interval from 1000ms to 100ms.

> Fix TestBlocksWithNotEnoughRacks
> 
>
> Key: HDFS-5540
> URL: https://issues.apache.org/jira/browse/HDFS-5540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5540.v1.patch
>
>
> TestBlocksWithNotEnoughRacks fails with timed out waiting for corrupt replicas
> java.util.concurrent.TimeoutException: Timed out waiting for corrupt 
> replicas. Waiting for 1, but only found 0
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:351)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:219)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5540) Fix TestBlocksWithNotEnoughRacks

2013-11-21 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5540:


Status: Patch Available  (was: Open)

> Fix TestBlocksWithNotEnoughRacks
> 
>
> Key: HDFS-5540
> URL: https://issues.apache.org/jira/browse/HDFS-5540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>
> TestBlocksWithNotEnoughRacks fails with timed out waiting for corrupt replicas
> java.util.concurrent.TimeoutException: Timed out waiting for corrupt 
> replicas. Waiting for 1, but only found 0
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:351)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:219)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829625#comment-13829625
 ] 

Hadoop QA commented on HDFS-5312:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615243/HDFS-5312.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5533//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5533//console

This message is automatically generated.

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. The task of figuring out which scheme 
> the authority is supposed to use is left to the caller. Unsurprisingly, the 
> task is cumbersome, and the code performs it in an inconsistent way.
> This JIRA proposes to return a URI object instead of a string, so that the 
> scheme is an inherent part of the return value, which eliminates the task of 
> figuring out the scheme by design. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5540) Fix TestBlocksWithNotEnoughRacks

2013-11-21 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829622#comment-13829622
 ] 

Binglin Chang commented on HDFS-5540:
-

Read the 
[log|https://builds.apache.org/job/PreCommit-HDFS-Build/5504//testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlocksWithNotEnoughRacks/testCorruptBlockRereplicatedAcrossRacks/]

{code}
2013-11-20 17:29:58,638 INFO  DataNode.clienttrace 
(BlockSender.java:sendBlock(734)) - src: /127.0.0.1:45980, dest: 
/127.0.0.1:47450, bytes: 516, op: HDFS_READ, cliID: 
DFSClient_NONMAPREDUCE_1758168951_1, offset: 0, srvID: 
DS-655102145-67.195.138.24-45980-1384968592304, blockid: 
BP-62019746-67.195.138.24-1384968591859:blk_1073741825_1001, duration: 244566
Waiting for 1 corrupt replicas

2013-11-20 17:29:58,660 INFO  BlockStateChange 
(CorruptReplicasMap.java:addToCorruptReplicasMap(88)) - BLOCK 
NameSystem.addToCorruptReplicasMap: blk_1073741825 added as corrupt on 
127.0.0.1:41043 by localhost/127.0.0.1 because client machine reported it

2013-11-20 17:29:59,346 INFO  datanode.DataNode 
(DataXceiver.java:writeBlock(594)) - Received 
BP-62019746-67.195.138.24-1384968591859:blk_1073741825_1001 src: 
/127.0.0.1:39752 dest: /127.0.0.1:49340 of size 512
2013-11-20 17:29:59,347 INFO  BlockStateChange 
(BlockManager.java:logAddStoredBlock(2275)) - BLOCK* addStoredBlock: blockMap 
updated: 127.0.0.1:49340 is added to blk_1073741825_1001 size 512
2013-11-20 17:29:59,347 INFO  BlockStateChange 
(BlockManager.java:invalidateBlock(1092)) - BLOCK* invalidateBlock: 
blk_1073741825_1001(same as stored) on 127.0.0.1:41043

2013-11-20 17:29:59,640 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditMessage(7373)) - allowed=true ugi=jenkins 
(auth:SIMPLE) ip=/127.0.0.1 cmd=open  src=/testFile dst=null  perm=null
Waiting for 1 corrupt replicas
{code}

From the log we can see that DFSTestUtil.waitCorruptReplicas checks for 
corrupt replicas every second, but HDFS found and recovered the block within 
that same second, so DFSTestUtil.waitCorruptReplicas never observed the 
corrupt replica, causing the timeout.
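
As a minimal sketch of the race (this is not the real DFSTestUtil code, just 
the polling pattern): a probe sampled once per 1000ms can miss a state that 
appears and disappears within a single interval, which is why shortening the 
interval to 100ms makes the wait reliable:

{code}
import java.util.function.BooleanSupplier;

public class PollIntervalSketch {
  // Returns true if the probe was ever observed true before the timeout.
  // With pollMs = 1000, a condition that becomes true and then false again
  // inside one interval is never observed; pollMs = 100 narrows that window.
  static boolean waitFor(BooleanSupplier probe, long timeoutMs, long pollMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (probe.getAsBoolean()) {
        return true;
      }
      Thread.sleep(pollMs);
    }
    return false;  // analogous to the TimeoutException thrown by the test
  }
}
{code}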



> Fix TestBlocksWithNotEnoughRacks
> 
>
> Key: HDFS-5540
> URL: https://issues.apache.org/jira/browse/HDFS-5540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>
> TestBlocksWithNotEnoughRacks fails with timed out waiting for corrupt replicas
> java.util.concurrent.TimeoutException: Timed out waiting for corrupt 
> replicas. Waiting for 1, but only found 0
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:351)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:219)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829621#comment-13829621
 ] 

Vinay commented on HDFS-2882:
-

OK, I agree it's a misconfiguration by the user. In these cases the datanode 
should decide whether to continue in a failed state or to exit. 

bq. I definitely think there are issues around DataNode / block pool lifecycle, 
but I don't have a good handle on what they are yet. I need to review what the 
expected behavior is in these scenarios.
Sure!

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829617#comment-13829617
 ] 

Colin Patrick McCabe commented on HDFS-2882:


bq. These are not exactly mis-configurations, May be possible cases where we 
need to decide datanode should go down/keep running.

No, they are exactly misconfigurations.  Remember them again:

{code}
Scenario 1: user configures DN to point to a single cluster which doesn't match 
its storage
Scenario 2: user configures DN to point to one NN. The user adds an additional 
nameservice to the config and issues a -refreshNamenodes call. The newly added 
nameservice is from the wrong cluster.
Scenario 3: user configures DN to point to two different NNs which are on 
different clusters, and starts up.
{code}

None of those are correct configurations.

bq. Another issue HDFS-5529 raised by Brahma is due to disk error.

That is a good point.  Based on the log message he is seeing, the DN definitely 
is continuing on for some length of time after a block pool has failed, which 
seems related to this bug.

I definitely think there are issues around DataNode / block pool lifecycle, but 
I don't have a good handle on what they are yet.  I need to review what the 
expected behavior is in these scenarios.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829613#comment-13829613
 ] 

Vinay commented on HDFS-5549:
-

Hi [~icorderi],
May I know the use cases for this improvement?

Do you want to change the DataNode-to-DataNode block transfer to some other 
implementation?

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Blocks transfers got abstracted to a factory given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
> block transfer functionality got moved to LegacyBlockTransferer, no new 
> configuration is needed to use this class and have the same behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4516) Client crash after block allocation and NN switch before lease recovery for the same file can cause readers to fail forever

2013-11-21 Thread Han Xiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829594#comment-13829594
 ] 

Han Xiao commented on HDFS-4516:


The revision is elegant. We benefit from the modified fsync, to which the 
block-length parameter has been added.
Nice work.

> Client crash after block allocation and NN switch before lease recovery for 
> the same file can cause readers to fail forever
> ---
>
> Key: HDFS-4516
> URL: https://issues.apache.org/jira/browse/HDFS-4516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Uma Maheswara Rao G
>Assignee: Vinay
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-4516-Test.patch, HDFS-4516.patch, HDFS-4516.patch, 
> HDFS-4516.patch, HDFS-4516.txt
>
>
> If client crashes just after allocating block( blocks not yet created in DNs) 
> and NN also switched after this, then new Namenode will not know about locs.
> Further details will be in comment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Description: 
Similar to the case of INodeFile (HFDS-5285), we should also flatten the 
INodeDirectory hierarchy.

This is the first step to add DirectoryWithQuotaFeature for replacing 
INodeDirectoryWithQuota.

  was:Similar to the case of INodeFile (HFDS-5285), we should also flatten the 
INodeDirectory hierarchy.

Summary: Flatten INodeDirectory hierarchy: add 
DirectoryWithQuotaFeature  (was: Flatten INodeDirectory hierarchy)

> Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
> ---
>
> Key: HDFS-5286
> URL: https://issues.apache.org/jira/browse/HDFS-5286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>
> Similar to the case of INodeFile (HFDS-5285), we should also flatten the 
> INodeDirectory hierarchy.
> This is the first step to add DirectoryWithQuotaFeature for replacing 
> INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-21 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Blocks transfers got abstracted to a factory given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
> block transfer functionality got moved to LegacyBlockTransferer, no new 
> configuration is needed to use this class and have the same behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-21 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: HDFS-5549.patch

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Blocks transfers got abstracted to a factory given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
> block transfer functionality got moved to LegacyBlockTransferer, no new 
> configuration is needed to use this class and have the same behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-21 Thread Ignacio Corderi (JIRA)
Ignacio Corderi created HDFS-5549:
-

 Summary: Support for implementing custom FsDatasetSpi from outside 
the project
 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Ignacio Corderi


Visibility for multiple methods and a few classes got changed to public to 
allow FsDatasetSpi and all the related classes that need subtyping to be 
fully implemented from outside the HDFS project.

Block transfers got abstracted to a factory, given that the behavior will be 
changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
block transfer functionality moved to LegacyBlockTransferer; no new 
configuration is needed to use this class and keep the behavior that is 
currently present.

DataNodes have an additional configuration key 
DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block transfer 
behavior.
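
As a hedged sketch of the override mechanism described above: the key name 
comes from this description, while the interface shape, the key's string 
value, and the loader are assumptions made for illustration.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Assumed shape of the factory interface.
interface BlockTransfererFactory {
  Runnable newTransferer(Configuration conf);
}

// Default preserving the existing DataNode-to-DataNode transfer behavior.
class LegacyBlockTransfererFactory implements BlockTransfererFactory {
  @Override
  public Runnable newTransferer(Configuration conf) {
    return () -> { /* existing block transfer path */ };
  }
}

class BlockTransfererLoader {
  // String value assumed for illustration.
  static final String DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY =
      "dfs.datanode.blocktransferer.factory";

  static BlockTransfererFactory load(Configuration conf) {
    // If the key is unset, the legacy factory is used, matching
    // "no new configuration is needed" above.
    Class<? extends BlockTransfererFactory> clazz = conf.getClass(
        DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY,
        LegacyBlockTransfererFactory.class,
        BlockTransfererFactory.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}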



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5548:
-

Attachment: HDFS-5548.001.patch

> Use ConcurrentHashMap in portmap
> 
>
> Key: HDFS-5548
> URL: https://issues.apache.org/jira/browse/HDFS-5548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch
>
>
> Portmap uses a HashMap to store the port mapping. It synchronizes the access 
> of the hash map by locking itself. It can be simplified by using a 
> ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderContruction Feature

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829566#comment-13829566
 ] 

Hudson commented on HDFS-5285:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4785 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4785/])
HDFS-5285. Flatten INodeFile hierarchy: Replace INodeFileUnderConstruction and 
INodeFileUnderConstructionWithSnapshot with FileUnderContructionFeature.  
Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544389)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Flatten INodeFile hierarchy: Add UnderConstruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java does not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
> - The number of classes is exponential in the number of features.  Currently, 
> there are only two features, UnderConstruction and WithSnapshot, so the number 
> of classes is 2^2 = 4.  It is hard to add one more feature since the number 
> of classes would become 2^3 = 8.
> As a first step, we implement an Under-Construction feature to replace 
> INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot in this 
> jira.

[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderConstruction Feature

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829557#comment-13829557
 ] 

Vinay commented on HDFS-5285:
-

+1, the patch looks good, Jing.


> Flatten INodeFile hierarchy: Add UnderConstruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java does not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
> - The number of classes is exponential in the number of features.  Currently, 
> there are only two features, UnderConstruction and WithSnapshot, so the number 
> of classes is 2^2 = 4.  It is hard to add one more feature since the number 
> of classes would become 2^3 = 8.
> As a first step, we implement an Under-Construction feature to replace 
> INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot in this 
> jira.
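
A minimal sketch of the composition idea behind this sub-task; the names echo the classes in the commit above, but the code is illustrative, not the actual Hadoop implementation:

{code}
// One INodeFile class plus optional per-feature objects, instead of one
// subclass per feature combination.
class INodeFile {
  interface Feature {}

  private Feature[] features = new Feature[0];

  void addFeature(Feature f) {
    Feature[] next = java.util.Arrays.copyOf(features, features.length + 1);
    next[features.length] = f;
    features = next;
  }

  <T extends Feature> T getFeature(Class<T> clazz) {
    for (Feature f : features) {
      if (clazz.isInstance(f)) {
        return clazz.cast(f);
      }
    }
    return null;
  }

  boolean isUnderConstruction() {
    return getFeature(FileUnderConstructionFeature.class) != null;
  }
}

// Adding a third feature now costs one new class rather than doubling the
// size of the inheritance hierarchy.
class FileUnderConstructionFeature implements INodeFile.Feature {
  private final String clientName; // the lease holder

  FileUnderConstructionFeature(String clientName) {
    this.clientName = clientName;
  }
}
{code}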



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderConstruction Feature

2013-11-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5285:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Jing!

Thanks also to Vinay for reviewing this.

> Flatten INodeFile hierarchy: Add UnderConstruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java does not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
> - The number of classes is exponential in the number of features.  Currently, 
> there are only two features, UnderConstruction and WithSnapshot, so the number 
> of classes is 2^2 = 4.  It is hard to add one more feature since the number 
> of classes would become 2^3 = 8.
> As a first step, we implement an Under-Construction feature to replace 
> INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot in this 
> jira.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829553#comment-13829553
 ] 

Hadoop QA commented on HDFS-5548:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615193/HDFS-5548.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
-2 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-nfs:

  org.apache.hadoop.portmap.TestPortmap

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5534//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5534//console

This message is automatically generated.

> Use ConcurrentHashMap in portmap
> 
>
> Key: HDFS-5548
> URL: https://issues.apache.org/jira/browse/HDFS-5548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5548.000.patch
>
>
> Portmap uses a HashMap to store the port mapping and synchronizes access 
> to the map by locking itself. It can be simplified by using a 
> ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829549#comment-13829549
 ] 

Vinay commented on HDFS-2882:
-

bq. All of the scenarios Todd outlined were misconfigurations.
These are not exactly misconfigurations; they may be cases where we need to 
decide whether the datanode should go down or keep running. And even if it is a 
misconfiguration, then the datanode should definitely go down, right?

bq. Do you believe this problem can occur just because a directory is 
unwritable?
Yes, I do. In Todd's case the issue came from a full disk. Another issue, 
HDFS-5529, raised by [~brahma], is due to a disk error. 

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Refactor DFSUtil#getInfoServer to return a URI

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Attachment: HDFS-5312.003.patch

> Refactor DFSUtil#getInfoServer to return a URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server, leaving the caller to figure out which 
> scheme the authority is supposed to use. Unsurprisingly, that task is 
> cumbersome, and the code performs it inconsistently.
> This JIRA proposes to return a URI object instead of a string, so that the 
> scheme is an inherent part of the return value, which eliminates the task of 
> figuring out the scheme by design. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5548:
-

Status: Patch Available  (was: Open)

> Use ConcurrentHashMap in portmap
> 
>
> Key: HDFS-5548
> URL: https://issues.apache.org/jira/browse/HDFS-5548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5548.000.patch
>
>
> Portmap uses a HashMap to store the port mapping and synchronizes access 
> to the map by locking itself. It can be simplified by using a 
> ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-21 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5546:


Assignee: Kousuke Saruta

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Assignee: Kousuke Saruta
>Priority: Minor
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF
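
A minimal sketch of one way to tolerate step 3, using the public FileSystem API (illustrative only; the eventual fix may differ):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class RecursiveLister {
  // If a directory disappears between getFileStatus() and listStatus(),
  // report it and keep walking instead of aborting the whole listing.
  void listRecursively(FileSystem fs, Path dir) throws IOException {
    FileStatus[] children;
    try {
      children = fs.listStatus(dir);
    } catch (FileNotFoundException e) {
      // dir was moved or removed concurrently; skip it
      System.err.println("ls: " + dir + ": No such file or directory");
      return;
    }
    for (FileStatus child : children) {
      System.out.println(child.getPath());
      if (child.isDirectory()) {
        listRecursively(fs, child.getPath());
      }
    }
  }
}
{code}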



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-21 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5546:
-

Assignee: (was: Kousuke Saruta)

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829504#comment-13829504
 ] 

Colin Patrick McCabe commented on HDFS-2882:


All of the scenarios Todd outlined were misconfigurations.  Do you believe this 
problem can occur just because a directory is unwritable?

I will try to reproduce this tomorrow with an HA cluster.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5407:
-

Fix Version/s: 2.2.1

> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5407:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829486#comment-13829486
 ] 

Hadoop QA commented on HDFS-2832:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615196/h2832_20131121.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 45 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
-2 warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5532//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5532//console

This message is automatically generated.

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4506) In branch-1, HDFS short circuit fails non-transparently when user does not have unix permissions

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829474#comment-13829474
 ] 

Kousuke Saruta commented on HDFS-4506:
--

getLocalBlockReader calls BlockReaderLocal.newBlockReader, and newBlockReader 
accesses the local block file. When we access local files with FileInputStream 
(and its subclasses) without proper permission, a FileNotFoundException is 
thrown. So, could we modify it as follows?

{code}
  private BlockReader getLocalBlockReader(Configuration conf,
      String src, Block blk, Token<BlockTokenIdentifier> accessToken,
      DatanodeInfo chosenNode, int socketTimeout, long offsetIntoBlock)
      throws InvalidToken, IOException {
    try {
      return BlockReaderLocal.newBlockReader(conf, src, blk, accessToken,
          chosenNode, socketTimeout, offsetIntoBlock, blk.getNumBytes()
          - offsetIntoBlock, connectToDnViaHostname);
-   } catch (RemoteException re) {
-     throw re.unwrapRemoteException(InvalidToken.class,
-         AccessControlException.class);
+   } catch (FileNotFoundException fe) {
+     // no unix permission on the local block file: report an access error
+     throw new AccessControlException(fe);
+   } catch (IOException ie) {
+     throw ie;
    }
  }
{code}

> In branch-1, HDFS short circuit fails non-transparently when user does not 
> have unix permissions
> 
>
> Key: HDFS-4506
> URL: https://issues.apache.org/jira/browse/HDFS-4506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 1.1.1
>Reporter: Enis Soztutar
>
> We found a case where, if the short circuit user name is configured 
> correctly but the user does not have enough permissions in unix, DFS 
> operations fail with an IOException rather than silently failing over to 
> reading through the datanode. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Refactor DFSUtil#getInfoServer to return a URI

2013-11-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829469#comment-13829469
 ] 

Jing Zhao commented on HDFS-5312:
-

The patch looks good to me. Some comments:
# With the patch the fsimage can only be transferred through http, since "http" 
is hard-coded when generating the URL. I guess you plan to fix this in 
HDFS-5311 and HDFS-5536? Another option here is to still use 
HttpConfig#getSchemePrefix and replace the call in later jiras.
# Nit: in DFSUtil#getInfoServer, it would be better to change the following code 
to "http".equals(scheme)
{code}
+if (scheme.equals("http")) {
+  authority = getSuffixedConf(conf, DFS_NAMENODE_HTTP_ADDRESS_KEY,
+  DFS_NAMENODE_HTTP_ADDRESS_DEFAULT, suffixes);
+} else if (scheme.equals("https")) {
{code}
Also, let's avoid using string literals directly here. By making this change,
we can also remove the Preconditions check (see the sketch below).
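
A sketch of the suggested shape; the https constant names are assumed to mirror the http ones in the snippet above:

{code}
private static final String HTTP_SCHEME = "http";
private static final String HTTPS_SCHEME = "https";

// Literal-on-the-left equals() is null-safe, so the Preconditions null
// check on scheme becomes unnecessary.
if (HTTP_SCHEME.equals(scheme)) {
  authority = getSuffixedConf(conf, DFS_NAMENODE_HTTP_ADDRESS_KEY,
      DFS_NAMENODE_HTTP_ADDRESS_DEFAULT, suffixes);
} else if (HTTPS_SCHEME.equals(scheme)) {
  authority = getSuffixedConf(conf, DFS_NAMENODE_HTTPS_ADDRESS_KEY,
      DFS_NAMENODE_HTTPS_ADDRESS_DEFAULT, suffixes);
}
{code}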

> Refactor DFSUtil#getInfoServer to return a URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server, leaving the caller to figure out which 
> scheme the authority is supposed to use. Unsurprisingly, that task is 
> cumbersome, and the code performs it inconsistently.
> This JIRA proposes to return a URI object instead of a string, so that the 
> scheme is an inherent part of the return value, which eliminates the task of 
> figuring out the scheme by design. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5532) Enable the webhdfs by default to support new HDFS web UI

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828927#comment-13828927
 ] 

Hudson commented on HDFS-5532:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1615 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1615/])
HDFS-5532. Enable the webhdfs by default to support new HDFS web UI. 
Contributed by Vinay. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544047)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> Enable the webhdfs by default to support new HDFS web UI
> 
>
> Key: HDFS-5532
> URL: https://issues.apache.org/jira/browse/HDFS-5532
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 2.3.0
>
> Attachments: HDFS-5532.patch, HDFS-5532.patch
>
>
> Recently, in HDFS-5444, the new HDFS web UI was made the default, but it 
> needs webhdfs to be enabled. 
> WebHDFS is disabled by default. Let's enable it by default to support the 
> new, really cool web UI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829453#comment-13829453
 ] 

Hudson commented on HDFS-5407:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4783 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4783/])
HDFS-5407. Fix typos in DFSClientCache. Contributed by Haohui Mai (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544362)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829452#comment-13829452
 ] 

Vinay commented on HDFS-2882:
-

Hi Colin, 
Thanks for taking a look at the patch, and sorry for confusing you.

Yes, it is easy to reproduce, but only in an HA installation:
1. Make one of the data directories unwritable.
2. Restart the datanode.

Block pool initialization will fail for the first namenode the datanode 
connects to, and that BPServiceActor will exit. For the second namenode, 
however, it will not even try to initialize the block pool, since the namespace 
info is no longer null, so it keeps trying to send heartbeats and throws NPEs 
continuously (a sketch of this sequence follows).

Todd suggested three scenarios to be handled in this case, and he proposed an 
initial patch; I just continued that approach. 
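
A self-contained sketch of that sequence (simplified stand-ins, not the real DataNode classes):

{code}
import java.io.IOException;

// The first actor sets the shared namespace info and then fails storage
// initialization; the second actor sees nsInfo != null, skips initialization,
// and later dereferences the never-initialized storage.
class BlockPoolSketch {
  static class NamespaceInfo {}
  static class Storage {}

  volatile NamespaceInfo nsInfo; // shared between the two actors (HA)
  volatile Storage bpStorage;    // stays null if initialization failed

  synchronized void connectActor() throws IOException {
    if (nsInfo == null) {
      nsInfo = new NamespaceInfo();
      bpStorage = initStorage(); // throws if a data dir is unwritable
    }
    // the second actor reaches here with nsInfo != null and bpStorage == null
  }

  Storage initStorage() throws IOException {
    throw new IOException("Mkdirs failed to create ...");
  }

  void heartbeat() {
    bpStorage.toString(); // NPE for the second actor, repeated forever
  }
}
{code}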

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5407:
-

Priority: Trivial  (was: Minor)

> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829448#comment-13829448
 ] 

Brandon Li commented on HDFS-5407:
--

I've committed the patch. Thank you, Haohui, for the contribution! 

> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5542:
-

Attachment: (was: h5542_20131121.patch)

> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5542_20131121.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5288:
-

Fix Version/s: 2.2.1

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.2.1
>
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5288:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829444#comment-13829444
 ] 

Brandon Li commented on HDFS-5288:
--

I've committed the patch. Thank you, Haohui, for the contribution! 

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.2.1
>
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5542:
-

Attachment: h5542_20131121.patch

> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5542_20131121.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5542:
-

Attachment: h5542_20131121.patch

h5542_20131121.patch: also changes the following:
- renames test methods with a "ForTesting" suffix;
- throws IllegalStateException if conversion to or from PB fails;
- changes DatanodeStorageInfo.state to final (we assume a storage won't change 
state);
- replaces AbstractList and ArrayList with List;
- throws IllegalArgumentException if it fails to parse a storage location 
specified in the conf;
- adds \[default = DISK] to DatanodeStorageProto.

> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5542_20131121.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829438#comment-13829438
 ] 

Hudson commented on HDFS-5288:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4782 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4782/])
HDFS-5288. Close idle connections in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544352)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestPortmapRegister.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5547) Fix build break after merge from trunk to HDFS-2832

2013-11-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5547.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks for the quick look, Jing! I committed it to the HDFS-2832 branch.

> Fix build break after merge from trunk to HDFS-2832
> ---
>
> Key: HDFS-5547
> URL: https://issues.apache.org/jira/browse/HDFS-5547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: h5547.01.patch
>
>
> Auto-merge from trunk to HDFS-2832 left some bad imports in PBHelper.java.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5547) Fix build break after merge from trunk to HDFS-2832

2013-11-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829398#comment-13829398
 ] 

Jing Zhao commented on HDFS-5547:
-

+1

> Fix build break after merge from trunk to HDFS-2832
> ---
>
> Key: HDFS-5547
> URL: https://issues.apache.org/jira/browse/HDFS-5547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: h5547.01.patch
>
>
> Auto-merge from trunk to HDFS-2832 left some bad imports in PBHelper.java.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5547) Fix build break after merge from trunk to HDFS-2832

2013-11-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-5547:


Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2832

> Fix build break after merge from trunk to HDFS-2832
> ---
>
> Key: HDFS-5547
> URL: https://issues.apache.org/jira/browse/HDFS-5547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: h5547.01.patch
>
>
> Auto-merge from trunk to HDFS-2832 left some bad imports in PBHelper.java.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderConstruction Feature

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829389#comment-13829389
 ] 

Hadoop QA commented on HDFS-5285:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615172/HDFS-5285.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5530//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5530//console

This message is automatically generated.

> Flatten INodeFile hierarchy: Add UnderConstruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java does not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
> - The number of classes is exponential in the number of features.  Currently, 
> there are only two features, UnderConstruction and WithSnapshot, so the number 
> of classes is 2^2 = 4.  It is hard to add one more feature since the number 
> of classes would become 2^3 = 8.
> As a first step, we implement an Under-Construction feature to replace 
> INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot in this 
> jira.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829387#comment-13829387
 ] 

Hadoop QA commented on HDFS-5545:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615165/HDFS-5545.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1556 javac 
compiler warnings (more than the trunk's current 1544 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.yarn.webapp.TestWebApp
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync
  
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServer
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestLocalResourcesTrackerImpl
  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
  
org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5529//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5529//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5529//console

This message is automatically generated.

> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5545.000.patch
>
>
> Currently HttpServer listens on an HTTP port and provides a method that 
> allows users to add an SSL listener after the server starts. This complicates 
> the logic when a client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two mechanisms with the concept of 
> listener endpoints. A listener endpoint is a URI (i.e., scheme + host + port) 
> that the HttpServer should listen on. This concept simplifies the task of 
> managing the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829371#comment-13829371
 ] 

Hadoop QA commented on HDFS-5288:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615192/HDFS-5288.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5531//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5531//console

This message is automatically generated.

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2832:


Attachment: h2832_20131121.patch

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch
>
>
> HDFS currently supports configuration where storages are a list of 
> directories. Typically each of these directories correspond to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose, change to the 
> current model where Datanode * is a * storage, to Datanode * is a collection 
> * of strorages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829367#comment-13829367
 ] 

Brandon Li commented on HDFS-5407:
--

+1

> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5407.000.patch
>
>
> in DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5548:
-

Attachment: HDFS-5548.000.patch

> Use ConcurrentHashMap in portmap
> 
>
> Key: HDFS-5548
> URL: https://issues.apache.org/jira/browse/HDFS-5548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5548.000.patch
>
>
> Portmap uses a HashMap to store the port mapping and synchronizes access 
> to the map by locking itself. It can be simplified by using a 
> ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-21 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5548:


 Summary: Use ConcurrentHashMap in portmap
 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5548.000.patch

Portmap uses a HashMap to store the port mapping and synchronizes access to 
the map by locking itself. It can be simplified by using a ConcurrentHashMap, 
as sketched below.
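
A minimal sketch of the simplification (illustrative, not the actual RpcProgramPortmap code):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Before: a HashMap guarded by synchronized(this) around every access.
// After: a ConcurrentHashMap handles its own synchronization, so the
// explicit locking disappears.
class PortmapTable {
  private final ConcurrentMap<String, Integer> map =
      new ConcurrentHashMap<String, Integer>();

  void set(String program, int port) {
    map.put(program, port);
  }

  Integer get(String program) {
    return map.get(program); // no lock needed for a consistent read
  }

  void unset(String program) {
    map.remove(program);
  }
}
{code}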



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5288:
-

Attachment: HDFS-5288.001.patch

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4516) Client crash after block allocation and NN switch before lease recovery for the same file can cause readers to fail forever

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828920#comment-13828920
 ] 

Hudson commented on HDFS-4516:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1589/])
HDFS-4516. Client crash after block allocation and NN switch before lease 
recovery for the same file can cause readers to fail forever. Contributed by 
Vinay. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543829)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSClientAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


> Client crash after block allocation and NN switch before lease recovery for 
> the same file can cause readers to fail forever
> ---
>
> Key: HDFS-4516
> URL: https://issues.apache.org/jira/browse/HDFS-4516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Uma Maheswara Rao G
>Assignee: Vinay
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-4516-Test.patch, HDFS-4516.patch, HDFS-4516.patch, 
> HDFS-4516.patch, HDFS-4516.txt
>
>
> If the client crashes just after allocating a block (blocks not yet created 
> on the DNs) and the NN also switches over after this, then the new Namenode 
> will not know about the block locations.
> Further details are in the comments.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5525) Inline dust templates

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828922#comment-13828922
 ] 

Hudson commented on HDFS-5525:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1589/])
HDFS-5525. Inline dust templates for new Web UI. Contributed by Haohui Mai. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543895)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfs-dust.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.dust.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer-block-info.dust.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.dust.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css


> Inline dust templates
> -
>
> Key: HDFS-5525
> URL: https://issues.apache.org/jira/browse/HDFS-5525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png
>
>
> Currently the dust templates are stored as separate files on the server side. 
> The web UI has to make separate HTTP requests to load the templates, which 
> increases network overhead and page load latency.
> This jira proposes to inline all dust templates with the main HTML file, so 
> that the page can be loaded faster.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829100#comment-13829100
 ] 

Hadoop QA commented on HDFS-5544:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615131/HDFS-5544.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.TestPathBasedCacheRequests
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5526//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5526//console

This message is automatically generated.

> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Assignee: sathish
>Priority: Minor
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksum type as NULL, it is better to add a unit test 
> case.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5531) Combine the getNsQuota() and getDsQuota() methods in INode

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828914#comment-13828914
 ] 

Hudson commented on HDFS-5531:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1589/])
HDFS-5531. Combine the getNsQuota() and getDsQuota() methods in INode. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544018)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Content.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/EnumCounters.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java


> Combine the getNsQuota() and getDsQuota() methods in INode
> --
>
> Key: HDFS-5531
> URL: https://issues.apache.org/jira/browse/HDFS-5531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: h5531_20131119.patch, h5531_20131120.patch, 
> h5531_20131120b.patch
>
>
> I suggest to combine these two methods into 
> {code}
> public Quota.Counts getQuotaCounts()
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-3987) Support webhdfs over HTTPS

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828919#comment-13828919
 ] 

Hudson commented on HDFS-3987:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1589/])
HDFS-3987. Support webhdfs over HTTPS. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543962)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSymlinkHdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTimeouts.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> Support webhdfs over HTTPS
> --
>
> Key: HDFS-3987
> URL: https://issues.apache.org/jira/browse/HDFS-3987
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-3987.000.patch, HDFS-3987.001.patch, 
> HDFS-3987.002.patch, HDFS-3987.003.patch, HDFS-3987.004.patch, 
> HDFS-3987.005.patch, HDFS-3987.006.patch, HDFS-3987.007.patch, 
> HDFS-3987.008.patch, HDFS-3987.009.patch
>
>
> This is a follow-up of HDFS-3983.
> We should have a new filesystem client impl/binding for encrypted WebHDFS, 
> i.e. *webhdfss://*.
> On the server side, for webhdfs and httpfs we should only need to start the 
> service on a secured (HTTPS) endpoint.
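
For illustration, client-side usage of the encrypted binding could look like the 
sketch below, assuming the *swebhdfs://* scheme implied by SWebHdfsFileSystem.java 
in the commit above (the hostname, port and scheme here are assumptions, not text 
from this JIRA):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SWebHdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Bind a FileSystem over HTTPS-backed WebHDFS; 50470 is the
    // customary HTTPS port and the hostname is a placeholder.
    FileSystem fs = FileSystem.get(
        URI.create("swebhdfs://namenode.example.com:50470/"), conf);
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}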



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5014) BPOfferService#processCommandFromActor() synchronization on namenode RPC call delays IBR to Active NN, if Standby NN is unstable

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828921#comment-13828921
 ] 

Hudson commented on HDFS-5014:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1589/])
HDFS-5014. Process register commands without holding BPOfferService lock. 
Contributed by Vinay. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1543861)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> BPOfferService#processCommandFromActor() synchronization on namenode RPC call 
> delays IBR to Active NN, if Standby NN is unstable
> ---
>
> Key: HDFS-5014
> URL: https://issues.apache.org/jira/browse/HDFS-5014
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-5014-v2.patch, HDFS-5014-v2.patch, 
> HDFS-5014-v2.patch, HDFS-5014-v2.patch, HDFS-5014-v2.patch, 
> HDFS-5014-v2.patch, HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch, 
> HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch
>
>
> In one of our clusters, the following happened and HDFS writes failed.
> 1. The Standby NN was unstable and continuously restarting due to some errors, 
> but the Active NN was stable.
> 2. An MR job was writing files.
> 3. At some point the SNN went down again while the datanodes were processing 
> the REGISTER command for the SNN. 
> 4. The datanodes started retrying to connect to the SNN to register at the 
> following code in BPServiceActor#retrieveNamespaceInfo(), which is called 
> under synchronization.
> {code}
>   try {
>     nsInfo = bpNamenode.versionRequest();
>     LOG.debug(this + " received versionRequest response: " + nsInfo);
>     break;
> {code}
> Unfortunately this happened in all datanodes at the same point.
> 5. For the next 7-8 minutes the standby was down, no blocks were reported to 
> the active NN, and writes failed.
> So the culprit is that {{BPOfferService#processCommandFromActor()}} is 
> completely synchronized, which is not required.
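
A rough sketch of the shape of such a fix (simplified; the method names are 
assumptions, not the committed patch): handle the blocking re-registration 
outside the BPOfferService monitor so a slow or dead standby cannot delay 
command processing for the active NN.

{code}
void processCommandFromActor(DatanodeCommand cmd,
    BPServiceActor actor) throws IOException {
  if (cmd instanceof RegisterCommand) {
    // Re-registration performs NameNode RPCs and can block for a long
    // time; run it without holding the BPOfferService lock so IBRs to
    // the active NN are not delayed by an unstable standby.
    actor.reRegister();
    return;
  }
  synchronized (this) {
    // All other commands are still processed under the lock.
    processCommandFromActive(cmd, actor);
  }
}
{code}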



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828955#comment-13828955
 ] 

Hadoop QA commented on HDFS-2882:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615100/HDFS-2882.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
  org.apache.hadoop.hdfs.security.TestDelegationToken
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
  org.apache.hadoop.hdfs.TestDFSRollback
  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  org.apache.hadoop.hdfs.TestDecommission
  org.apache.hadoop.hdfs.server.datanode.TestHSync
  
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication
  org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
  
org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
  
org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
  org.apache.hadoop.hdfs.server.datanode.TestDiskError
  org.apache.hadoop.hdfs.TestFileCreation
  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
  org.apache.hadoop.hdfs.TestDFSStartupVersions
  org.apache.hadoop.net.TestNetworkTopology
  org.apache.hadoop.hdfs.TestFileCorruption
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.TestDFSUpgrade
  org.apache.hadoop.hdfs.TestDatanodeConfig
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  org.apache.hadoop.hdfs.TestReplication
  org.apache.hadoop.hdfs.TestSafeMode
  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.TestBackupNode
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5524//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5524//console

This message is automatically generated.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.<init>(FSDataset.java:335)

[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828900#comment-13828900
 ] 

Hadoop QA commented on HDFS-2882:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615095/HDFS-2882.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestSafeMode
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
  
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication
  
org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
  org.apache.hadoop.hdfs.TestFileCreation
  org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
  org.apache.hadoop.hdfs.TestDFSStartupVersions
  org.apache.hadoop.hdfs.TestDatanodeConfig
  org.apache.hadoop.hdfs.TestDFSRollback
  org.apache.hadoop.hdfs.TestDecommission
  org.apache.hadoop.hdfs.server.datanode.TestDiskError
  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
  org.apache.hadoop.hdfs.TestFileCorruption
  org.apache.hadoop.net.TestNetworkTopology
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  org.apache.hadoop.hdfs.security.TestDelegationToken
  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  
org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
  org.apache.hadoop.hdfs.TestReplication
  org.apache.hadoop.hdfs.TestDFSUpgrade
  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
  org.apache.hadoop.hdfs.server.datanode.TestHSync

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.TestBackupNode
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5523//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5523//console

This message is automatically generated.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
>

[jira] [Updated] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-21 Thread sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sathish updated HDFS-5544:
--

Status: Patch Available  (was: Open)

> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Priority: Minor
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksum type as NULL, it is better to add a unit test 
> case.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5473) Consistent naming of user-visible caching classes and methods

2013-11-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5473:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Consistent naming of user-visible caching classes and methods
> -
>
> Key: HDFS-5473
> URL: https://issues.apache.org/jira/browse/HDFS-5473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5473.002.patch, HDFS-5473.003.patch
>
>
> It's kind of warty that (after HDFS-5326 goes in) DistributedFileSystem has 
> {{*CachePool}} methods that take a {{CachePoolInfo}} and 
> {{*PathBasedCacheDirective}} methods that take a 
> {{PathBasedCacheDirective}}. We should consider renaming {{CachePoolInfo}} to 
> {{CachePool}} for consistency.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-21 Thread sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sathish updated HDFS-5544:
--

Attachment: HDFS-5544.patch

Attaching a test case in the patch. Please review it.
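
Since the test itself is in the attached patch, here is a minimal sketch of what 
such a test might look like (the class, config key and helper names are 
assumptions drawn from the Hadoop test utilities, not the patch itself):

{code}
@Test
public void testChecksumTypeNull() throws Exception {
  Configuration conf = new HdfsConfiguration();
  // "NULL" disables checksum computation and storage entirely.
  conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "NULL");
  MiniDFSCluster cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
  try {
    FileSystem fs = cluster.getFileSystem();
    Path p = new Path("/testChecksumTypeNull");
    // The write/read round trip must still succeed even though no
    // checksums are stored for the file.
    DFSTestUtil.createFile(fs, p, 1024, (short) 1, 0L);
    DFSTestUtil.readFile(fs, p);
  } finally {
    cluster.shutdown();
  }
}
{code}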

> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Priority: Minor
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksum type as NULL, it is better to add a unit test 
> case.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5547) Fix build break after merge from trunk to HDFS-2832

2013-11-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-5547:


Attachment: h5547.01.patch

> Fix build break after merge from trunk to HDFS-2832
> ---
>
> Key: HDFS-5547
> URL: https://issues.apache.org/jira/browse/HDFS-5547
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: h5547.01.patch
>
>
> Auto-merge from trunk to HDFS-2832 left some bad imports in PBHelper.java.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829317#comment-13829317
 ] 

Brandon Li commented on HDFS-5288:
--

[~wheat9]: Please rebase the patch. 
Nit: the following method is not needed anymore:
{noformat}
static void testRequest(XDR request, XDR request2) {
  RegistrationClient registrationClient = new RegistrationClient("localhost",
      Nfs3Constant.SUN_RPCBIND, request);
  registrationClient.run();
}
{noformat}

> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5288.000.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.
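
One plausible shape for such a change, assuming portmap keeps its Netty 3 
pipeline (a sketch only, not the attached patch): let an IdleStateHandler fire 
after a period of inactivity and close the channel.

{code}
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler;
import org.jboss.netty.handler.timeout.IdleStateEvent;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;

class IdleClosingPipelineFactory implements ChannelPipelineFactory {
  private final HashedWheelTimer timer = new HashedWheelTimer();

  @Override
  public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = Channels.pipeline();
    // Fire an IdleStateEvent after 10 seconds with no reads or writes.
    p.addLast("idleState", new IdleStateHandler(timer, 0, 0, 10));
    p.addLast("idleClose", new IdleStateAwareChannelHandler() {
      @Override
      public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        // Close the idle connection to free server resources.
        e.getChannel().close();
      }
    });
    // ... the portmap RPC decode/encode handlers would follow here.
    return p;
  }
}
{code}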



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5547) Fix build break after merge from trunk to HDFS-2832

2013-11-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5547:
---

 Summary: Fix build break after merge from trunk to HDFS-2832
 Key: HDFS-5547
 URL: https://issues.apache.org/jira/browse/HDFS-5547
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Auto-merge from trunk to HDFS-2832 left some bad imports in PBHelper.java.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829276#comment-13829276
 ] 

Hadoop QA commented on HDFS-5543:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615153/HDFS-5543.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5528//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5528//console

This message is automatically generated.

> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.
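
A plausible shape for the fix (a sketch only; {{getStatsForDirective}} is a 
hypothetical helper): poll until the asynchronously updated count converges 
instead of asserting on it immediately.

{code}
// Supplier is com.google.common.base.Supplier;
// GenericTestUtils is org.apache.hadoop.test.GenericTestUtils.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    // Hypothetical helper that reads the directive's stats from the
    // NameNode; retry until the scanner has caught up.
    return getStatsForDirective(directiveId).getBytesCached() == expectedBytes;
  }
}, 500, 60000);
{code}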



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829303#comment-13829303
 ] 

Hudson commented on HDFS-5543:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4781 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4781/])
HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544310)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5543:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829267#comment-13829267
 ] 

Hadoop QA commented on HDFS-2882:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615147/HDFS-2882.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5527//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5527//console

This message is automatically generated.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.<init>(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829261#comment-13829261
 ] 

Colin Patrick McCabe commented on HDFS-5546:


The best solution is probably to catch the FNF in #3, and simply not put that 
directory in the listing, since it doesn't exist by that name any more. I guess 
you could argue that we should re-list the parent directory in this case to 
make sure new stuff doesn't exist in it (like a renamed version of D), but that 
seems like it would open a difficult can of worms since we'd have arbitrary 
levels of backtracking. Also, we can't really know whether any of the work we 
had already done is still valid, since the names of directories could all have 
changed.
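
Concretely, the change could look something like this in the shell's 
recursive-listing path (a sketch with simplified names, not a patch):

{code}
PathData[] items;
try {
  // Step 3 from the description: list the directory we stat'ed earlier.
  items = item.getDirectoryContents();
} catch (FileNotFoundException e) {
  // The directory was moved or deleted between the stat and the
  // listing; it no longer exists under this name, so omit it from the
  // output instead of aborting the whole ls -R.
  return;
}
for (PathData child : items) {
  processPath(child);
}
{code}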

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5546:
--

 Summary: race condition crashes "hadoop ls -R" when directories 
are moved/removed
 Key: HDFS-5546
 URL: https://issues.apache.org/jira/browse/HDFS-5546
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Priority: Minor


This seems to be a rare race condition where we have a sequence of events like 
this:
1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
2. someone deletes or moves directory D
3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
calls DFS#listStatus(D). This throws FileNotFoundException.
4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4506) In branch-1, HDFS short circuit fails non-transparently when user does not have unix permissions

2013-11-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4506:
---

Summary: In branch-1, HDFS short circuit fails non-transparently when user 
does not have unix permissions  (was: HDFS short circuit fails 
non-transparently when user does not have unix permissions)

> In branch-1, HDFS short circuit fails non-transparently when user does not 
> have unix permissions
> 
>
> Key: HDFS-4506
> URL: https://issues.apache.org/jira/browse/HDFS-4506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 1.1.1
>Reporter: Enis Soztutar
>
> We found a case where, if the short-circuit user name is configured 
> correctly but the user does not have enough permissions in unix, DFS 
> operations fail with an IOException rather than silently falling back to 
> reading through the datanode. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderConstruction Feature

2013-11-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5285:


Attachment: HDFS-5285.003.patch

Thanks for the comments, Vinay! I have updated the patch to address your comments.

bq. Better to add an error message to these checks

I have not changed this part in the new patch yet. Currently I just temporarily 
use Preconditions checks to verify the under-construction file, and I guess some 
of them can be removed in the future. We can revisit this later.

> Flatten INodeFile hierarchy: Add UnderConstruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java classes do not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
> - The number of classes is exponential in the number of features.  Currently, 
> there are only two features, UnderConstruction and WithSnapshot.  The number 
> of classes is 2^2 = 4.  It is hard to add one more feature since the number 
> of classes will become 2^3 = 8.
> As a first step, we implement an Under-Construction feature to replace 
> INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot in this 
> jira.
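
To make the composition idea concrete, a minimal sketch of the feature-based 
alternative (class and method names here are assumptions):

{code}
import java.util.ArrayList;
import java.util.List;

class INodeFile extends INode {
  // Features are composed at runtime instead of being encoded in the
  // class hierarchy, so adding a feature does not double the number of
  // INodeFile subclasses.
  private final List<Feature> features = new ArrayList<Feature>();

  FileUnderConstructionFeature getUnderConstructionFeature() {
    for (Feature f : features) {
      if (f instanceof FileUnderConstructionFeature) {
        return (FileUnderConstructionFeature) f;
      }
    }
    return null;
  }

  boolean isUnderConstruction() {
    return getUnderConstructionFeature() != null;
  }

  void toUnderConstruction(String clientName, String clientMachine) {
    // Attach the feature instead of converting to a different subclass.
    features.add(new FileUnderConstructionFeature(clientName, clientMachine));
  }
}
{code}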



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829224#comment-13829224
 ] 

Colin Patrick McCabe commented on HDFS-2882:


If you look at the original description of this JIRA, by Todd, it looks like 
this:

{code}
I started a DN on a machine that was completely out of space on one of its 
drives. I saw the following:
2012-02-02 09:56:50,499 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: 
Initialization failed for block pool Block pool 
BP-448349972-172.29.5.192-1323816762969 (storage id 
DS-507718931-172.29.5.194-11072-12978
42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
java.io.IOException: Mkdirs failed to create 
/data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
at 
org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.<init>(FSDataset.java:335)
but the DN continued to run, spewing NPEs when it tried to do block reports, 
etc. This was on the HDFS-1623 branch but may affect trunk as well.
{code}

His concern was that the block pool failed to initialize, but the DN 
continued to start up anyway, leading to a system that was not functional.  I 
tried to reproduce this on trunk (as opposed to the HDFS-1623 branch that Todd 
was using).  I was unable to reproduce this behavior: every time I got the 
block pool to fail to initialize, the DN also did not start up.  My theory is 
that this was either a bug that affected only the HDFS-1623 branch, or a bug 
that is related to a race condition that is very hard to reproduce.  You can 
see how I tried to reproduce it here: 
https://issues.apache.org/jira/browse/HDFS-2882?focusedCommentId=13555717&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555717

Vinay, I don't have a clear idea of what your patch is trying to do.  I can see 
that it adds a retry state machine.  But as you yourself commented, 
BPServiceActor#retrieveNamespaceInfo() already loops until the NameNode 
responds.  So why do we need another retry mechanism?

Also, when I asked you whether you had reproduced Todd's problem, I didn't mean 
in a unit test.  I meant have you started up the DN and had it fail to 
initialize a block pool, but continue to start?

I also wonder if any of this is addressed in the HDFS-2832 branch, which 
changes the way DataNode storage ID is handled, among other things.

> DN continues to start up, even if block pool fails to initialize
> 
>
> Key: HDFS-2882
> URL: https://issues.apache.org/jira/browse/HDFS-2882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Vinay
> Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
> HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt
>
>
> I started a DN on a machine that was completely out of space on one of its 
> drives. I saw the following:
> 2012-02-02 09:56:50,499 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
> DS-507718931-172.29.5.194-11072-12978
> 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
> java.io.IOException: Mkdirs failed to create 
> /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
> at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.<init>(FSDataset.java:335)
> but the DN continued to run, spewing NPEs when it tried to do block reports, 
> etc. This was on the HDFS-1623 branch but may affect trunk as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5545:
-

Attachment: HDFS-5545.000.patch

> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5545.000.patch
>
>
> Currently HttpServer listens on the HTTP port and provides a method to allow 
> users to add SSL listeners after the server starts. This complicates the 
> logic if the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen to. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5545:
-

Description: 
Currently HttpServer listens on the HTTP port and provides a method to allow 
users to add SSL listeners after the server starts. This complicates the 
logic if the client needs to set up HTTP / HTTPS servers.

This jira proposes to replace these two methods with the concept of listener 
endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that the 
HttpServer should listen to. This concept simplifies the task of managing the 
HTTP server from HDFS / YARN.

  was:
Currently HttpServer listens on the HTTP port and provides a method to allow 
users to add SSL listeners after the server starts. This complicates the 
logic if the client needs to set up HTTP / HTTPS servers.

This jira proposes to replace these two methods with the concept of listener 
endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that the 
HttpServer should listen to.


> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5545.000.patch
>
>
> Currently HttpServer listens on the HTTP port and provides a method to allow 
> users to add SSL listeners after the server starts. This complicates the 
> logic if the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen to. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5545:
-

Status: Patch Available  (was: Open)

> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5545.000.patch
>
>
> Currently HttpServer listens on the HTTP port and provides a method to allow 
> users to add SSL listeners after the server starts. This complicates the 
> logic if the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen to. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-21 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5545:


 Summary: Allow specifying endpoints for listeners in HttpServer
 Key: HDFS-5545
 URL: https://issues.apache.org/jira/browse/HDFS-5545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


Currently HttpServer listens on the HTTP port and provides a method to allow 
users to add SSL listeners after the server starts. This complicates the 
logic if the client needs to set up HTTP / HTTPS servers.

This jira proposes to replace these two methods with the concept of listener 
endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that the 
HttpServer should listen to.
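
For illustration, configuring both schemes might then look like this (the API 
names are assumptions based on the proposal, not the committed interface):

{code}
HttpServer server = new HttpServer.Builder()
    .setName("namenode")
    .addEndpoint(URI.create("http://0.0.0.0:50070"))   // plain HTTP
    .addEndpoint(URI.create("https://0.0.0.0:50470"))  // HTTPS
    .build();
server.start();
{code}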



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829222#comment-13829222
 ] 

Andrew Wang commented on HDFS-5543:
---

+1 pending jenkins

> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

