[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841360#comment-13841360
 ] 

Yongjun Zhang commented on HDFS-4983:
-------------------------------------

Thanks a lot Andrew, I just uploaded a new version with the slight change you 
pointed out.


> Numeric usernames do not work with WebHDFS FS
> ---------------------------------------------
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch, HDFS-4983.005.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" fails, since the pattern requires 
> a leading letter or underscore (tried on an insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}
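
For context, a minimal self-contained sketch (plain Java; the relaxed pattern 
here is only an illustration, not necessarily what the patch adopts) of why the 
default DOMAIN pattern rejects a purely numeric username:

{code}
import java.util.regex.Pattern;

public class UserParamDomainCheck {
    // Default WebHDFS username pattern: the first character must be a
    // letter or an underscore, so "123" cannot match.
    private static final Pattern DEFAULT_DOMAIN =
        Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");

    // Hypothetical relaxed pattern that also allows a leading digit.
    private static final Pattern RELAXED_DOMAIN =
        Pattern.compile("^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");

    public static void main(String[] args) {
        String user = "123";
        System.out.println(DEFAULT_DOMAIN.matcher(user).matches()); // false -> "Invalid value" error
        System.out.println(RELAXED_DOMAIN.matcher(user).matches()); // true
    }
}
{code}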



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-4983:
--------------------------------

Attachment: HDFS-4983.005.patch

> Numeric usernames do not work with WebHDFS FS
> ---------------------------------------------
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch, HDFS-4983.005.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" fails, since the pattern requires 
> a leading letter or underscore (tried on an insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4114) Deprecate the BackupNode and CheckpointNode in 2.0

2013-12-06 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841326#comment-13841326
 ] 

Konstantin Shvachko commented on HDFS-4114:
-------------------------------------------

[~sureshms], I missed your comment of Nov 8 while travelling, sorry, still 
recovering.
As it stands today, the BackupNode is the only extension of the NameNode in 
the current code base.
It still provides important bindings to downstream projects that I believe we 
both care about, by its mere existence and its test coverage.
You are right that it's been a while, and I still owe proper ones; that is on 
my todo list.
I understand the burden of supporting it, and in the meantime I want to 
reiterate my readiness to promptly address any related issues.
Let me know if I missed any, or if I can help with them.

I glanced through your patch.
I saw some things that probably count as collateral damage, like the 
documentation about "Import Checkpoint", which is not related to the BN.
But thanks, it nicely scopes out for me the essence of the bindings required.
If you wish, we can assign this issue to me so that I can take care of it in 
the future.

> Deprecate the BackupNode and CheckpointNode in 2.0
> --------------------------------------------------
>
> Key: HDFS-4114
> URL: https://issues.apache.org/jira/browse/HDFS-4114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>Assignee: Suresh Srinivas
> Attachments: HDFS-4114.patch
>
>
> Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
> BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841264#comment-13841264
 ] 

Hudson commented on HDFS-5633:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1604 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1604/])
HDFS-5633. Improve OfflineImageViewer to use less memory. Contributed by Jing 
Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548359)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java


> Improve OfflineImageViewer to use less memory
> ---------------------------------------------
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-5633.000.patch
>
>
> Currently, after we rename a file/dir that is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from the FSImage, we use a temporary 
> map to record whether we have visited the inode before.
> However, in the OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.
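
For readers following along, a simplified sketch of the proposed idea (the 
types and method names here are hypothetical; the real change lives in 
ImageLoaderCurrent): record an inode in the temporary map only when it sits 
under an INodeReference.

{code}
import java.util.HashMap;
import java.util.Map;

// Simplified sketch: cache an inode only when it is reachable through an
// INodeReference, since only such inodes can be visited more than once.
class ReferenceAwareInodeCache {
    private final Map<Long, byte[]> referenceMap = new HashMap<>();

    // Returns true if the inode was already processed via another reference.
    boolean recordIfReferenced(long inodeId, byte[] inodeData, boolean underReference) {
        if (!underReference) {
            return false; // not referenced: process normally, do not cache
        }
        if (referenceMap.containsKey(inodeId)) {
            return true;  // already visited through a different reference
        }
        referenceMap.put(inodeId, inodeData);
        return false;
    }
}
{code}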



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841267#comment-13841267
 ] 

Hudson commented on HDFS-5514:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1604 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1604/])
Neglected to add new file in HDFS-5514 (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548167)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
HDFS-5514. FSNamesystem's fsLock should allow custom implementation (daryn) 
(daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548161)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java


> FSNamesystem's fsLock should allow custom implementation
> --------------------------------------------------------
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.
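
A minimal sketch of the approach, assuming the wrapper simply delegates to an 
encapsulated ReentrantReadWriteLock (the actual FSNamesystemLock in the patch 
may expose more):

{code}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: keep the readLock()/writeLock() surface FSNamesystem already uses,
// but delegate to an encapsulated lock so subclasses can substitute
// finer-grained implementations later.
class FSNamesystemLockSketch {
    private final ReentrantReadWriteLock coarseLock;

    FSNamesystemLockSketch(boolean fair) {
        this.coarseLock = new ReentrantReadWriteLock(fair);
    }

    Lock readLock()  { return coarseLock.readLock(); }
    Lock writeLock() { return coarseLock.writeLock(); }
}
{code}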



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841261#comment-13841261
 ] 

Hudson commented on HDFS-5590:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1604 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1604/])
HDFS-5590. Block ID and generation stamp may be reused when persistBlocks is 
set to false. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> Block ID and generation stamp may be reused when persistBlocks is set to false
> ------------------------------------------------------------------------------
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with a non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # a client creates file1, requests a block from the NN, and gets blk_id1_gs1
> # the client writes blk_id1_gs1 to the DN
> # the NN is restarted, and because persistBlocks is false, blk_id1_gs1 may 
> not be persisted on disk
> # another client creates file2 and the NN allocates a new block with the same 
> block id blk_id1_gs1, since block IDs and generation stamps are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of blk_id1_gs1 (same id, 
> same gs) in the system. This will cause data loss.
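
To see why the sequential allocation bites here, a small self-contained 
illustration (a hypothetical class, not actual NameNode code) of an ID counter 
whose last value is not persisted across a restart:

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustration of the hazard: a sequential generator rebuilt from a stale
// persisted value will hand out the same ID again after a restart.
class SequentialBlockIdGenerator {
    private final AtomicLong lastId;

    SequentialBlockIdGenerator(long persistedLastId) {
        this.lastId = new AtomicLong(persistedLastId);
    }

    long nextBlockId() {
        return lastId.incrementAndGet();
    }

    public static void main(String[] args) {
        SequentialBlockIdGenerator beforeRestart = new SequentialBlockIdGenerator(1000);
        long blkForFile1 = beforeRestart.nextBlockId(); // 1001, written to a DN

        // NN restarts; with persistBlocks=false the allocation above was
        // never saved, so the counter is rebuilt from the stale value 1000.
        SequentialBlockIdGenerator afterRestart = new SequentialBlockIdGenerator(1000);
        long blkForFile2 = afterRestart.nextBlockId(); // 1001 again -> collision

        System.out.println(blkForFile1 == blkForFile2); // true
    }
}
{code}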



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841260#comment-13841260
 ] 

Hudson commented on HDFS-5630:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1604 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1604/])
HDFS-5630. Hook up cache directive and pool usage statistics. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548309)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Hook up cache directive and pool usage statistics
> -------------------------------------------------
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.
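
A rough sketch of what hooking up the stubs could look like: per-pool counters 
that a periodic cache scan accumulates (the names are illustrative; the patch's 
actual CachePoolStats may differ):

{code}
// Illustrative per-pool usage counters; a scan over the pool's cache
// directives would call addDirective() once for each directive.
class PoolUsageCounters {
    private long bytesNeeded;
    private long bytesCached;
    private long filesNeeded;
    private long filesCached;

    void addDirective(long neededBytes, long cachedBytes, boolean fullyCached) {
        bytesNeeded += neededBytes;
        bytesCached += cachedBytes;
        filesNeeded++;
        if (fullyCached) {
            filesCached++;
        }
    }

    long getBytesNeeded() { return bytesNeeded; }
    long getBytesCached() { return bytesCached; }
    long getFilesNeeded() { return filesNeeded; }
    long getFilesCached() { return filesCached; }
}
{code}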



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841247#comment-13841247
 ] 

Hudson commented on HDFS-5633:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1630 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1630/])
HDFS-5633. Improve OfflineImageViewer to use less memory. Contributed by Jing 
Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548359)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java


> Improve OfflineImageViewer to use less memory
> ---------------------------------------------
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-5633.000.patch
>
>
> Currently, after we rename a file/dir that is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from the FSImage, we use a temporary 
> map to record whether we have visited the inode before.
> However, in the OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841243#comment-13841243
 ] 

Hudson commented on HDFS-5630:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1630 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1630/])
HDFS-5630. Hook up cache directive and pool usage statistics. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548309)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Hook up cache directive and pool usage statistics
> -------------------------------------------------
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841244#comment-13841244
 ] 

Hudson commented on HDFS-5590:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1630 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1630/])
HDFS-5590. Block ID and generation stamp may be reused when persistBlocks is 
set to false. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> Block ID and generation stamp may be reused when persistBlocks is set to false
> ------------------------------------------------------------------------------
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with a non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # a client creates file1, requests a block from the NN, and gets blk_id1_gs1
> # the client writes blk_id1_gs1 to the DN
> # the NN is restarted, and because persistBlocks is false, blk_id1_gs1 may 
> not be persisted on disk
> # another client creates file2 and the NN allocates a new block with the same 
> block id blk_id1_gs1, since block IDs and generation stamps are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of blk_id1_gs1 (same id, 
> same gs) in the system. This will cause data loss.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841250#comment-13841250
 ] 

Hudson commented on HDFS-5514:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1630 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1630/])
Neglected to add new file in HDFS-5514 (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548167)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
HDFS-5514. FSNamesystem's fsLock should allow custom implementation (daryn) 
(daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548161)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java


> FSNamesystem's fsLock should allow custom implementation
> --------------------------------------------------------
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841176#comment-13841176
 ] 

Hudson commented on HDFS-5630:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #413 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/413/])
HDFS-5630. Hook up cache directive and pool usage statistics. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548309)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Hook up cache directive and pool usage statistics
> -------------------------------------------------
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841177#comment-13841177
 ] 

Hudson commented on HDFS-5590:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #413 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/413/])
HDFS-5590. Block ID and generation stamp may be reused when persistBlocks is 
set to false. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> Block ID and generation stamp may be reused when persistBlocks is set to false
> ------------------------------------------------------------------------------
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with a non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # a client creates file1, requests a block from the NN, and gets blk_id1_gs1
> # the client writes blk_id1_gs1 to the DN
> # the NN is restarted, and because persistBlocks is false, blk_id1_gs1 may 
> not be persisted on disk
> # another client creates file2 and the NN allocates a new block with the same 
> block id blk_id1_gs1, since block IDs and generation stamps are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of blk_id1_gs1 (same id, 
> same gs) in the system. This will cause data loss.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841180#comment-13841180
 ] 

Hudson commented on HDFS-5633:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #413 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/413/])
HDFS-5633. Improve OfflineImageViewer to use less memory. Contributed by Jing 
Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548359)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java


> Improve OfflineImageViewer to use less memory
> ---------------------------------------------
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-5633.000.patch
>
>
> Currently, after we rename a file/dir that is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from the FSImage, we use a temporary 
> map to record whether we have visited the inode before.
> However, in the OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841183#comment-13841183
 ] 

Hudson commented on HDFS-5514:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #413 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/413/])
Neglected to add new file in HDFS-5514 (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548167)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
HDFS-5514. FSNamesystem's fsLock should allow custom implementation (daryn) 
(daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548161)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java


> FSNamesystem's fsLock should allow custom implementation
> --------------------------------------------------------
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841123#comment-13841123
 ] 

Hadoop QA commented on HDFS-5312:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617346/HDFS-5312.008.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5657//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5657//console

This message is automatically generated.

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> ----------------------------------------------------------------------------
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch, HDFS-5312.007.patch, 
> HDFS-5312.008.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve HTTPS only, using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function that chooses whether http or https should 
> be used to connect to the remote server, based on dfs.http.policy.
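
A minimal sketch of the scheme-selection idea (the enum and helper names below 
are illustrative, not the patch's actual API):

{code}
import java.net.URI;
import java.net.URISyntaxException;

// Sketch: return a full URI whose scheme follows the configured policy,
// rather than a bare host:port authority string.
class InfoServerUriSketch {
    enum HttpPolicy { HTTP_ONLY, HTTP_AND_HTTPS, HTTPS_ONLY }

    static URI getInfoServer(String authority, HttpPolicy policy)
            throws URISyntaxException {
        String scheme = (policy == HttpPolicy.HTTPS_ONLY) ? "https" : "http";
        return new URI(scheme + "://" + authority);
    }

    public static void main(String[] args) throws URISyntaxException {
        // prints https://nn.example.com:50470
        System.out.println(getInfoServer("nn.example.com:50470", HttpPolicy.HTTPS_ONLY));
    }
}
{code}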



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5637) try to refetch the token when a local read hits InvalidToken

2013-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841110#comment-13841110
 ] 

Hadoop QA commented on HDFS-5637:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617342/HDFS-5637.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5656//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5656//console

This message is automatically generated.

> try to refetch the token when a local read hits InvalidToken
> -------------------------------------------------------------
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.0.5-alpha, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like the one below on region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdf
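
The retry pattern the title proposes, as a hedged sketch: on an expired block 
token during a local (short-circuit) read, refresh the token once and retry 
instead of immediately adding the DataNode to deadNodes. Here readLocal() and 
refetchBlockToken() are hypothetical placeholders, not DFSClient APIs:

{code}
import org.apache.hadoop.security.token.SecretManager.InvalidToken;

// Sketch: if a local read fails with an expired block token, fetch a fresh
// token once and retry before giving up on the node.
abstract class LocalReadWithTokenRetry {
    abstract byte[] readLocal() throws InvalidToken;   // hypothetical
    abstract void refetchBlockToken();                 // hypothetical

    byte[] readWithRetry() throws InvalidToken {
        try {
            return readLocal();
        } catch (InvalidToken expired) {
            refetchBlockToken();  // obtain a fresh block token from the NN
            return readLocal();   // single retry with the new token
        }
    }
}
{code}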
