[jira] [Assigned] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reassigned HDFS-8155:
---

Assignee: Kai Zheng

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497688#comment-14497688
 ] 

Kai Zheng commented on HDFS-8155:
-

Thanks a lot for filing this JIRA. I have plans to work on OAuth2 and to 
implement the WebHDFS case, so I am taking this JIRA. Initial patches and a 
design draft will be uploaded to HADOOP-11766 this week or early next week; 
please review and comment then. Thanks.

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>
> WebHDFS should be able to accept OAuth2 credentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497700#comment-14497700
 ] 

Kai Zheng commented on HDFS-8155:
-

[~jghoman],

I noticed this issue is linked to HDFS-8154 as a dependency. Could you share 
your rationale? I thought OAuth2 support for WebHDFS could be done separately, 
as we would for the Hadoop Web UI, or do you mean something more than that? Thanks.

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7994) Detect if reserved EC Block ID is already used

2015-04-16 Thread Hui Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Zheng updated HDFS-7994:

Attachment: HDFS-7994_002.patch

Thanks [~szetszwo] for reviewing!
Since we are using BlockManager to manage hasNonEcBlockUsingStripID, I added a 
method "addBlockCollectionWithCheck" to BlockManager that does the additional 
check, updates hasNonEcBlockUsingStripID, and reuses "addBlockCollection".
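
A rough sketch of the idea described above (hypothetical names and signatures, 
illustrative only; this is not the code from HDFS-7994_002.patch):

{code:java}
// Illustrative sketch: record whether any legacy (non-EC) block already
// occupies the block ID range reserved for striped (EC) block groups.
class StripedBlockIdBookkeeping {
  // Assumption for this sketch: striped IDs are flagged by the high bit.
  private static final long RESERVED_STRIPED_MASK = 1L << 63;

  private boolean hasNonEcBlockUsingStripID = false;

  boolean isReservedStripedId(long blockId) {
    return (blockId & RESERVED_STRIPED_MASK) != 0;
  }

  // Wrapper around the existing addBlockCollection() that also performs
  // the extra check while blocks are loaded at startup.
  void addBlockCollectionWithCheck(long blockId, boolean isStriped) {
    if (!isStriped && isReservedStripedId(blockId)) {
      hasNonEcBlockUsingStripID = true;
    }
    addBlockCollection(blockId);
  }

  void addBlockCollection(long blockId) {
    // existing bookkeeping would go here
  }

  boolean hasNonEcBlockUsingStripID() {
    return hasNonEcBlockUsingStripID;
  }
}
{code}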

> Detect if reserved EC Block ID is already used
> ---
>
> Key: HDFS-7994
> URL: https://issues.apache.org/jira/browse/HDFS-7994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Hui Zheng
> Attachments: HDFS-7994_001.patch, HDFS-7994_002.patch
>
>
> Since random block IDs were supported by some early versions of HDFS, the 
> block IDs reserved for EC blocks could already be in use by existing blocks 
> in a cluster. During NameNode startup, it detects whether any reserved EC 
> block IDs are used by non-EC blocks. If that is the case, the NameNode will do 
> an additional blocksMap lookup when there is a miss in a blockGroupsMap lookup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7993) Incorrect descriptions in fsck when nodes are decommissioned

2015-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497718#comment-14497718
 ] 

Hadoop QA commented on HDFS-7993:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12725779/HDFS-7993.4.patch
  against trunk revision 1b89a3e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10286//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10286//console

This message is automatically generated.

> Incorrect descriptions in fsck when nodes are decommissioned
> 
>
> Key: HDFS-7993
> URL: https://issues.apache.org/jira/browse/HDFS-7993
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ming Ma
>Assignee: J.Andreina
> Attachments: HDFS-7993.1.patch, HDFS-7993.2.patch, HDFS-7993.3.patch, 
> HDFS-7993.4.patch
>
>
> When you run fsck with "-files" or "-racks", you will get something like 
> below if one of the replicas is decommissioned.
> {noformat}
> blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
> {noformat}
> That is because in NamenodeFsck, the repl count comes from the live replica 
> count, while the actual nodes come from the LocatedBlock, which includes 
> decommissioned nodes.
> Another issue in NamenodeFsck is that BlockPlacementPolicy's verifyBlockPlacement 
> verifies a LocatedBlock that includes decommissioned nodes. It seems 
> better to exclude the decommissioned nodes from the verification, just like 
> fsck excludes decommissioned nodes when it checks for under-replicated blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8136) Client gets and uses EC schema when reads and writes a stripping file

2015-04-16 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8136 started by Kai Sasaki.

> Client gets and uses EC schema when reads and writes a stripping file
> -
>
> Key: HDFS-8136
> URL: https://issues.apache.org/jira/browse/HDFS-8136
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Zheng
>Assignee: Kai Sasaki
> Attachments: HDFS-8136.1.patch
>
>
> As discussed with [~umamaheswararao] and [~vinayrpet], when the client reads 
> or writes a striped file, it can invoke a separate call to the NameNode to 
> request the EC schema associated with the EC zone the file is in. The 
> schema can then be used to guide the reading and writing. Currently the client 
> uses hard-coded values.
> Optionally, as an optimization, the client may cache schema info 
> per file, per zone, or per schema name. We could add the schema name to 
> {{HdfsFileStatus}} for that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8136) Client gets and uses EC schema when reads and writes a stripping file

2015-04-16 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8136:
-
Attachment: HDFS-8136.1.patch

> Client gets and uses EC schema when reads and writes a stripping file
> -
>
> Key: HDFS-8136
> URL: https://issues.apache.org/jira/browse/HDFS-8136
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Zheng
>Assignee: Kai Sasaki
> Attachments: HDFS-8136.1.patch
>
>
> As discussed with [~umamaheswararao] and [~vinayrpet], when the client reads 
> or writes a striped file, it can invoke a separate call to the NameNode to 
> request the EC schema associated with the EC zone the file is in. The 
> schema can then be used to guide the reading and writing. Currently the client 
> uses hard-coded values.
> Optionally, as an optimization, the client may cache schema info 
> per file, per zone, or per schema name. We could add the schema name to 
> {{HdfsFileStatus}} for that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8136) Client gets and uses EC schema when reads and writes a stripping file

2015-04-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497785#comment-14497785
 ] 

Kai Zheng commented on HDFS-8136:
-

The patch looks pretty good to me. Did you run the related tests?

> Client gets and uses EC schema when reads and writes a stripping file
> -
>
> Key: HDFS-8136
> URL: https://issues.apache.org/jira/browse/HDFS-8136
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Zheng
>Assignee: Kai Sasaki
> Attachments: HDFS-8136.1.patch
>
>
> As discussed with [~umamaheswararao] and [~vinayrpet], when the client reads 
> or writes a striped file, it can invoke a separate call to the NameNode to 
> request the EC schema associated with the EC zone the file is in. The 
> schema can then be used to guide the reading and writing. Currently the client 
> uses hard-coded values.
> Optionally, as an optimization, the client may cache schema info 
> per file, per zone, or per schema name. We could add the schema name to 
> {{HdfsFileStatus}} for that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-04-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497829#comment-14497829
 ] 

Kai Zheng commented on HDFS-7949:
-

For the assertion, it would be good not to use a hard-coded value, or at least to 
give it a meaningful name. For example, if you write {{TEST_WRITE_BYTES}} bytes, 
you should get that same number back after reading, and can compare against the constant.
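
An illustrative sketch of that suggestion (hypothetical test code, not the 
actual HDFS-7949 patch):

{code:java}
import static org.junit.Assert.assertEquals;

public class FileSizeAssertionExample {
  // One named constant instead of a magic number scattered through the test.
  private static final long TEST_WRITE_BYTES = 1024;

  @org.junit.Test
  public void writtenSizeMatchesReportedSize() {
    long bytesWritten = TEST_WRITE_BYTES;               // stand-in for the write path
    long reportedSize = computeFileSize(bytesWritten);  // stand-in for the size calculation under test
    assertEquals(TEST_WRITE_BYTES, reportedSize);
  }

  private long computeFileSize(long bytesWritten) {
    return bytesWritten; // trivial stand-in for the code being tested
  }
}
{code}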

> WebImageViewer need support file size calculation with striped blocks
> -
>
> Key: HDFS-7949
> URL: https://issues.apache.org/jira/browse/HDFS-7949
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Hui Zheng
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch, 
> HDFS-7949-003.patch, HDFS-7949-004.patch
>
>
> The file size calculation should be changed when the blocks of the file are 
> striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7891) A block placement policy with best rack failure tolerance

2015-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497870#comment-14497870
 ] 

Hadoop QA commented on HDFS-7891:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12725788/HDFS-7891.006.patch
  against trunk revision 1b89a3e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10287//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10287//console

This message is automatically generated.

> A block placement policy with best rack failure tolerance
> -
>
> Key: HDFS-7891
> URL: https://issues.apache.org/jira/browse/HDFS-7891
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-7891.005.dup.patch, HDFS-7891.005.patch, 
> HDFS-7891.006.patch
>
>
> A block placement policy that tries its best to spread replicas across as many racks as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8136) Client gets and uses EC schema when reads and writes a stripping file

2015-04-16 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497909#comment-14497909
 ] 

Kai Sasaki commented on HDFS-8136:
--

There is already {{TestDFSStripedOutputStream}}, and it passed. But there is 
no test class for the reader, {{DFSStripedInputStream}}. Is it better to 
implement one now in this JIRA, or can we test it after resolving other 
dependencies such as the DN side?

> Client gets and uses EC schema when reads and writes a stripping file
> -
>
> Key: HDFS-8136
> URL: https://issues.apache.org/jira/browse/HDFS-8136
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Zheng
>Assignee: Kai Sasaki
> Attachments: HDFS-8136.1.patch
>
>
> As discussed with [~umamaheswararao] and [~vinayrpet], when the client reads 
> or writes a striped file, it can invoke a separate call to the NameNode to 
> request the EC schema associated with the EC zone the file is in. The 
> schema can then be used to guide the reading and writing. Currently the client 
> uses hard-coded values.
> Optionally, as an optimization, the client may cache schema info 
> per file, per zone, or per schema name. We could add the schema name to 
> {{HdfsFileStatus}} for that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497931#comment-14497931
 ] 

Hudson commented on HDFS-8149:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2097 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2097/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Needs to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497929#comment-14497929
 ] 

Hudson commented on HDFS-8151:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2097 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2097/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use the snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters, and thus the diff 
> report computation on the target cluster fails
> # there are modifications happening on the target cluster, and thus 
> {{checkNoChange}} returns false
> In other cases, such as when the source or target FS is not a DistributedFileSystem, 
> we should throw exceptions.
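
For context, a diff-based distcp invocation looks roughly like the following 
(snapshot names and paths are placeholders):

{noformat}
hadoop distcp -update -diff s1 s2 hdfs://source-cluster/dir hdfs://target-cluster/dir
{noformat}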



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497935#comment-14497935
 ] 

Hudson commented on HDFS-7934:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2097 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2097/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During rolling upgrade rollback, standby NameNode startup fails while loading 
> edits when there is no local copy of the edits created after the upgrade (which 
> have already been removed by the active NameNode from the journal manager and 
> from the active's local storage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497930#comment-14497930
 ] 

Hudson commented on HDFS-8144:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2097 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2097/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497938#comment-14497938
 ] 

Hudson commented on HDFS-8151:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/156/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use the snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters, and thus the diff 
> report computation on the target cluster fails
> # there are modifications happening on the target cluster, and thus 
> {{checkNoChange}} returns false
> In other cases, such as when the source or target FS is not a DistributedFileSystem, 
> we should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497939#comment-14497939
 ] 

Hudson commented on HDFS-8144:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/156/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497940#comment-14497940
 ] 

Hudson commented on HDFS-8149:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/156/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Needs to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497944#comment-14497944
 ] 

Hudson commented on HDFS-7934:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/156/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During rolling upgrade rollback, standby NameNode startup fails while loading 
> edits when there is no local copy of the edits created after the upgrade (which 
> have already been removed by the active NameNode from the journal manager and 
> from the active's local storage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497947#comment-14497947
 ] 

Hudson commented on HDFS-8144:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/165/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497946#comment-14497946
 ] 

Hudson commented on HDFS-8151:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/165/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use the snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters, and thus the diff 
> report computation on the target cluster fails
> # there are modifications happening on the target cluster, and thus 
> {{checkNoChange}} returns false
> In other cases, such as when the source or target FS is not a DistributedFileSystem, 
> we should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497952#comment-14497952
 ] 

Hudson commented on HDFS-7934:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/165/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During rolling upgrade rollback, standby NameNode startup fails while loading 
> edits when there is no local copy of the edits created after the upgrade (which 
> have already been removed by the active NameNode from the journal manager and 
> from the active's local storage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497948#comment-14497948
 ] 

Hudson commented on HDFS-8149:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/165/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Needs to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497955#comment-14497955
 ] 

Hudson commented on HDFS-8151:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #899 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/899/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use the snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters, and thus the diff 
> report computation on the target cluster fails
> # there are modifications happening on the target cluster, and thus 
> {{checkNoChange}} returns false
> In other cases, such as when the source or target FS is not a DistributedFileSystem, 
> we should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497957#comment-14497957
 ] 

Hudson commented on HDFS-8149:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #899 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/899/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Needs to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497961#comment-14497961
 ] 

Hudson commented on HDFS-7934:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #899 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/899/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During rolling upgrade rollback, standby NameNode startup fails while loading 
> edits when there is no local copy of the edits created after the upgrade (which 
> have already been removed by the active NameNode from the journal manager and 
> from the active's local storage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497956#comment-14497956
 ] 

Hudson commented on HDFS-8144:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #899 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/899/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8159) [HDFS-Quota] Verification is not done while setting dir namequota and size

2015-04-16 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8159:
--

 Summary: [HDFS-Quota] Verification is not done while setting dir 
namequota and size
 Key: HDFS-8159
 URL: https://issues.apache.org/jira/browse/HDFS-8159
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.6.0
 Environment: Suse 11 SP3
Reporter: Jagadesh Kiran N
Priority: Minor


Name quota and space quota are not verified when setting a new value on a 
directory which already has subdirectories or contents.
Below are the steps to reproduce the cases:

*+Case-1+*

Step-1) Create a new folder 
hdfs dfs -mkdir /test
Step-2) Create sub folders
hdfs dfs -mkdir /test/one
hdfs dfs -mkdir /test/two
hdfs dfs -mkdir /test/three
Step-3) Set the name quota to two 
hdfs dfsadmin  -setQuota 2 /test
Step-4) The quota will be set without validating the dirs 

+Output:+ Even though the name quota value is lower than the existing number of 
dirs, it is not validated and the new value is allowed to be set.

+Suggestion:+ Validate the name quota against the number of contents before 
setting the new value.

*+Case-2+*

Step-1) Add any new folder or file; it will give an error message:
mkdir: The NameSpace quota (directories and files) of directory /test is 
exceeded: quota=2 file count=5
Step-2) Clear the quota 
hdfs dfsadmin -clrQuota /test
Step-3) Now set the space quota lower than the folder size 
hdfs dfsadmin -setSpaceQuota 10 /test

+Output:+ Even though the space quota value is less than the size of the existing 
dir contents, it is not validated and the new value is allowed to be set.

+Suggestion:+ Validate the quota against the used space before setting the new 
value.
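
Independent of this report, a quick way to inspect a directory's current quota 
and usage before (or after) setting a new value:

hdfs dfs -count -q /test

This prints the name quota, remaining name quota, space quota, remaining space 
quota, and the current directory count, file count, and content size.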




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8159) [HDFS-Quota] Verification is not done while setting dir namequota and size

2015-04-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498036#comment-14498036
 ] 

Allen Wittenauer commented on HDFS-8159:


Is the issue that you can set it to a value lower than what is currently 
present? If so, that is working as intended. Otherwise, admins would never be 
able to lower quotas on already existing directories with quotas. This also 
acts as a forcing function to remove data.
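
In other words, with the steps above, the following sequence is the expected 
behaviour (illustrative, consistent with the error already quoted in the report):

hdfs dfsadmin -setQuota 2 /test   (succeeds even though /test already has three subdirectories)
hdfs dfs -mkdir /test/four        (fails because the NameSpace quota of /test is exceeded)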

> [HDFS-Quota] Verification is not done while setting dir namequota and size
> --
>
> Key: HDFS-8159
> URL: https://issues.apache.org/jira/browse/HDFS-8159
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Suse 11 SP3
>Reporter: Jagadesh Kiran N
>Priority: Minor
>
> Name quota and space quota are not verified when setting a new value on a 
> directory which already has subdirectories or contents.
> Below are the steps to reproduce the cases:
> *+Case-1+*
> Step-1) Create a new folder 
> hdfs dfs -mkdir /test
> Step-2) Create sub folders
> hdfs dfs -mkdir /test/one
> hdfs dfs -mkdir /test/two
> hdfs dfs -mkdir /test/three
> Step-3) Set the name quota to two 
> hdfs dfsadmin  -setQuota 2 /test
> Step-4) The quota will be set without validating the dirs 
> +Output:+ Even though the name quota value is lower than the existing number of 
> dirs, it is not validated and the new value is allowed to be set.
> +Suggestion:+ Validate the name quota against the number of contents before 
> setting the new value.
> *+Case-2+*
> Step-1) Add any new folder or file; it will give an error message:
> mkdir: The NameSpace quota (directories and files) of directory /test is 
> exceeded: quota=2 file count=5
> Step-2) Clear the quota 
> hdfs dfsadmin -clrQuota /test
> Step-3) Now set the space quota lower than the folder size 
> hdfs dfsadmin -setSpaceQuota 10 /test
> +Output:+ Even though the space quota value is less than the size of the existing 
> dir contents, it is not validated and the new value is allowed to be set.
> +Suggestion:+ Validate the quota against the used space before setting the 
> new value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498073#comment-14498073
 ] 

Andrew Wang commented on HDFS-8142:
---

LGTM, good find. Thanks Rakesh, will commit shortly.

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.
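
A minimal sketch of the resolution idea (an illustration only, not the code from 
the HDFS-8142 patch):

{code:java}
import org.apache.hadoop.fs.Path;

class RelativePathResolution {
  // Resolve a possibly-relative path against the caller's working directory
  // before handing it to the encryption-zone RPCs.
  static Path resolveAgainstWorkingDir(Path workingDir, Path p) {
    return p.isAbsolute() ? p : new Path(workingDir, p);
  }
}
{code}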



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8142:
--
Affects Version/s: 2.6.0

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: encryption
> Fix For: 2.8.0
>
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8142:
--
Labels: encryption  (was: )

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: encryption
> Fix For: 2.8.0
>
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8142:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks again Rakesh!

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: encryption
> Fix For: 2.8.0
>
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8142:
--
Summary: DistributedFileSystem encryption zone commands should resolve 
relative paths  (was: DistributedFileSystem#EncryptionZones should resolve 
given path relative to workingDir)

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498080#comment-14498080
 ] 

Hudson commented on HDFS-8142:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7596 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7596/])
HDFS-8142. DistributedFileSystem encryption zone commands should resolve 
relative paths. Contributed by Rakesh R. (wang: rev 
2e8ea780a45c0eccb8f106b2bf072b59446a1cc4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: encryption
> Fix For: 2.8.0
>
> Attachments: HDFS-8142-001.patch
>
>
> Presently the {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs do not 
> resolve the given path relative to the {{workingDir}}. This JIRA is to 
> discuss and implement that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8160) Long delays when calling hdfsOpenFile()

2015-04-16 Thread Rod (JIRA)
Rod created HDFS-8160:
-

 Summary: Long delays when calling hdfsOpenFile()
 Key: HDFS-8160
 URL: https://issues.apache.org/jira/browse/HDFS-8160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.5.2
 Environment: 3-node Apache Hadoop 2.5.2 cluster running on Ubuntu 
14.04 

dfshealth overview:
Security is off.
Safemode is off.

8 files and directories, 9 blocks = 17 total filesystem object(s).

Heap Memory used 45.78 MB of 90.5 MB Heap Memory. Max Heap Memory is 889 MB.

Non Heap Memory used 36.3 MB of 70.44 MB Committed Non Heap Memory. Max Non Heap 
Memory is 130 MB.
Configured Capacity:  118.02 GB
DFS Used:             2.77 GB
Non DFS Used:         12.19 GB
DFS Remaining:        103.06 GB
DFS Used%:            2.35%
DFS Remaining%:       87.32%
Block Pool Used:      2.77 GB
Block Pool Used%:     2.35%
DataNodes usages% (Min/Median/Max/stdDev): 2.35% / 2.35% / 2.35% / 0.00%
Live Nodes:            3 (Decommissioned: 0)
Dead Nodes:            0 (Decommissioned: 0)
Decommissioning Nodes: 0
Number of Under-Replicated Blocks: 0
Number of Blocks Pending Deletion: 0

Datanode Information
In operation
Node                          | Last contact | Admin State | Capacity | Used      | Non DFS Used | Remaining | Blocks | Block pool used   | Failed Volumes | Version
hadoop252-3 (x.x.x.10:50010)  | 1            | In Service  | 39.34 GB | 944.85 MB | 3.63 GB      | 34.79 GB  | 9      | 944.85 MB (2.35%) | 0              | 2.5.2
hadoop252-1 (x.x.x.8:50010)   | 0            | In Service  | 39.34 GB | 944.85 MB | 4.94 GB      | 33.48 GB  | 9      | 944.85 MB (2.35%) | 0              | 2.5.2
hadoop252-2 (x.x.x.9:50010)   | 1            | In Service  | 39.34 GB | 944.85 MB | 3.63 GB      | 34.79 GB  | 9      | 944.85 MB (2.35%) | 0              | 2.5.2

java version "1.7.0_76"
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
Reporter: Rod


Calling hdfsOpenFile() on a file residing on the target 3-node Hadoop cluster 
(described in detail in the Environment section) blocks for a long time (several 
minutes). I have noticed that the delay is related to the size of the target 
file. 
For example, attempting hdfsOpenFile() on a file of size 852483361 bytes took 
121 seconds, while a file of 15458 bytes took less than a second.

Also, during the long delay, the following stack trace is written to standard out:

2015-04-16 10:32:13,943 WARN  [main] hdfs.BlockReaderFactory 
(BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error 
constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/10.40.8.10:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
2015-04-16 10:32:13,946 WARN  [main] hdfs.DFSClient 
(DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /10.40.8.10:50010 
for block, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/10.40.8.10:50010]
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/10.40.8.10:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStrea

[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498122#comment-14498122
 ] 

Hudson commented on HDFS-8151:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #166 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/166/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters thus the diff 
> report computation on the target cluster fails
> # there is modification happening in the target cluster thus 
> {{checkNoChange}} returns false
> In other cases like source and target FS are not DistributedFileSystem, we 
> should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498124#comment-14498124
 ] 

Hudson commented on HDFS-8149:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #166 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/166/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Need to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498128#comment-14498128
 ] 

Hudson commented on HDFS-7934:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #166 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/166/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During Rolling upgrade rollback , standby namenode startup fails , while 
> loading edits and when  there is no local copy of edits created after upgrade 
> ( which is already been removed  by Active Namenode from journal manager and 
> from Active's local). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498123#comment-14498123
 ] 

Hudson commented on HDFS-8144:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #166 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/166/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8161) Both Namenodes are in standby State

2015-04-16 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8161:
--

 Summary: Both Namenodes are in standby State
 Key: HDFS-8161
 URL: https://issues.apache.org/jira/browse/HDFS-8161
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Scenario:

Start the cluster with three nodes.
Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
session open with the ZK on this machine.)

Now the Active NN's ZKFC session expires and it tries to re-establish the connection 
with another ZK. By that time the Standby NN's ZKFC will try to fence the old active, 
create the active breadcrumb znode and make the SNN active.

But immediately it transitions back to standby state. (Here is the doubt.)

Hence both will be in standby state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8161) Both Namenodes are in standby State

2015-04-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498164#comment-14498164
 ] 

Brahma Reddy Battula commented on HDFS-8161:


 *{color:blue}ACTIVE NAMENODE ZKFC :{color}* 
==

{noformat}
2015-04-16 11:32:36,872 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(ZKHOST-212:24002) | Client session timed out, 
have not heard from server in 30015ms for sessionid 0x154cb2b3e4746ace, closing 
socket connection and attempting reconnect | 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1120)
2015-04-16 11:32:36,974 | INFO  | Health Monitor for NameNode at 
IP-114/IP.114:25000-EventThread | Session disconnected. Entering neutral 
mode... | 
org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:558)
2015-04-16 11:32:37,632 | INFO  | Health Monitor for NameNode at 
IP-114/IP.114:25000-SendThread(HOST-114:24002) | Client will use GSSAPI as SASL 
mechanism. | 
org.apache.zookeeper.client.ZooKeeperSaslClient$1.run(ZooKeeperSaslClient.java:285)
2015-04-16 11:32:37,633 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-114:24002) | Opening socket connection to 
server HOST-114/IP.114:24002. Will attempt to SASL-authenticate using Login 
Context section 'Client' | 
org.apache.zookeeper.ClientCnxn$SendThread.logStartConnect(ClientCnxn.java:999)
2015-04-16 11:32:37,634 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-114:24002) | Socket connection 
established to HOST-114/IP.114:24002, initiating session | 
org.apache.zookeeper.ClientCnxn$SendThread.primeConnection(ClientCnxn.java:854)
2015-04-16 11:32:37,635 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-114:24002) | Unable to reconnect to 
ZooKeeper service, session 0x154cb2b3e4746ace has expired, closing socket 
connection | 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1118)
2015-04-16 11:32:37,636 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | Session expired. Entering neutral mode and 
rejoining... | 
org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:568)
2015-04-16 11:32:37,636 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | Trying to re-establish ZK session | 
org.apache.hadoop.ha.ActiveStandbyElector.reJoinElection(ActiveStandbyElector.java:670)
2015-04-16 11:32:37,639 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | Initiating client connection, 
connectString=ZKHOST-212:24002,HOST-114:24002,HOST-117:24002 
sessionTimeout=45000 
watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@2127d120 
| org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:438)
2015-04-16 11:32:37,641 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-117:24002) | Client will use GSSAPI as 
SASL mechanism. | 
org.apache.zookeeper.client.ZooKeeperSaslClient$1.run(ZooKeeperSaslClient.java:285)
2015-04-16 11:32:37,642 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-117:24002) | Opening socket connection to 
server HOST-117/IP.117:24002. Will attempt to SASL-authenticate using Login 
Context section 'Client' | 
org.apache.zookeeper.ClientCnxn$SendThread.logStartConnect(ClientCnxn.java:999)
2015-04-16 11:32:37,642 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-117:24002) | Socket connection 
established to HOST-117/IP.117:24002, initiating session | 
org.apache.zookeeper.ClientCnxn$SendThread.primeConnection(ClientCnxn.java:854)
2015-04-16 11:32:37,661 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-SendThread(HOST-117:24002) | Session establishment 
complete on server HOST-117/IP.117:24002, sessionid = 0x174cbd419924047a, 
negotiated timeout = 45000 | 
org.apache.zookeeper.ClientCnxn$SendThread.onConnected(ClientCnxn.java:1259)
2015-04-16 11:32:37,664 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | EventThread shut down | 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512)
2015-04-16 11:32:37,666 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | Session connected. | 
org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:547)
2015-04-16 11:32:37,672 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | Successfully authenticated to ZooKeeper 
using SASL. | 
org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:573)
2015-04-16 11:32:37,699 | INFO  | Health Monitor for NameNode at 
HOST-114/IP.114:25000-EventThread | ZK Election indicated that NameNode at 
HOST-114/IP.114:25000 should become standby | 
org.apache.hadoop.ha.ZKFailoverController.becomeStandby(ZKF

[jira] [Updated] (HDFS-8161) Both Namenodes are in standby State

2015-04-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8161:
---
Attachment: ACTIVEBreadcumb and StandbyElector.txt

> Both Namenodes are in standby State
> ---
>
> Key: HDFS-8161
> URL: https://issues.apache.org/jira/browse/HDFS-8161
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: ACTIVEBreadcumb and StandbyElector.txt
>
>
> Scenario:
> 
> Start the cluster with three nodes.
> Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
> session open with the ZK on this machine.)
> Now the Active NN's ZKFC session expires and it tries to re-establish the 
> connection with another ZK. By that time the Standby NN's ZKFC will try to fence 
> the old active, create the active breadcrumb znode and make the SNN active.
> But immediately it transitions back to standby state. (Here is the doubt.)
> Hence both will be in standby state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498175#comment-14498175
 ] 

Hudson commented on HDFS-8151:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2115 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2115/])
HDFS-8151. Always use snapshot path as source when invalid snapshot names are 
used for diff based distcp. Contributed by Jing Zhao. (jing9: rev 
4c097e473bb1f18d1510deb61bae2bcb8c156f18)
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java


> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters thus the diff 
> report computation on the target cluster fails
> # there is modification happening in the target cluster thus 
> {{checkNoChange}} returns false
> In other cases like source and target FS are not DistributedFileSystem, we 
> should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8149) The footer of the Web UI "Hadoop, 2014" is old

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498177#comment-14498177
 ] 

Hudson commented on HDFS-8149:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2115 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2115/])
HDFS-8149. The footer of the Web UI "Hadoop, 2014" is old. Contributed by 
Brahma Reddy Battula. (aajisaka: rev de0f1700c150a819b38028c44ef1926507086e6c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> The footer of the Web UI "Hadoop, 2014" is old
> --
>
> Key: HDFS-8149
> URL: https://issues.apache.org/jira/browse/HDFS-8149
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.1
>
> Attachments: HDFS-8149.patch
>
>
> Need to be updated to 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7934) Update RollingUpgrade rollback documentation: should use bootstrapstandby for standby NN

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498182#comment-14498182
 ] 

Hudson commented on HDFS-7934:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2115 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2115/])
HDFS-7934. Update RollingUpgrade rollback documentation: should use 
bootstrapstandby for standby NN. Contributed by J. Andreina. (jing9: rev 
b172d03595d1591e7f542791224607d8c5fce3e2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml


> Update RollingUpgrade rollback documentation: should use bootstrapstandby for 
> standby NN
> 
>
> Key: HDFS-7934
> URL: https://issues.apache.org/jira/browse/HDFS-7934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7934.1.patch, HDFS-7934.2.patch
>
>
> During Rolling upgrade rollback , standby namenode startup fails , while 
> loading edits and when  there is no local copy of edits created after upgrade 
> ( which is already been removed  by Active Namenode from journal manager and 
> from Active's local). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8144) Split TestLazyPersistFiles into multiple tests

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498176#comment-14498176
 ] 

Hudson commented on HDFS-8144:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2115 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2115/])
HDFS-8144. Split TestLazyPersistFiles into multiple tests. (Arpit Agarwal) 
(arp: rev 9e8309a1b2989d07d43e20940d9ac12b7b43482f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaPlacement.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistReplicaRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java


> Split TestLazyPersistFiles into multiple tests
> --
>
> Key: HDFS-8144
> URL: https://issues.apache.org/jira/browse/HDFS-8144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8144.01.patch, HDFS-8144.02.patch
>
>
> TestLazyPersistFiles has grown too large and includes both NN and DN tests. 
> We can split up related tests into smaller files to keep the test case 
> manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8161) Both Namenodes are in standby State

2015-04-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498186#comment-14498186
 ] 

Brahma Reddy Battula commented on HDFS-8161:


Attached the active breadcrumb details.


I think the following code needs to be modified:

{code}
 Code code = Code.get(rc);
if (isSuccess(code)) {
  // we successfully created the znode. we are the leader. start monitoring
  if (becomeActive()) {
monitorActiveStatus();
  } else {
reJoinElectionAfterFailureToBecomeActive();
  }
  return;
}

if (isNodeExists(code)) {
  if (createRetryCount == 0) {
// znode exists and we did not retry the operation. so a different
// instance has created it. become standby and monitor lock.
becomeStandby();
  }
  // if we had retried then the znode could have been created by our first
  // attempt to the server (that we lost) and this node exists response is
  // for the second attempt. verify this case via ephemeral node owner. this
  // will happen on the callback for monitoring the lock.
  monitorActiveStatus();
  return;
}
{code}


Any pointers on this issue?

> Both Namenodes are in standby State
> ---
>
> Key: HDFS-8161
> URL: https://issues.apache.org/jira/browse/HDFS-8161
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: ACTIVEBreadcumb and StandbyElector.txt
>
>
> Scenario:
> 
> Start the cluster with three nodes.
> Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
> session open with the ZK on this machine.)
> Now the Active NN's ZKFC session expires and it tries to re-establish the 
> connection with another ZK. By that time the Standby NN's ZKFC will try to fence 
> the old active, create the active breadcrumb znode and make the SNN active.
> But immediately it transitions back to standby state. (Here is the doubt.)
> Hence both will be in standby state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8162) Stack trace routed to standard out

2015-04-16 Thread Rod (JIRA)
Rod created HDFS-8162:
-

 Summary: Stack trace routed to standard out
 Key: HDFS-8162
 URL: https://issues.apache.org/jira/browse/HDFS-8162
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.5.2
Reporter: Rod
Priority: Minor


Calling hdfsOpenFile() can generate a stack trace printout to standard out, 
which can be problematic for a caller program that is making use of standard 
out. libhdfs stack traces should be routed to standard error.

Example of stacktrace:
WARN  [main] hdfs.BlockReaderFactory 
(BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error 
constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
2015-04-16 10:32:13,946 WARN  [main] hdfs.DFSClient 
(DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /x.x.x.10:50010 
for block, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
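
As a possible client-side workaround until the destination is fixed, the log4j 
configuration picked up by the embedded JVM can point its console appender at 
standard error. A minimal sketch, assuming the client's logging is driven by a 
log4j.properties on the CLASSPATH (which the timestamped WARN lines above suggest):

{code}
# log4j.properties fragment for the process hosting libhdfs (illustrative)
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
# Send all console logging, including the stack traces above, to stderr
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
{code}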




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality

2015-04-16 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reassigned HDFS-7678:
---

Assignee: Zhe Zhang  (was: Li Bo)

> Erasure coding: DFSInputStream with decode functionality
> 
>
> Key: HDFS-7678
> URL: https://issues.apache.org/jira/browse/HDFS-7678
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Zhe Zhang
> Attachments: BlockGroupReader.patch
>
>
> A block group reader will read data from a BlockGroup whether it is in striping 
> layout or contiguous layout. The corrupt blocks can be known before 
> reading (told by the namenode), or may only be found during reading. The block group 
> reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality

2015-04-16 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7678 started by Zhe Zhang.
---
> Erasure coding: DFSInputStream with decode functionality
> 
>
> Key: HDFS-7678
> URL: https://issues.apache.org/jira/browse/HDFS-7678
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Zhe Zhang
> Attachments: BlockGroupReader.patch
>
>
> A block group reader will read data from a BlockGroup whether it is in striping 
> layout or contiguous layout. The corrupt blocks can be known before 
> reading (told by the namenode), or may only be found during reading. The block group 
> reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498281#comment-14498281
 ] 

Jakob Homan commented on HDFS-8155:
---

Hey Kai-
   This JIRA is part of the larger effort of 8154 to make the WebHDFS REST 
specification more general and accessible to other clients and back-end 
implementations.  It will likely build on your work to add OAuth2 throughout 
the system.  

Effectively, this JIRA is for two items: a) add OAuth2 as a possible 
[authentication 
method|https://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Authentication]
 (along with SPNEGO, simple and delegation tokens) and b) add support in 
{{WebHdfsFileSystem}} for passing OAuth tokens (or obtaining those tokens via 
configuration-supplied credentials or username/password) to the WebHDFS 
backend.  I'm interested in the client and non-Namenode WebHDFS backends, while 
you're focusing on the Namenode and other current components.  

I would like to get the change to the WebHDFS spec and support on the client in 
soon.  Happy to use your code, or to commit it if it's ready.
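
To make item (a) concrete, here is a rough illustration of a client attaching an 
OAuth2 bearer token to a WebHDFS call. This is not the proposed spec: the 
header-based approach (plain RFC 6750 bearer usage), the token acquisition, and the 
host/port are all assumptions that still need to be settled in this JIRA.

{code}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsOAuth2Sketch {
  public static void main(String[] args) throws Exception {
    // Assumption: the access token was obtained out of band from an
    // authorization server; how WebHDFS clients acquire and refresh it is
    // exactly what needs to be defined here.
    String accessToken = args[0];

    URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/file.txt?op=OPEN");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Standard OAuth2 bearer header; whether WebHDFS adopts this header or a
    // query parameter is an open question for the spec change.
    conn.setRequestProperty("Authorization", "Bearer " + accessToken);
    conn.setInstanceFollowRedirects(true); // OPEN is redirected to a datanode

    byte[] buf = new byte[8192];
    try (InputStream in = conn.getInputStream()) {
      int n;
      while ((n = in.read(buf)) != -1) {
        System.out.write(buf, 0, n); // dump the file contents
      }
    }
  }
}
{code}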

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8161) Both Namenodes are in standby State

2015-04-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8161:
---
Description: 
Suspected Scenario:

Start the cluster with three nodes.
Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
session open with the ZK on this machine.)

Now the Active NN's ZKFC session expires and it tries to re-establish the connection 
with another ZK. By that time the Standby NN's ZKFC will try to fence the old active, 
create the active breadcrumb znode and make the SNN active.

But immediately it transitions back to standby state. (Here is the doubt.)

Hence both will be in standby state.

  was:
Scenario:

Start the cluster with three nodes.
Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
session open with the ZK on this machine.)

Now the Active NN's ZKFC session expires and it tries to re-establish the connection 
with another ZK. By that time the Standby NN's ZKFC will try to fence the old active, 
create the active breadcrumb znode and make the SNN active.

But immediately it transitions back to standby state. (Here is the doubt.)

Hence both will be in standby state.


> Both Namenodes are in standby State
> ---
>
> Key: HDFS-8161
> URL: https://issues.apache.org/jira/browse/HDFS-8161
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: ACTIVEBreadcumb and StandbyElector.txt
>
>
> Suspected Scenario:
> 
> Start the cluster with three nodes.
> Reboot the machine where ZKFC is not running. (Here the Active NN's ZKFC has its 
> session open with the ZK on this machine.)
> Now the Active NN's ZKFC session expires and it tries to re-establish the 
> connection with another ZK. By that time the Standby NN's ZKFC will try to fence 
> the old active, create the active breadcrumb znode and make the SNN active.
> But immediately it transitions back to standby state. (Here is the doubt.)
> Hence both will be in standby state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8146) Protobuf changes for BlockECRecoveryCommand and its fields for making it ready for transfer to DN

2015-04-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8146 started by Uma Maheswara Rao G.
-
> Protobuf changes for BlockECRecoveryCommand and its fields for making it 
> ready for transfer to DN 
> --
>
> Key: HDFS-8146
> URL: https://issues.apache.org/jira/browse/HDFS-8146
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-8146.0.patch
>
>
> As part of working on HDFS-8137, we need to prepare BlockECRecoveryCommand, 
> BlockECRecoveryInfo, DatanodeStorageInfo and DatanodeDescriptor (we can use 
> DatanodeInfo for proto transfer) so that they are ready in proto format for 
> transferring them in the command. Since all this code should be straightforward, 
> and to have a better focused review on the core part, I propose to separate this 
> part into this JIRA. First I will make all these supporting classes 
> protobuf ready and then transfer them to the DN as part of HDFS-8137 by including 
> ECSchema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8146) Protobuf changes for BlockECRecoveryCommand and its fields for making it ready for transfer to DN

2015-04-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-8146:
--
Attachment: HDFS-8146.0.patch

Attached the patch for review.

> Protobuf changes for BlockECRecoveryCommand and its fields for making it 
> ready for transfer to DN 
> --
>
> Key: HDFS-8146
> URL: https://issues.apache.org/jira/browse/HDFS-8146
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-8146.0.patch
>
>
> As part of working on HDFS-8137, we need to prepare BlockECRecoveryCommand, 
> BlockECRecoveryInfo, DatanodeStorageInfo and DatanodeDescriptor (we can use 
> DatanodeInfo for proto transfer) so that they are ready in proto format for 
> transferring them in the command. Since all this code should be straightforward, 
> and to have a better focused review on the core part, I propose to separate this 
> part into this JIRA. First I will make all these supporting classes 
> protobuf ready and then transfer them to the DN as part of HDFS-8137 by including 
> ECSchema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498444#comment-14498444
 ] 

Haohui Mai commented on HDFS-8155:
--

I think that there are two use cases here:

* Using WebHDFS in UI
* Using WebHDFS programmatically (e.g., through {{WebHdfsFileSystem}})

For the first use case -- WebHDFS now recognizes the auth cookie of the UI, 
so the UI works as long as any third-party filter behaves correctly 
w.r.t. the UI pages.

For the second use case -- WebHDFS is designed to use a DT as the authentication 
method. To authenticate, the third-party filter (the OAuth2 filter included) should 
decide whether to issue a DT when it handles the {{GETDELEGATIONTOKEN}} call. The DT 
then needs to be presented to the server in all subsequent requests.

I don't think injecting any third-party payload (e.g., OAuth tokens) into 
{{WebHdfsFileSystem}} makes sense.
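
For reference, the DT flow described above looks roughly like this from a plain HTTP 
client's point of view. A minimal sketch: the NameNode address, the renewer value and 
the crude token extraction are illustrative only; real clients would go through 
{{WebHdfsFileSystem}} and a proper JSON parser.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsDelegationTokenSketch {

  static String httpGet(String urlStr) throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(urlStr).openConnection();
    StringBuilder body = new StringBuilder();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
      String line;
      while ((line = r.readLine()) != null) {
        body.append(line);
      }
    }
    return body.toString();
  }

  public static void main(String[] args) throws Exception {
    String nn = "http://namenode:50070";

    // 1. Authenticate once -- this is the point where a third-party filter
    //    (e.g. OAuth2) decides whether to hand out a delegation token.
    String json = httpGet(nn + "/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs");
    // Documented response shape: {"Token":{"urlString":"..."}}; a crude
    // extraction is enough for this sketch.
    String token = json.replaceAll(".*\"urlString\"\\s*:\\s*\"([^\"]+)\".*", "$1");

    // 2. Present the DT on every subsequent call instead of re-authenticating.
    System.out.println(httpGet(nn
        + "/webhdfs/v1/tmp/file.txt?op=GETFILESTATUS&delegation=" + token));
  }
}
{code}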

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8135) Remove the deprecated FSConstants class

2015-04-16 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8135:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gtCarrera9] for the 
contribution.

> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 2.8.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There is 
> no uses of this class in the current code base and it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode

2015-04-16 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498457#comment-14498457
 ] 

Haohui Mai commented on HDFS-7959:
--

[~kihwal], is the latest patch ready to be committed?

> WebHdfs logging is missing on Datanode
> --
>
> Key: HDFS-7959
> URL: https://issues.apache.org/jira/browse/HDFS-7959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-7959.patch, HDFS-7959.patch
>
>
> After the conversion to netty, webhdfs requests are not logged on datanodes. 
> The existing jetty log only logs the non-webhdfs requests that come through 
> the internal proxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-8153:
---
Target Version/s: 2.7.1  (was: 2.5.2)

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name is in error message. In 
> this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8156) Define some system schemas in codes

2015-04-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498465#comment-14498465
 ] 

Zhe Zhang commented on HDFS-8156:
-

Thanks Kai for the patch! The overall logic looks good.

# Sorry that I didn't have a chance to review the initial HADOOP-11643 patch. 
{{options}} could use some Javadoc to explain. If {{ECSchema}} is for a single 
schema, why do we need a set of options, each containing {{NUM_DATA_UNITS}} 
etc.?
# The change in {{ECSchemaManager}} looks clear and fits the JIRA description. 
# We should add a test to get schema by name. E.g., we can test a valid name 
and a non-existing name; a rough sketch follows below.
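
A rough sketch of what such a test could look like. The accessor, schema and package 
names here are placeholders; the real test should follow whatever lookup method the 
patch actually adds to {{ECSchemaManager}} and the system schemas it defines.

{code}
// Placeholder imports: adjust to the actual packages of ECSchema/ECSchemaManager.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class TestGetSchemaByName {

  @Test
  public void testGetSchemaByName() {
    ECSchemaManager manager = new ECSchemaManager();

    // Valid name: assuming a predefined system schema named "RS-6-3" exists
    // with 6 data units and 3 parity units.
    ECSchema schema = manager.getSchema("RS-6-3");
    assertEquals("RS-6-3", schema.getSchemaName());
    assertEquals(6, schema.getNumDataUnits());
    assertEquals(3, schema.getNumParityUnits());

    // Non-existing name: assuming the manager returns null here; the assertion
    // should match whichever contract (null vs. exception) the patch chooses.
    assertNull(manager.getSchema("no-such-schema"));
  }
}
{code}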

> Define some system schemas in codes
> ---
>
> Key: HDFS-8156
> URL: https://issues.apache.org/jira/browse/HDFS-8156
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-8156-v1.patch
>
>
> This is to define and add some system schemas in codes, and also resolve some 
> TODOs left for HDFS-7859 and HDFS-7866 as they're still subject to further 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498461#comment-14498461
 ] 

Hudson commented on HDFS-8135:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7598 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7598/])
HDFS-8135. Remove the deprecated FSConstants class. Contributed by Li Lu. 
(wheat9: rev 80a2a1242337648135cab0c877203263d1092248)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 2.8.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There is 
> no uses of this class in the current code base and it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8142) DistributedFileSystem encryption zone commands should resolve relative paths

2015-04-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498463#comment-14498463
 ] 

Rakesh R commented on HDFS-8142:


Thanks a lot [~andrew.wang] for reviewing and committing the patch. Also, 
thanks [~clamb] for the offline discussions.

> DistributedFileSystem encryption zone commands should resolve relative paths
> 
>
> Key: HDFS-8142
> URL: https://issues.apache.org/jira/browse/HDFS-8142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: encryption
> Fix For: 2.8.0
>
> Attachments: HDFS-8142-001.patch
>
>
> Presently {{DFS#createEncryptionZone}} and {{DFS#getEZForPath}} APIs are not 
> resolving the given path relative to the {{workingDir}}. This jira is to 
> discuss and provide the implementation of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8153:
---
Status: Patch Available  (was: Open)

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name is in error message. In 
> this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-04-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498471#comment-14498471
 ] 

Zhe Zhang commented on HDFS-7949:
-

bq. IIUC your suggestion is to move the #spaceConsumed function to 
StripedBlockUtil utility, isn't it?
Yes that's what I meant. Now HDFS-8120 is already in.

> WebImageViewer need support file size calculation with striped blocks
> -
>
> Key: HDFS-7949
> URL: https://issues.apache.org/jira/browse/HDFS-7949
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Hui Zheng
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch, 
> HDFS-7949-003.patch, HDFS-7949-004.patch
>
>
> The file size calculation should be changed when the blocks of the file are 
> striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498479#comment-14498479
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6153:
---

This is an incompatible API change.  "fileId" and "childrenNum" cannot be added 
as required fields since doing so breaks API compatibility.  This change is already 
in some releases.  How do we fix it?

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498483#comment-14498483
 ] 

Jitendra Nath Pandey commented on HDFS-8153:


[~anu],
   The fix looks ok to me. However, this means that the following line is also 
broken
  {{final INodeDirectory parent = existing.getINode(pos - 1).asDirectory();}}
which is used for {{verifyMaxDirItems(parent, parentPath);}} in the same 
method. 

If that is the case, we should file another jira to track the fix for that too.

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name is in error message. In 
> this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6153:
--
Hadoop Flags: Incompatible change

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8163:
---

 Summary: Using monotonicNow for block report scheduling causes 
test failures on recently restarted systems
 Key: HDFS-8163
 URL: https://issues.apache.org/jira/browse/HDFS-8163
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.1
Reporter: Arpit Agarwal
Priority: Blocker


{{BPServiceActor#blockReport}} has the following check:

{code}
  List<DatanodeCommand> blockReport() throws IOException {
// send block report if timer has expired.
final long startTime = monotonicNow();
if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
  return null;
}
{code}

Many tests set lastBlockReport to zero to trigger an immediate block report via 
{{BPServiceActor#triggerBlockReportForTests}}. However if the machine was 
restarted recently then this startTime could be less than 
{{dnConf.blockReportInterval}} and the block report is not sent.

{{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
since an arbitrary origin. The time should be used only for comparison with 
values returned by {{System#nanoTime}}.
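
A minimal sketch of the failure mode. The interval constant and the "next due time" 
idea are illustrative only, not the actual BPServiceActor code:

{code}
import java.util.concurrent.TimeUnit;

public class MonotonicClockPitfall {

  // Stand-in for Time#monotonicNow: milliseconds derived from System#nanoTime,
  // whose origin is arbitrary (on Linux it is typically close to boot time).
  static long monotonicNow() {
    return TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
  }

  public static void main(String[] args) {
    long blockReportIntervalMs = TimeUnit.HOURS.toMillis(6); // default interval
    long lastBlockReport = 0; // what triggerBlockReportForTests effectively sets

    long now = monotonicNow();
    // On a machine rebooted less than 6 hours ago, 'now' can be smaller than
    // the interval, so the "timer expired" check fails and the report that the
    // test explicitly requested is silently skipped.
    boolean reportDue = (now - lastBlockReport) > blockReportIntervalMs;
    System.out.println("now=" + now + " ms since an arbitrary origin, reportDue=" + reportDue);

    // Safer pattern: never compare a monotonic reading against an absolute
    // sentinel such as 0; track the next due time using monotonic readings only.
    long nextBlockReportTime = monotonicNow(); // "due immediately"
    boolean dueNow = monotonicNow() - nextBlockReportTime >= 0;
    System.out.println("dueNow=" + dueNow);
  }
}
{code}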



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8163:

Description: 
{{BPServiceActor#blockReport}} has the following check:

{code}
  List<DatanodeCommand> blockReport() throws IOException {
// send block report if timer has expired.
final long startTime = monotonicNow();
if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
  return null;
}
{code}

Many tests trigger an immediate block report via 
{{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
0}}. However if the machine was restarted recently then startTime will be less 
than {{dnConf.blockReportInterval}} and the block report is not sent.

{{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
since an arbitrary origin. The time should be used only for comparison with 
other values returned by {{System#nanoTime}}.

  was:
{{BPServiceActor#blockReport}} has the following check:

{code}
  List<DatanodeCommand> blockReport() throws IOException {
// send block report if timer has expired.
final long startTime = monotonicNow();
if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
  return null;
}
{code}

Many tests set lastBlockReport to zero to trigger an immediate block report via 
{{BPServiceActor#triggerBlockReportForTests}}. However if the machine was 
restarted recently then this startTime could be less than 
{{dnConf.blockReportInterval}} and the block report is not sent.

{{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
since an arbitrary origin. The time should be used only for comparison with 
values returned by {{System#nanoTime}}.


> Using monotonicNow for block report scheduling causes test failures on 
> recently restarted systems
> -
>
> Key: HDFS-8163
> URL: https://issues.apache.org/jira/browse/HDFS-8163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.1
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> {{BPServiceActor#blockReport}} has the following check:
> {code}
>   List<DatanodeCommand> blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = monotonicNow();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> {code}
> Many tests trigger an immediate block report via 
> {{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
> 0}}. However if the machine was restarted recently then startTime will be 
> less than {{dnConf.blockReportInterval}} and the block report is not sent.
> {{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
> since an arbitrary origin. The time should be used only for comparison with 
> other values returned by {{System#nanoTime}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498526#comment-14498526
 ] 

Anu Engineer commented on HDFS-8153:


It is not really broken; the reason that code looks different is that the 
semantics of getINode seem to be different from getPath(int pos). In the 
second call we pass pos as the max length of the path component, and if you 
call getPath() without any argument then the pos parameter passed to 
DFSUtil.byteArray2PathString is the length of the path components.

It took me a while to notice this subtle difference. I did another fix in 
DFSUtil, but that caused some test failures, hence this minimally invasive fix.



> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name is in error message. In 
> this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-04-16 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-8164:
---

 Summary: cTime is 0 in VERSION file for newly formatted NameNode.
 Key: HDFS-8164
 URL: https://issues.apache.org/jira/browse/HDFS-8164
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Chris Nauroth
Priority: Minor


After formatting a NameNode and inspecting its VERSION file, the cTime property 
shows 0.  The value does get updated to current time during an upgrade, but I 
believe this is intended to be the creation time of the cluster, and therefore 
a value of 0 can cause confusion.
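
For illustration, the current/VERSION of a freshly formatted NameNode looks something 
like the following (all values here are made up except the cTime line, which is the 
point of this report):

{noformat}
#Thu Apr 16 10:32:13 PDT 2015
namespaceID=1234567890
clusterID=CID-11111111-2222-3333-4444-555555555555
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1234567890-10.0.0.1-1429205533000
layoutVersion=-60
{noformat}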



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-04-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-8164:

Description: After formatting a NameNode and inspecting its VERSION file, 
the cTime property shows 0.  The value does get updated to current time during 
an upgrade, but I believe this is intended to be the creation time of the 
cluster, and therefore the initial value of 0 before an upgrade can cause 
confusion.  (was: After formatting a NameNode and inspecting its VERSION file, 
the cTime property shows 0.  The value does get updated to current time during 
an upgrade, but I believe this is intended to be the creation time of the 
cluster, and therefore a value of 0 can cause confusion.)

> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Priority: Minor
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-04-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498544#comment-14498544
 ] 

Chris Nauroth commented on HDFS-8164:
-

I don't think this bug causes any significant impact, so I set priority to 
minor.  It's just a potential source of confusion when manually inspecting the 
metadata directories.

> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Priority: Minor
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8163:

Description: 
{{BPServiceActor#blockReport}} has the following check:

{code}
  List blockReport() throws IOException {
// send block report if timer has expired.
final long startTime = monotonicNow();
if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
  return null;
}
{code}

Many tests trigger an immediate block report via 
{{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
0}}. However if the machine was restarted recently then startTime may be less 
than {{dnConf.blockReportInterval}} and the block report is not sent.

{{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
since an arbitrary origin. The time should be used only for comparison with 
other values returned by {{System#nanoTime}}.

  was:
{{BPServiceActor#blockReport}} has the following check:

{code}
  List blockReport() throws IOException {
// send block report if timer has expired.
final long startTime = monotonicNow();
if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
  return null;
}
{code}

Many tests trigger an immediate block report via 
{{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
0}}. However if the machine was restarted recently then startTime will be less 
than {{dnConf.blockReportInterval}} and the block report is not sent.

{{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
since an arbitrary origin. The time should be used only for comparison with 
other values returned by {{System#nanoTime}}.


> Using monotonicNow for block report scheduling causes test failures on 
> recently restarted systems
> -
>
> Key: HDFS-8163
> URL: https://issues.apache.org/jira/browse/HDFS-8163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.1
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> {{BPServiceActor#blockReport}} has the following check:
> {code}
>   List blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = monotonicNow();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> {code}
> Many tests trigger an immediate block report via 
> {{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
> 0}}. However if the machine was restarted recently then startTime may be less 
> than {{dnConf.blockReportInterval}} and the block report is not sent.
> {{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
> since an arbitrary origin. The time should be used only for comparison with 
> other values returned by {{System#nanoTime}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) NullPointerException in BlockInfoContiguous causes block report failure

2015-04-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498553#comment-14498553
 ] 

Colin Patrick McCabe commented on HDFS-8113:


There are already a bunch of places in the code where we check whether 
BlockCollection is null before doing something with it.  Example:
{code}
if (block instanceof BlockInfoContiguous) {
  BlockCollection bc = ((BlockInfoContiguous) block).getBlockCollection();
  String fileName = (bc == null) ? "[orphaned]" : bc.getName();
  out.print(fileName + ": ");
}
{code}

also:
{code}
  private int getReplication(Block block) {
final BlockCollection bc = blocksMap.getBlockCollection(block);
return bc == null? 0: bc.getBlockReplication();
  }
{code}

I think that the majority of cases already have a check.  My suggestion is just 
that we extend this checking against null to all uses of the 
BlockInfoContiguous structure's block collection.

If the problem is too difficult to reproduce with a {{MiniDFSCluster}}, perhaps 
we can just do a unit test of the copy constructor itself.

As I said earlier, I don't understand the rationale for keeping blocks with no 
associated INode out of the BlocksMap.  It complicates the block report since 
it requires us to check whether each block has an associated inode or not 
before adding it to the BlocksMap.  But if that change seems too ambitious for 
this JIRA, we can deal with that later.
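
As a concrete illustration of the kind of null guard being discussed (a sketch 
of the idea only, not necessarily the final patch), the copy constructor itself 
could tolerate an orphaned block whose block collection is null, mirroring the 
{{getReplication}} example above:

{code}
// Sketch only: guard against a null BlockCollection in the copy constructor.
protected BlockInfoContiguous(BlockInfoContiguous from) {
  this(from, from.bc == null ? 0 : from.bc.getBlockReplication());
  this.bc = from.bc;
}
{code}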

> NullPointerException in BlockInfoContiguous causes block report failure
> ---
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
> Attachments: HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keeps failing doing block reports with 
> NameNode. The stacktrace is as follows. Though we are not using the latest 
> version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.<init>(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.<init>(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7993) Incorrect descriptions in fsck when nodes are decommissioned

2015-04-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498556#comment-14498556
 ] 

Colin Patrick McCabe edited comment on HDFS-7993 at 4/16/15 7:28 PM:
-

bq. Maybe we can change the description from repl to live repl? It will address 
the confusion others might have.

Can we do that in a separate JIRA?  Since it's an incompatible change we might 
want to do it only in Hadoop 3.0.  There are a lot of people parsing fsck 
output (unfortunately).

The rest looks good, if we can keep the existing output the same I would love 
to add the replicaDetails option.


was (Author: cmccabe):
bq, Maybe we can change the description from repl to live repl? It will address 
the confusion others might have.

Can we do that in a separate JIRA?  Since it's an incompatible change we might 
want to do it only in Hadoop 3.0.  There are a lot of people parsing fsck 
output (unfortunately).

The rest looks good, if we can keep the existing output the same I would love 
to add the replicaDetails option.

> Incorrect descriptions in fsck when nodes are decommissioned
> 
>
> Key: HDFS-7993
> URL: https://issues.apache.org/jira/browse/HDFS-7993
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ming Ma
>Assignee: J.Andreina
> Attachments: HDFS-7993.1.patch, HDFS-7993.2.patch, HDFS-7993.3.patch, 
> HDFS-7993.4.patch
>
>
> When you run fsck with "-files" or "-racks", you will get something like 
> below if one of the replicas is decommissioned.
> {noformat}
> blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
> {noformat}
> That is because in NamenodeFsck, the repl count comes from live replicas 
> count; while the actual nodes come from LocatedBlock which include 
> decommissioned nodes.
> Another issue in NamenodeFsck is BlockPlacementPolicy's verifyBlockPlacement 
> verifies LocatedBlock that includes decommissioned nodes. However, it seems 
> better to exclude the decommissioned nodes in the verification; just like how 
> fsck excludes decommissioned nodes when it check for under replicated blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7993) Incorrect descriptions in fsck when nodes are decommissioned

2015-04-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498556#comment-14498556
 ] 

Colin Patrick McCabe commented on HDFS-7993:


bq, Maybe we can change the description from repl to live repl? It will address 
the confusion others might have.

Can we do that in a separate JIRA?  Since it's an incompatible change we might 
want to do it only in Hadoop 3.0.  There are a lot of people parsing fsck 
output (unfortunately).

The rest looks good, if we can keep the existing output the same I would love 
to add the replicaDetails option.

> Incorrect descriptions in fsck when nodes are decommissioned
> 
>
> Key: HDFS-7993
> URL: https://issues.apache.org/jira/browse/HDFS-7993
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ming Ma
>Assignee: J.Andreina
> Attachments: HDFS-7993.1.patch, HDFS-7993.2.patch, HDFS-7993.3.patch, 
> HDFS-7993.4.patch
>
>
> When you run fsck with "-files" or "-racks", you will get something like 
> below if one of the replicas is decommissioned.
> {noformat}
> blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
> {noformat}
> That is because in NamenodeFsck, the repl count comes from live replicas 
> count; while the actual nodes come from LocatedBlock which include 
> decommissioned nodes.
> Another issue in NamenodeFsck is BlockPlacementPolicy's verifyBlockPlacement 
> verifies LocatedBlock that includes decommissioned nodes. However, it seems 
> better to exclude the decommissioned nodes in the verification; just like how 
> fsck excludes decommissioned nodes when it check for under replicated blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498559#comment-14498559
 ] 

Akira AJISAKA commented on HDFS-6153:
-

Thanks [~szetszwo] for pointing out.

bq. This change is already in some releases. How do we fix it?
2.5.X and 2.6.X are already released, so I'm thinking we should fix the 
document and commit it to trunk, branch-2, branch-2.7, branch-2.6, and 
branch-2.5.

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498574#comment-14498574
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6153:
---

Actually, I think we should revert the patch since fileId and childrenNum are 
only in HdfsFileStatus but not in FileStatus.  The WebHDFS REST API should 
support the general FileSystem API.  Otherwise, other projects like httpfs won't work.

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-8163:
---

Assignee: Arpit Agarwal

> Using monotonicNow for block report scheduling causes test failures on 
> recently restarted systems
> -
>
> Key: HDFS-8163
> URL: https://issues.apache.org/jira/browse/HDFS-8163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> {{BPServiceActor#blockReport}} has the following check:
> {code}
>   List blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = monotonicNow();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> {code}
> Many tests trigger an immediate block report via 
> {{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
> 0}}. However if the machine was restarted recently then startTime may be less 
> than {{dnConf.blockReportInterval}} and the block report is not sent.
> {{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
> since an arbitrary origin. The time should be used only for comparison with 
> other values returned by {{System#nanoTime}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8163 started by Arpit Agarwal.
---
> Using monotonicNow for block report scheduling causes test failures on 
> recently restarted systems
> -
>
> Key: HDFS-8163
> URL: https://issues.apache.org/jira/browse/HDFS-8163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> {{BPServiceActor#blockReport}} has the following check:
> {code}
>   List blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = monotonicNow();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> {code}
> Many tests trigger an immediate block report via 
> {{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
> 0}}. However if the machine was restarted recently then startTime may be less 
> than {{dnConf.blockReportInterval}} and the block report is not sent.
> {{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
> since an arbitrary origin. The time should be used only for comparison with 
> other values returned by {{System#nanoTime}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5995) TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError and dumps heap.

2015-04-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498597#comment-14498597
 ] 

Chris Nauroth commented on HDFS-5995:
-

This is no longer happening.  I'm resolving this as a duplicate of HDFS-6038.  
That patch introduced an explicit length field in the serialization of edit log 
ops, similar to what I described in my earlier comments here.  As a side effect 
of that patch, we no longer see {{OutOfMemoryError}} when 
{{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} runs.  Thank you to 
[~jingzhao] for that patch.
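
To illustrate why the explicit length field helps (a sketch only; the names and 
the sanity bound below are assumptions, not the HDFS-6038 code): the reader can 
validate the declared op length before allocating a buffer, so a corrupted 
length no longer triggers a huge allocation and an {{OutOfMemoryError}}.

{code}
// Illustrative sketch, not Hadoop code.
static byte[] readOpBody(java.io.DataInputStream in, int maxOpSize)
    throws java.io.IOException {
  int len = in.readInt();                      // explicit length prefix
  if (len < 0 || len > maxOpSize) {            // maxOpSize is an assumed sanity bound
    throw new java.io.IOException("Edit log op claims invalid length " + len);
  }
  byte[] body = new byte[len];
  in.readFully(body);                          // now safe to allocate and read
  return body;
}
{code}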

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
> and dumps heap.
> 
>
> Key: HDFS-5995
> URL: https://issues.apache.org/jira/browse/HDFS-5995
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode, test
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing 
> {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685.  This 
> doesn't actually cause the test to fail, because it's a failure test that 
> corrupts an edit log intentionally.  Still, this might cause confusion if 
> someone reviews the build logs and thinks this is a more serious problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5995) TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError and dumps heap.

2015-04-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5995:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
> and dumps heap.
> 
>
> Key: HDFS-5995
> URL: https://issues.apache.org/jira/browse/HDFS-5995
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode, test
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing 
> {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685.  This 
> doesn't actually cause the test to fail, because it's a failure test that 
> corrupts an edit log intentionally.  Still, this might cause confusion if 
> someone reviews the build logs and thinks this is a more serious problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498613#comment-14498613
 ] 

Akira AJISAKA commented on HDFS-6153:
-

bq. Actually, I think we should revert the patch since fileId and childrenNum 
are only in HdfsFileStatus but not FileStatus. 
I'm okay with reverting this patch. However, I think it would be better for 
users to document the fields as optional, like the symlink field. What do you 
think?
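
For reference, a GETFILESTATUS response with the extra fields currently looks 
roughly like the following (all values here are made up; only {{childrenNum}} 
and {{fileId}} are the fields in question):

{code}
{
  "FileStatus": {
    "accessTime": 0,
    "blockSize": 0,
    "childrenNum": 2,
    "fileId": 16389,
    "group": "supergroup",
    "length": 0,
    "modificationTime": 1429210000000,
    "owner": "hdfs",
    "pathSuffix": "",
    "permission": "755",
    "replication": 0,
    "type": "DIRECTORY"
  }
}
{code}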

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7994) Detect if resevered EC Block ID is already used

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7994:
--
Hadoop Flags: Reviewed

+1 patch looks good.

> Detect if resevered EC Block ID is already used
> ---
>
> Key: HDFS-7994
> URL: https://issues.apache.org/jira/browse/HDFS-7994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Hui Zheng
> Attachments: HDFS-7994_001.patch, HDFS-7994_002.patch
>
>
> Since random block IDs were supported by some early version of HDFS, the 
> block ID reserved for EC blocks could be already used by some existing blocks 
> in a cluster. During NameNode startup, it detects if there are reserved EC 
> block IDs used by non-EC blocks. If it is the case, NameNode will do an 
> additional blocksMap lookup when there is a miss in a blockGroupsMap lookup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7994) Detect if resevered EC Block ID is already used

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7994:
--
Parent Issue: HDFS-7285  (was: HDFS-8031)

> Detect if resevered EC Block ID is already used
> ---
>
> Key: HDFS-7994
> URL: https://issues.apache.org/jira/browse/HDFS-7994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Hui Zheng
> Attachments: HDFS-7994_001.patch, HDFS-7994_002.patch
>
>
> Since random block IDs were supported by some early version of HDFS, the 
> block ID reserved for EC blocks could be already used by some existing blocks 
> in a cluster. During NameNode startup, it detects if there are reserved EC 
> block IDs used by non-EC blocks. If it is the case, NameNode will do an 
> additional blocksMap lookup when there is a miss in a blockGroupsMap lookup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6153) Document "fileId" and "childrenNum" fields in the FileStatus Json schema

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498630#comment-14498630
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6153:
---

symlink is in FileStatus.  So it is fine.

fileId is not yet a public API; see HDFS-7878.  So we should not expose it in 
WebHDFS.

> Document "fileId" and "childrenNum" fields in the FileStatus Json schema
> 
>
> Key: HDFS-6153
> URL: https://issues.apache.org/jira/browse/HDFS-6153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6153.patch, HDFS-6153.patch
>
>
> Now WebHDFS returns FileStatus Json objects include "fileId" and 
> "childrenNum" fields but these fields are not documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498633#comment-14498633
 ] 

Jitendra Nath Pandey commented on HDFS-8153:


+1. I will commit it shortly.

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to the wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name in the error message is 
> wrong. In this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2015-04-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498636#comment-14498636
 ] 

Jitendra Nath Pandey commented on HDFS-8153:


I will commit it after test results come out satisfactorily.

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to the wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name in the error message is 
> wrong. In this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8082) Separate the client read conf from DFSConfigKeys

2015-04-16 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498639#comment-14498639
 ] 

Haohui Mai commented on HDFS-8082:
--

Looks good. +1

> Separate the client read conf from DFSConfigKeys
> 
>
> Key: HDFS-8082
> URL: https://issues.apache.org/jira/browse/HDFS-8082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8082_20150413.patch, h8082_20150413b.patch
>
>
> A part of HDFS-8050, move dfs.client.read.* conf from DFSConfigKeys to a new 
> class HdfsClientConfigKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7994) Detect if resevered EC Block ID is already used

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-7994.
---
   Resolution: Fixed
Fix Version/s: HDFS-7285

I have committed this.  Thanks, Hui!

> Detect if resevered EC Block ID is already used
> ---
>
> Key: HDFS-7994
> URL: https://issues.apache.org/jira/browse/HDFS-7994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Hui Zheng
> Fix For: HDFS-7285
>
> Attachments: HDFS-7994_001.patch, HDFS-7994_002.patch
>
>
> Since random block IDs were supported by some early version of HDFS, the 
> block ID reserved for EC blocks could be already used by some existing blocks 
> in a cluster. During NameNode startup, it detects if there are reserved EC 
> block IDs used by non-EC blocks. If it is the case, NameNode will do an 
> additional blocksMap lookup when there is a miss in a blockGroupsMap lookup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8088) Reduce the number of HTrace spans generated by HDFS reads

2015-04-16 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498655#comment-14498655
 ] 

Billie Rinaldi commented on HDFS-8088:
--

To add to Josh's data, I ran some tests with and without the HDFS-8026 patch 
(both started from a clean Accumulo instance).  The patch definitely reduced 
the 0ms spans; I saw about 4x improvement.  There are still a lot of 0ms spans, 
though.
{noformat}
Before HDFS-8026:
tserver:DFSOutputStream#writeChunk={type='HDFS', nonzeroCount=5224, 
zeroCount=2564098, numTraces=163, log10SpanLength=[2564098, 5114, 85, 24, 1, 0, 
0]}
After HDFS-8026:
tserver:DFSOutputStream#write={type='HDFS', nonzeroCount=15263, 
zeroCount=2383993, numTraces=667, log10SpanLength=[2383993, 15037, 172, 52, 2, 
0, 0]}
{noformat}
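
For reference, the shape of the change being discussed is roughly the following 
(a sketch with assumed method names, not the actual patch): only the path that 
refills the client buffer from the DataNode opens a span, while reads served 
from already-buffered data stay untraced.

{code}
// Hypothetical sketch, not the HDFS-8088 patch.
int read(byte[] buf, int off, int len) throws IOException {
  if (bufferedBytesRemaining() == 0) {
    TraceScope scope = startTraceSpan("BlockReader#fillBuffer");  // rare path, one span per refill
    try {
      fillBufferFromDataNode();
    } finally {
      scope.close();
    }
  }
  return copyFromBuffer(buf, off, len);  // common path, no span
}
{code}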

> Reduce the number of HTrace spans generated by HDFS reads
> -
>
> Key: HDFS-8088
> URL: https://issues.apache.org/jira/browse/HDFS-8088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-8088.001.patch
>
>
> HDFS generates too many trace spans on read right now.  Every call to read() 
> we make generates its own span, which is not very practical for things like 
> HBase or Accumulo that do many such reads as part of a single operation.  
> Instead of tracing every call to read(), we should only trace the cases where 
> we refill the buffer inside a BlockReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8082) Separate the client read conf from DFSConfigKeys

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8082:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Haohui for reviewing.

I have committed this.

> Separate the client read conf from DFSConfigKeys
> 
>
> Key: HDFS-8082
> URL: https://issues.apache.org/jira/browse/HDFS-8082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: h8082_20150413.patch, h8082_20150413b.patch
>
>
> A part of HDFS-8050, move dfs.client.read.* conf from DFSConfigKeys to a new 
> class HdfsClientConfigKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8082) Separate the client read conf from DFSConfigKeys

2015-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498681#comment-14498681
 ] 

Hudson commented on HDFS-8082:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7599 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7599/])
HDFS-8082. Move dfs.client.read.*, dfs.client.short.circuit.*, 
dfs.client.mmap.* and dfs.client.hedged.read.* conf from DFSConfigKeys to 
HdfsClientConfigKeys. (szetszwo: rev 75bbcc8bf3fa1daf54f56868dae737f6da12ab1f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelShortCircuitReadUnCached.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCacheRevocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelShortCircuitLegacyRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelShortCircuitRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelShortCircuitReadNoChecksum.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestParallelUnixDomainRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUnbuffer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


> Separate the client read conf from DFSConfigKeys
> 
>
> Key: HDFS-8082
> URL: https://issues.apache.org/jira/browse/HDFS-8082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: h8082_20150413.patch, h8082_20150413b.patch
>
>
> A part of HDFS-8050, move dfs.client.read.* conf from DFSConfigKeys to a new 
> class HdfsClientConfigKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8165) Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID to hdfs-client

2015-04-16 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8165:


 Summary: Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID 
to hdfs-client
 Key: HDFS-8165
 URL: https://issues.apache.org/jira/browse/HDFS-8165
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


Some RPC messages (e.g., {{HdfsFileStatus}}) refer to these two constants. This 
jira proposes to move them to the hdfs-client module.
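
For context, a rough sketch of what would move (the target class name below is 
an assumption for illustration and may differ in the patch): both constants are 
markers for blocks and inodes that pre-date generation stamps and inode IDs.

{code}
// Sketch only; the actual class/interface name in the patch may differ.
public interface HdfsConstantsClient {
  // generation stamp assigned to blocks created before generation stamps existed
  long GRANDFATHER_GENERATION_STAMP = 0;
  // inode id used when the real inode id is unknown or pre-dates inode ids
  long GRANDFATHER_INODE_ID = 0;
}
{code}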



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8165) Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID to hdfs-client

2015-04-16 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8165:
-
Attachment: HDFS-8165.000.patch

> Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID to hdfs-client
> 
>
> Key: HDFS-8165
> URL: https://issues.apache.org/jira/browse/HDFS-8165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8165.000.patch
>
>
> Some RPC messages (e.g., {{HdfsFileStatus}}) refer to these two constants. This 
> jira proposes to move them to the hdfs-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8165) Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID to hdfs-client

2015-04-16 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8165:
-
Status: Patch Available  (was: Open)

> Move GRANDFATHER_GENERATION_STAMP and GRANDFATER_INODE_ID to hdfs-client
> 
>
> Key: HDFS-8165
> URL: https://issues.apache.org/jira/browse/HDFS-8165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8165.000.patch
>
>
> Some RPC messages (e.g., {{HdfsFileStatus}}) refer to these two constants. This 
> jira proposes to move them to the hdfs-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8162) Stack trace routed to standard out

2015-04-16 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R reassigned HDFS-8162:
-

Assignee: Kiran Kumar M R

> Stack trace routed to standard out
> --
>
> Key: HDFS-8162
> URL: https://issues.apache.org/jira/browse/HDFS-8162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.5.2
>Reporter: Rod
>Assignee: Kiran Kumar M R
>Priority: Minor
>
> Calling hdfsOpenFile() can generate a stack trace printout to standard out, 
> which can be problematic for a caller program that is making use of standard 
> out. libhdfs stack traces should be routed to standard error.
> Example of stacktrace:
> WARN  [main] hdfs.BlockReaderFactory 
> (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693)) - I/O error 
> constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
>   at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
> 2015-04-16 10:32:13,946 WARN  [main] hdfs.DFSClient 
> (DFSInputStream.java:blockSeekTo(612)) - Failed to connect to /x.x.x.10:50010 
> for block, add to deadNodes and continue. 
> org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
> org.apache.hadoop.net.ConnectTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for connect. ch : 
> java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
>   at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
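
Until libhdfs itself is changed, one workaround on the caller side is to point 
the client log4j configuration at standard error; a minimal sketch (the appender 
name is arbitrary, and which properties file is picked up depends on how the 
caller sets up the classpath):

{code}
# Sketch of a client-side log4j.properties routing Hadoop client logs to stderr.
log4j.rootLogger=WARN, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.Target=System.err
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n
{code}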



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8159) [HDFS-Quota] Verification is not done while setting dir namequota and size

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8159.
-
Resolution: Not A Problem

Hi [~jagadesh.kiran],

Allen is right and this is the correct behavior for the reasons he described.

I am resolving the Jira. If you have a specific concern with this behavior, feel 
free to respond here with more detail or raise it on the mailing list.

(And thank you for the well-written report.)
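
For what it's worth, the current quota and usage of a directory can be inspected 
before setting a new value, e.g.:

{code}
hdfs dfs -count -q /test
{code}

This prints the name quota, remaining name quota, space quota, remaining space 
quota, and the current directory/file counts and content size, which makes it 
easy to sanity-check a new value by hand.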

> [HDFS-Quota] Verification is not done while setting dir namequota and size
> --
>
> Key: HDFS-8159
> URL: https://issues.apache.org/jira/browse/HDFS-8159
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Suse 11 SP3
>Reporter: Jagadesh Kiran N
>Priority: Minor
>
> Name quota and space quota are not verified when setting a new value on a 
> directory that already has subdirectories or contents.
> Below are the steps to reproduce the cases:
> *+Case-1+*
> Step-1) Create a New folder 
> hdfs dfs -mkdir /test
> Step-2) Create sub folders
> hdfs dfs -mkdir /test/one
> hdfs dfs -mkdir /test/two
> hdfs dfs -mkdir /test/three
> Step-3) Set Name Quota as two 
> hdfs dfsadmin  -setQuota 2 /test
> Step-4) The quota is set without validating the existing dirs.
> +Output:+ Even though the name quota value is lower than the existing number of 
> dirs, it is not validated and the new value is allowed.
> +Suggestion:+ Validate the name quota against the number of contents before 
> setting the new value.
> *+Case-2+*
> Step-1) Add any new folder or file; it will give an error message:
> mkdir: The NameSpace quota (directories and files) of directory /test is 
> exceeded: quota=2 file count=5
> Step-2) Clear the Quota 
> hdfs dfsadmin -clrQuota /test
> Step-3) Now set the space quota to less than the folder size 
> hdfs dfsadmin -setSpaceQuota 10 /test
> +Output:+ Even though the space quota value is less than the size of the existing 
> dir contents, it is not validated and the new value is allowed.
> +Suggestion:+ Validate the quota against the used space before setting the 
> new value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8163) Using monotonicNow for block report scheduling causes test failures on recently restarted systems

2015-04-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8163:

Attachment: HDFS-8163.01.patch

Preliminary v01 patch for Jenkins. Not ready for review yet.
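
For illustration, one possible shape of the fix (a sketch only, with placeholder 
names, not this patch): track an explicit due time for the next report instead 
of encoding "send now" as {{lastBlockReport = 0}}, so the check no longer 
depends on how large {{monotonicNow()}} happens to be after a reboot.

{code}
// Sketch, not the actual patch.  Other BPServiceActor fields/helpers elided.
private volatile long nextBlockReportTime = monotonicNow();

void triggerBlockReportForTests() {
  nextBlockReportTime = monotonicNow();          // make the next report due immediately
}

List<DatanodeCommand> blockReport() throws IOException {
  final long startTime = monotonicNow();
  if (startTime < nextBlockReportTime) {
    return null;                                 // timer has not expired yet
  }
  nextBlockReportTime = startTime + dnConf.blockReportInterval;
  return generateAndSendReports();               // placeholder for the existing report logic
}
{code}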

> Using monotonicNow for block report scheduling causes test failures on 
> recently restarted systems
> -
>
> Key: HDFS-8163
> URL: https://issues.apache.org/jira/browse/HDFS-8163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDFS-8163.01.patch
>
>
> {{BPServiceActor#blockReport}} has the following check:
> {code}
>   List blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = monotonicNow();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> {code}
> Many tests trigger an immediate block report via 
> {{BPServiceActor#triggerBlockReportForTests}} which sets {{lastBlockReport = 
> 0}}. However if the machine was restarted recently then startTime may be less 
> than {{dnConf.blockReportInterval}} and the block report is not sent.
> {{Time#monotonicNow}} uses {{System#nanoTime}} which represents time elapsed 
> since an arbitrary origin. The time should be used only for comparison with 
> other values returned by {{System#nanoTime}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

