[jira] [Updated] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8332:
---
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can 
 be called even after the filesystem is closed. Instead, these calls should do 
 {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}
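
 A minimal sketch of the proposed guard (the method bodies and constructor 
 arguments below are assumptions based on DFSClient's existing conventions, not 
 the contents of the attached patch):
 {code}
 public RemoteIterator<CacheDirectiveEntry> listCacheDirectives(
     CacheDirectiveInfo filter) throws IOException {
   checkOpen();  // fail fast with "Filesystem closed" once close() has run
   return new CacheDirectiveIterator(namenode, filter);
 }

 public RemoteIterator<CachePoolEntry> listCachePools() throws IOException {
   checkOpen();  // same guard the other DFSClient operations already use
   return new CachePoolIterator(namenode);
 }
 {code}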



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-05-06 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530170#comment-14530170
 ] 

Yong Zhang commented on HDFS-8112:
--

Hi [~rakeshr], are you still working on it?
I suggest we check whether the user has permission for createErasureCodingZone 
via checkPathAccess with FsAction.ALL, and for getErasureCodingInfo and 
getErasureCodingZoneInfo via checkPathAccess with FsAction.READ.
All three of these APIs are common operations in a multi-tenant scenario.
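
Roughly what I have in mind, as a sketch only (the FSNamesystem fragment below 
assumes the existing checkPathAccess() helper and permission-check flag; it is 
not a worked patch):
{code}
// createErasureCodingZone: rewriting a directory's EC policy is a heavy
// operation, so require full access on the path.
if (isPermissionEnabled) {
  checkPathAccess(pc, src, FsAction.ALL);
}

// getErasureCodingInfo / getErasureCodingZoneInfo: read-only queries,
// so READ access on the path should be enough.
if (isPermissionEnabled) {
  checkPathAccess(pc, src, FsAction.READ);
}
{code}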

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Rakesh R

 We should allow enforcing an authorization policy to protect the 
 administration operations for EC zones and schemas, as such operations can 
 have a large impact on the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530280#comment-14530280
 ] 

Hadoop QA commented on HDFS-8310:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m  9s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  3s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 41s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 40s | Tests passed in 
hadoop-common. |
| | |  40m 29s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730773/HDFS-8310-002.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10830/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10830/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10830/console |


This message was automatically generated.

 Fix TestCLI.testAll help: help for find on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code}

[jira] [Work started] (HDFS-8333) Create EC zone should not need superuser privilege

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8333 started by Yong Zhang.

 Create EC zone should not need superuser privilege
 --

 Key: HDFS-8333
 URL: https://issues.apache.org/jira/browse/HDFS-8333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang

 Creating an EC zone should not need the superuser privilege; for example, in a 
 multi-tenant scenario, common users manage only their own directories and 
 subdirectories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8321) CacheDirectives and CachePool operations should throw RetriableException in safemode

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530210#comment-14530210
 ] 

Hadoop QA commented on HDFS-8321:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 11s | The applied patch generated  1 
new checkstyle issues (total was 275, now 275). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  1s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 170m 24s | Tests failed in hadoop-hdfs. |
| | | 213m 13s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730720/HDFS-8321.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10827/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10827/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10827/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10827/console |


This message was automatically generated.

 CacheDirectives and CachePool operations should throw RetriableException in 
 safemode
 

 Key: HDFS-8321
 URL: https://issues.apache.org/jira/browse/HDFS-8321
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: BB2015-05-TBR
 Attachments: HDFS-8321.000.patch, HDFS-8321.001.patch, 
 HDFS-8321.002.patch


 Operations such as {{addCacheDirectives()}} throw {{SafeModeException}} when 
 the NN is in safemode:
 {code}
   if (isInSafeMode()) {
     throw new SafeModeException(
         "Cannot add cache directive", safeMode);
   }
 {code}
 While other NN operations throw {{RetriableException}} when HA is enabled:
 {code}
   void checkNameNodeSafeMode(String errorMsg)
       throws RetriableException, SafeModeException {
     if (isInSafeMode()) {
       SafeModeException se = new SafeModeException(errorMsg, safeMode);
       if (haEnabled && haContext != null
           && haContext.getState().getServiceState() == HAServiceState.ACTIVE
           && shouldRetrySafeMode(this.safeMode)) {
         throw new RetriableException(se);
       } else {
         throw se;
       }
     }
   }
 {code}
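
 Presumably the fix is to route the cache operations through that same helper, 
 e.g. (a sketch, not the attached patch):
 {code}
   // before: unconditional SafeModeException in safemode
   // after: retriable when the active NN is still leaving safemode
   checkNameNodeSafeMode("Cannot add cache directive");
 {code}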



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-06 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530154#comment-14530154
 ] 

Walter Su commented on HDFS-7621:
-

002 patch is ready for review.

 Erasure Coding: update the Balancer/Mover data migration logic
 --

 Key: HDFS-7621
 URL: https://issues.apache.org/jira/browse/HDFS-7621
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Walter Su
 Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch


 Currently the Balancer/Mover only considers the distribution of replicas of 
 the same block during data migration: the migration cannot decrease the 
 number of racks. With EC the Balancer and Mover should also take into account 
 the distribution of blocks belonging to the same block group.
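
 As an illustration of the extra constraint (the method and names below are 
 invented for this sketch, not taken from the attached patches):
 {code}
 // Internal blocks of one EC block group must stay on distinct nodes, so a
 // move target is only acceptable if it holds no other block of the group.
 boolean isGoodTargetForStripedBlock(DatanodeInfo target,
     Set<DatanodeInfo> blockGroupLocations) {
   return !blockGroupLocations.contains(target);
 }
 {code}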



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HDFS-8333) Create EC zone should not need superuser privilege

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8333 stopped by Yong Zhang.

 Create EC zone should not need superuser privilege
 --

 Key: HDFS-8333
 URL: https://issues.apache.org/jira/browse/HDFS-8333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang

 Creating an EC zone should not need the superuser privilege; for example, in a 
 multi-tenant scenario, common users manage only their own directories and 
 subdirectories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8332:
---
Attachment: HDFS-8332-000.patch

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can 
 be called even after the filesystem is closed. Instead, these calls should do 
 {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530276#comment-14530276
 ] 

Rakesh R commented on HDFS-5640:


The following checkstyle comment and the test case failure are unrelated to my 
patch.
{code}
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java:1:
 File length is 2,718 lines (max allowed is 2,000).
{code}

 Add snapshot methods to FileContext.
 

 Key: HDFS-5640
 URL: https://issues.apache.org/jira/browse/HDFS-5640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, snapshots
Affects Versions: 3.0.0, 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
 HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch


 Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
 For feature parity, these methods need to be added to {{FileContext}}.  This 
 would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
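
 For one of the methods, the delegation chain would look roughly like this 
 (modelled on FileContext's existing FSLinkResolver pattern; details may differ 
 from the attached patches):
 {code}
 // FileContext: resolve symlinks, then delegate to the AbstractFileSystem.
 public Path createSnapshot(final Path path, final String snapshotName)
     throws IOException {
   final Path absF = fixRelativePart(path);
   return new FSLinkResolver<Path>() {
     @Override
     public Path next(final AbstractFileSystem fs, final Path p)
         throws IOException {
       return fs.createSnapshot(p, snapshotName);  // Hdfs subclass forwards to DFSClient
     }
   }.resolve(this, absF);
 }
 {code}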



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3384) DataStreamer thread should be closed immediately when failed to setup a PipelineForAppendOrRecovery

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530275#comment-14530275
 ] 

Hadoop QA commented on HDFS-3384:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 27s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 30s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 12s | The applied patch generated  3 
new checkstyle issues (total was 92, now 94). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  2s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 43s | Tests failed in hadoop-hdfs. |
| | | 210m 12s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730731/HDFS-3384-3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10829/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10829/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10829/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10829/console |


This message was automatically generated.

 DataStreamer thread should be closed immediately when failed to setup a 
 PipelineForAppendOrRecovery
 --

 Key: HDFS-3384
 URL: https://issues.apache.org/jira/browse/HDFS-3384
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3384-3.patch, HDFS-3384.patch, HDFS-3384_2.patch, 
 HDFS-3384_2.patch, HDFS-3384_2.patch


 Scenario:
 =========
 1. Write a file.
 2. Corrupt a block manually.
 3. Call append.
 {noformat}
 2012-04-19 09:33:10,776 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:createBlockOutputStream(1059)) - Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
   at 
 org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1039)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:939)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) 
 - DataStreamer Exception
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient 
 (DFSOutputStream.java:hflush(1511)) - Error while syncing
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 {noformat}

[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530134#comment-14530134
 ] 

Rakesh R commented on HDFS-8332:


Attached a patch that checks whether the file system is open before performing 
the {{cache}} operations.

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can 
 be called even after the filesystem is closed. Instead, these calls should do 
 {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530155#comment-14530155
 ] 

Hadoop QA commented on HDFS-5640:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 31s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 13s | The applied patch generated  1 
new checkstyle issues (total was 67, now 67). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 42s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 49s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 167m 32s | Tests failed in hadoop-hdfs. |
| | | 231m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730711/HDFS-5640-005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10826/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10826/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10826/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10826/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10826/console |


This message was automatically generated.

 Add snapshot methods to FileContext.
 

 Key: HDFS-5640
 URL: https://issues.apache.org/jira/browse/HDFS-5640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, snapshots
Affects Versions: 3.0.0, 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
 HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch


 Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
 For feature parity, these methods need to be added to {{FileContext}}.  This 
 would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Compute storage type quotas in INodeFile.computeQuotaDeltaForTruncate()

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530232#comment-14530232
 ] 

Hadoop QA commented on HDFS-8327:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 15s | The applied patch generated  2 
new checkstyle issues (total was 380, now 381). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  8s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 49s | Tests failed in hadoop-hdfs. |
| | | 212m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.namenode.TestCommitBlockSynchronization |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730725/HDFS-8327.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10828/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10828/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10828/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10828/console |


This message was automatically generated.

 Compute storage type quotas in INodeFile.computeQuotaDeltaForTruncate()
 ---

 Key: HDFS-8327
 URL: https://issues.apache.org/jira/browse/HDFS-8327
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: BB2015-05-TBR
 Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
 HDFS-8327.002.patch


 To simplify the code, {{INodeFile.computeQuotaDeltaForTruncate()}} can compute 
 the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8331) Erasure Coding: Create FileStatus isErasureCoded() method

2015-05-06 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529987#comment-14529987
 ] 

Yong Zhang commented on HDFS-8331:
--

Hi [~rakeshr], I think this API is duplicated by HDFS-8289.

 Erasure Coding: Create FileStatus isErasureCoded() method
 -

 Key: HDFS-8331
 URL: https://issues.apache.org/jira/browse/HDFS-8331
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 The idea of this jira is to discuss the need for a 
 {{FileStatus#isErasureCoded()}} API. This is just an initial thought; 
 presently the use case/necessity of this is not clear. We will probably 
 revisit this once the feature has matured.
 Thanks [~umamaheswararao], [~vinayrpet], [~zhz] for the offline discussions.
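
 If it is added, it could mirror the existing encryption flag, which 
 {{FileStatus}} derives from a permission bit (the erasure-coded bit below is 
 hypothetical, named by analogy with {{getEncryptedBit()}}):
 {code}
 // FileStatus, by analogy with isEncrypted():
 public boolean isErasureCoded() {
   return permission.getErasureCodedBit();  // hypothetical counterpart of getEncryptedBit()
 }
 {code}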



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8289) DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang reassigned HDFS-8289:


Assignee: Yong Zhang  (was: Jing Zhao)

 DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo
 -

 Key: HDFS-8289
 URL: https://issues.apache.org/jira/browse/HDFS-8289
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Yong Zhang

 {code}
 // ECInfo is restored from NN just before writing striped files.
 ecInfo = dfsClient.getErasureCodingInfo(src);
 {code}
 The rpc call above can be avoided by adding ECSchema to HdfsFileStatus.
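
 Roughly, the idea is the following (field and accessor names below are 
 assumptions, not the eventual patch):
 {code}
 // HdfsFileStatus carries the schema back from the NN on create()/getFileInfo():
 private final ECSchema ecSchema;

 public ECSchema getECSchema() {
   return ecSchema;
 }

 // DFSStripedOutputStream then reads it from the status it already holds,
 // instead of issuing the extra getErasureCodingInfo(src) RPC:
 ECSchema schema = stat.getECSchema();
 {code}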



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7621:

Attachment: HDFS-7621.002.patch

 Erasure Coding: update the Balancer/Mover data migration logic
 --

 Key: HDFS-7621
 URL: https://issues.apache.org/jira/browse/HDFS-7621
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Walter Su
 Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch


 Currently the Balancer/Mover only considers the distribution of replicas of 
 the same block during data migration: the migration cannot decrease the 
 number of racks. With EC the Balancer and Mover should also take into account 
 the distribution of blocks belonging to the same block group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows

2015-05-06 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HDFS-8310:
--
Labels: BB2015-05-RFC  (was: BB2015-05-TBR)

 Fix TestCLI.testAll help: help for find on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8289) DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8289:
-
Attachment: HDFS-8289.000.patch

Initial patch, please review

 DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo
 -

 Key: HDFS-8289
 URL: https://issues.apache.org/jira/browse/HDFS-8289
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Yong Zhang
 Attachments: HDFS-8289.000.patch


 {code}
 // ECInfo is restored from NN just before writing striped files.
 ecInfo = dfsClient.getErasureCodingInfo(src);
 {code}
 The rpc call above can be avoided by adding ECSchema to HdfsFileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3384) DataStreamer thread should be closed immediately when failed to setup a PipelineForAppendOrRecovery

2015-05-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529989#comment-14529989
 ] 

Uma Maheswara Rao G commented on HDFS-3384:
---

Sorry for taking longer to push this; I missed this JIRA when committing. 
Since amith is not active now, let me produce a new patch based on the latest 
changes.

 DataStreamer thread should be closed immediately when failed to setup a 
 PipelineForAppendOrRecovery
 --

 Key: HDFS-3384
 URL: https://issues.apache.org/jira/browse/HDFS-3384
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3384.patch, HDFS-3384_2.patch, HDFS-3384_2.patch, 
 HDFS-3384_2.patch


 Scenario:
 =========
 1. Write a file.
 2. Corrupt a block manually.
 3. Call append.
 {noformat}
 2012-04-19 09:33:10,776 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:createBlockOutputStream(1059)) - Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
   at 
 org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1039)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:939)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) 
 - DataStreamer Exception
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient 
 (DFSOutputStream.java:hflush(1511)) - Error while syncing
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8333) Create EC zone should not need superuser privilege

2015-05-06 Thread Yong Zhang (JIRA)
Yong Zhang created HDFS-8333:


 Summary: Create EC zone should not need superuser privilege
 Key: HDFS-8333
 URL: https://issues.apache.org/jira/browse/HDFS-8333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang


Creating an EC zone should not need the superuser privilege; for example, in a 
multi-tenant scenario, common users manage only their own directories and 
subdirectories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3384) DataStreamer thread should be closed immediately when failed to setup a PipelineForAppendOrRecovery

2015-05-06 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3384:
--
Attachment: HDFS-3384-3.patch

Attached a new patch on the latest trunk!

 DataStreamer thread should be closed immediately when failed to setup a 
 PipelineForAppendOrRecovery
 --

 Key: HDFS-3384
 URL: https://issues.apache.org/jira/browse/HDFS-3384
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3384-3.patch, HDFS-3384.patch, HDFS-3384_2.patch, 
 HDFS-3384_2.patch, HDFS-3384_2.patch


 Scenario:
 =========
 1. Write a file.
 2. Corrupt a block manually.
 3. Call append.
 {noformat}
 2012-04-19 09:33:10,776 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:createBlockOutputStream(1059)) - Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
   at 
 org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1039)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:939)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) 
 - DataStreamer Exception
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient 
 (DFSOutputStream.java:hflush(1511)) - Error while syncing
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8331) Erasure Coding: Create FileStatus isErasureCoded() method

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530007#comment-14530007
 ] 

Rakesh R commented on HDFS-8331:


Thanks [~zhangyongxyz] for the interest. I have gone through HDFS-8289. IIUC, 
that one is about the {{HdfsFileStatus#getECSchema}} API, whereas this jira is 
about a completely different API, {{FileStatus#isErasureCoded()}}, which is 
similar to {{FileStatus#isEncrypted()}}. I think the scopes of these two jira 
issues are different. Does this make sense to you?

 Erasure Coding: Create FileStatus isErasureCoded() method
 -

 Key: HDFS-8331
 URL: https://issues.apache.org/jira/browse/HDFS-8331
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 The idea of this jira is to discuss the need for a 
 {{FileStatus#isErasureCoded()}} API. This is just an initial thought; 
 presently the use case/necessity of this is not clear. We will probably 
 revisit this once the feature has matured.
 Thanks [~umamaheswararao], [~vinayrpet], [~zhz] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8332:
--

 Summary: DistributedFileSystem listCacheDirectives() and 
listCachePools() API calls should check filesystem closed
 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R


I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can 
be called even after the filesystem is closed. Instead, these calls should do 
{{checkOpen}} and throw:
{code}
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows

2015-05-06 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HDFS-8310:
--
Attachment: HDFS-8310-002.patch

 Fix TestCLI.testAll help: help for find on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows

2015-05-06 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530125#comment-14530125
 ] 

Kiran Kumar M R commented on HDFS-8310:
---

Thanks for the review, Xiaoyu. I have added the space and attached the patch.

 Fix TestCLI.testAll help: help for find on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8331) Erasure Coding: Create FileStatus isErasureCoded() method

2015-05-06 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530013#comment-14530013
 ] 

Yong Zhang commented on HDFS-8331:
--

OK, thanks

 Erasure Coding: Create FileStatus isErasureCoded() method
 -

 Key: HDFS-8331
 URL: https://issues.apache.org/jira/browse/HDFS-8331
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 The idea of this jira is to discuss the need for a 
 {{FileStatus#isErasureCoded()}} API. This is just an initial thought; 
 presently the use case/necessity of this is not clear. We will probably 
 revisit this once the feature has matured.
 Thanks [~umamaheswararao], [~vinayrpet], [~zhz] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-05-06 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530441#comment-14530441
 ] 

Walter Su commented on HDFS-7980:
-

reposted 004 patch.

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
  Labels: BB2015-05-TBR
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch, 
 HDFS-7980.003.patch, HDFS-7980.004.patch, HDFS-7980.004.repost.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it will slow down the startup of the namenode 
 by more than one hour.
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}
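
 One possible direction, purely for illustration ({{hasPendingIncrementalBR()}} 
 is a hypothetical helper, and this is not necessarily what the attached 
 patches do), is to skip the flush when nothing is pending:
 {code}
   // Only pay the extra NN round trip when an incremental report is
   // actually pending; at startup it is usually empty.
   if (hasPendingIncrementalBR()) {
     reportReceivedDeletedBlocks();
     lastDeletedReport = startTime;
   }
 {code}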



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8037) WebHDFS: CheckAccess silently accepts certain malformed FsActions

2015-05-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8037:

Attachment: HDFS-8037.003.patch

 WebHDFS: CheckAccess silently accepts certain malformed FsActions
 -

 Key: HDFS-8037
 URL: https://issues.apache.org/jira/browse/HDFS-8037
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Jake Low
Assignee: Walter Su
Priority: Minor
  Labels: BB2015-05-TBR, easyfix, newbie
 Attachments: HDFS-8037.001.patch, HDFS-8037.002.patch, 
 HDFS-8037.003.patch


 WebHDFS's {{CHECKACCESS}} operation accepts a parameter called {{fsaction}}, 
 which represents the type(s) of access to check for.
 According to the documentation, and also the source code, the domain of 
 {{fsaction}} is the set of strings matched by the regex {{\[rwx-\]{3\}}}. 
 This domain is wider than the set of valid {{FsAction}} objects, because it 
 doesn't guarantee sensible ordering of access types. For example, the strings 
 {{rxw}} and {{--r}} are valid {{fsaction}} parameter values, but don't 
 correspond to valid {{FsAction}} instances.
 The result is that WebHDFS silently accepts {{fsaction}} parameter values 
 which don't match any valid {{FsAction}} instance, but doesn't actually 
 perform any permissions checking in this case.
 For example, here's a {{CHECKACCESS}} call where we request {{rw-}} access 
 on a file which we only have permission to read and execute. It raises an 
 exception, as it should.
 {code:none}
 curl -i -X GET 
 "http://localhost:50070/webhdfs/v1/myfile?op=CHECKACCESS&user.name=nobody&fsaction=rw-"
 HTTP/1.1 403 Forbidden
 Content-Type: application/json
 {
   "RemoteException": {
     "exception": "AccessControlException",
     "javaClassName": "org.apache.hadoop.security.AccessControlException",
     "message": "Permission denied: user=nobody, access=READ_WRITE, 
 inode=\"/myfile\":root:supergroup:drwxr-xr-x"
   }
 }
 {code}
 But if we instead request {{r-w}} access, the call appears to succeed:
 {code:none}
 curl -i -X GET 
 "http://localhost:50070/webhdfs/v1/myfile?op=CHECKACCESS&user.name=nobody&fsaction=r-w"
 HTTP/1.1 200 OK
 Content-Length: 0
 {code}
 As I see it, the fix would be to change the regex pattern in 
 {{FsActionParam}} to something like {{\[r-\]\[w-\]\[x-\]}}.
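 A quick standalone check of the proposed pattern (a sketch; {{FS_ACTION}} is
 just a local name here, not the real {{FsActionParam}} constant) shows it
 rejects the malformed orderings while keeping the valid ones:
 {code}
 import java.util.regex.Pattern;

 public class FsActionRegexCheck {
   // Proposed stricter pattern: each position admits only its own letter or '-'.
   private static final Pattern FS_ACTION = Pattern.compile("[r-][w-][x-]");

   public static void main(String[] args) {
     // "r-x" and "rw-" match; "rxw", "--r" and "r-w" are rejected.
     for (String s : new String[] {"r-x", "rw-", "rxw", "--r", "r-w"}) {
       System.out.println(s + " -> " + FS_ACTION.matcher(s).matches());
     }
   }
 }
 {code}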



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7310) Mover can give first priority to local DN if it has target storage type available in local DN

2015-05-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530370#comment-14530370
 ] 

Uma Maheswara Rao G commented on HDFS-7310:
---

Thank you Nicholas for helping Surendra with the fix and review. Nice finding, 
Surendra! - Thanks

 Mover can give first priority to local DN if it has target storage type 
 available in local DN
 -

 Key: HDFS-7310
 URL: https://issues.apache.org/jira/browse/HDFS-7310
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer  mover
Affects Versions: 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7310-001.patch, HDFS-7310-002.patch, 
 HDFS-7310-003.patch, HDFS-7310-004.patch


 Currently the Mover logic may move blocks to any DN which has the target 
 storage type. But if the src DN has the target storage type then the mover 
 can give highest priority to the local DN. If the local DN does not contain 
 the target storage type, then it can assign to any DN as the current logic 
 does.
   This is a thought; I have not gone through the code fully yet.
 Thoughts?
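 As a standalone sketch of the proposed priority (the types and names here are
 illustrative, not the Mover's real internals):
 {code}
 import java.util.Map;
 import java.util.Set;

 public class LocalFirstTargetChoice {
   /** Prefer the local DN when it offers the wanted storage type. */
   static String chooseTarget(String localDn,
       Map<String, Set<String>> storageTypesByDn, String wanted) {
     Set<String> local = storageTypesByDn.get(localDn);
     if (local != null && local.contains(wanted)) {
       return localDn;                       // highest priority: no network copy
     }
     for (Map.Entry<String, Set<String>> e : storageTypesByDn.entrySet()) {
       if (e.getValue().contains(wanted)) {
         return e.getKey();                  // fall back to any DN, as today
       }
     }
     return null;                            // no DN offers the wanted type
   }
 }
 {code}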



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8037) WebHDFS: CheckAccess silently accepts certain malformed FsActions

2015-05-06 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530446#comment-14530446
 ] 

Walter Su commented on HDFS-8037:
-

The 003 patch adds a unit test. Please review.

 WebHDFS: CheckAccess silently accepts certain malformed FsActions
 -

 Key: HDFS-8037
 URL: https://issues.apache.org/jira/browse/HDFS-8037
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Jake Low
Assignee: Walter Su
Priority: Minor
  Labels: BB2015-05-TBR, easyfix, newbie
 Attachments: HDFS-8037.001.patch, HDFS-8037.002.patch, 
 HDFS-8037.003.patch


 WebHDFS's {{CHECKACCESS}} operation accepts a parameter called {{fsaction}}, 
 which represents the type(s) of access to check for.
 According to the documentation, and also the source code, the domain of 
 {{fsaction}} is the set of strings matched by the regex {{\[rwx-\]{3\}}}. 
 This domain is wider than the set of valid {{FsAction}} objects, because it 
 doesn't guarantee sensible ordering of access types. For example, the strings 
 {{rxw}} and {{--r}} are valid {{fsaction}} parameter values, but don't 
 correspond to valid {{FsAction}} instances.
 The result is that WebHDFS silently accepts {{fsaction}} parameter values 
 which don't match any valid {{FsAction}} instance, but doesn't actually 
 perform any permissions checking in this case.
 For example, here's a {{CHECKACCESS}} call where we request {{rw-}} access 
 on a file which we only have permission to read and execute. It raises an 
 exception, as it should.
 {code:none}
 curl -i -X GET 
 "http://localhost:50070/webhdfs/v1/myfile?op=CHECKACCESS&user.name=nobody&fsaction=rw-"
 HTTP/1.1 403 Forbidden
 Content-Type: application/json
 {
   "RemoteException": {
     "exception": "AccessControlException",
     "javaClassName": "org.apache.hadoop.security.AccessControlException",
     "message": "Permission denied: user=nobody, access=READ_WRITE, 
 inode=\"/myfile\":root:supergroup:drwxr-xr-x"
   }
 }
 {code}
 But if we instead request {{r-w}} access, the call appears to succeed:
 {code:none}
 curl -i -X GET 
 "http://localhost:50070/webhdfs/v1/myfile?op=CHECKACCESS&user.name=nobody&fsaction=r-w"
 HTTP/1.1 200 OK
 Content-Length: 0
 {code}
 As I see it, the fix would be to change the regex pattern in 
 {{FsActionParam}} to something like {{\[r-\]\[w-\]\[x-\]}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-05-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: HDFS-7980.004.repost.patch

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
  Labels: BB2015-05-TBR
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch, 
 HDFS-7980.003.patch, HDFS-7980.004.patch, HDFS-7980.004.repost.patch


 In the current implementation the datanode will call the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousands of datanodes) and busy cluster it will slow down the startup of 
 the namenode by more than one hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8219) setStoragePolicy with folder behavior is different after cluster restart

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530501#comment-14530501
 ] 

Hudson commented on HDFS-8219:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HDFS-8219. setStoragePolicy with folder behavior is different after cluster 
restart. (surendra singh lilhore via Xiaoyu Yao) (xyao: rev 
0100b155019496d077f958904de7d385697d65d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


 setStoragePolicy with folder behavior is different after cluster restart
 

 Key: HDFS-8219
 URL: https://issues.apache.org/jira/browse/HDFS-8219
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Peter Shi
Assignee: surendra singh lilhore
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: HDFS-8219.patch, HDFS-8219.unittest-norepro.patch


 Reproduce steps.
 1) mkdir named /temp
 2) put one file A under /temp
 3) change /temp storage policy to COLD
 4) use -getStoragePolicy to query file A's storage policy; it is the same as 
 /temp's
 5) change the /temp folder storage policy again; file A's storage policy 
 stays the same as the parent folder's.
 Then restart the cluster and do 3) and 4) again: file A's storage policy does 
 not change while the parent folder's storage policy changes. It behaves 
 differently.
 While debugging, I found the following code in 
 INodeFile.getStoragePolicyID:
 {code}
   public byte getStoragePolicyID() {
 byte id = getLocalStoragePolicyID();
 if (id == BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
   return this.getParent() != null ?
   this.getParent().getStoragePolicyID() : id;
 }
 return id;
   }
 {code}
 If the file does not have its own storage policy, it uses the parent's. But 
 after a cluster restart, the file turns out to have its own storage policy.
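 A standalone model of that fallback (a sketch; the sentinel value is an
 assumption matching BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) makes the failure
 mode concrete: once a restart persists the resolved byte instead of the
 sentinel, later parent changes stop propagating.
 {code}
 public class PolicyFallbackModel {
   static final byte UNSPECIFIED = 0;   // assumed sentinel value

   /** Effective policy: inherit from the parent only while unspecified. */
   static byte effectivePolicy(byte filePolicy, byte parentPolicy) {
     return filePolicy == UNSPECIFIED ? parentPolicy : filePolicy;
   }

   public static void main(String[] args) {
     byte cold = 2, hot = 7;
     // Before restart: the file is UNSPECIFIED, so it tracks the parent.
     System.out.println(effectivePolicy(UNSPECIFIED, cold));  // 2
     // After restart the file "has its own" policy (cold), so a later
     // parent change to hot no longer shows through.
     System.out.println(effectivePolicy(cold, hot));          // 2
   }
 }
 {code}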



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8314) Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the users

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530492#comment-14530492
 ] 

Hudson commented on HDFS-8314:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HDFS-8314. Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE 
to the users. Contributed by Li Lu. (wheat9: rev 
4da8490b512a33a255ed27309860859388d7c168)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the 
 users
 ---

 Key: HDFS-8314
 URL: https://issues.apache.org/jira/browse/HDFS-8314
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HDFS-8314-trunk.001.patch, HDFS-8314-trunk.002.patch, 
 HDFS-8314-trunk.003.patch, HDFS-8314-trunk.004.patch


 Currently HdfsServerConstants reads the configuration to set the values of 
 IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE, so they are configurable instead 
 of being constants.
 This jira proposes to move these two variables to the users in the 
 upper-level so that HdfsServerConstants only stores constant values.
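 For example, a caller-side read of the buffer size might look like this (a
 sketch using the standard Hadoop config key; the surrounding wiring is
 assumed):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

 public class BufferSizeAtCaller {
   public static void main(String[] args) {
     Configuration conf = new Configuration();
     // Callers read "io.file.buffer.size" themselves instead of relying
     // on a mutable value stored in HdfsServerConstants.
     int ioFileBufferSize = conf.getInt(
         CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
         CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
     System.out.println("io.file.buffer.size = " + ioFileBufferSize);
   }
 }
 {code}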



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530498#comment-14530498
 ] 

Hudson commented on HDFS-8305:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HDFS-8305: HDFS INotify: the destination field of RenameOp should always end 
with the file name (cmccabe) (cmccabe: rev 
fcd4cb751665adb241081e42b3403c3856b6c6fe)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.1

 Attachments: HDFS-8305.001.patch, HDFS-8305.002.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.  Previously, in some 
 cases when using the old rename, this was not the case.  The format of 
 OP_EDIT_LOG_RENAME_OLD allows moving /f to /d/f to be represented as 
 RENAME(src=/f, dst=/d) or RENAME(src=/f, dst=/d/f). This change makes HDFS 
 always use the latter form. This, in turn, ensures that inotify will always 
 be able to consider the dst field as the full destination file name. This is 
 a compatible change since we aren't removing the ability to handle the first 
 form during edit log replay... we just no longer generate it.
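 The normalization described above can be sketched as follows (a hypothetical
 helper, not the patch itself):
 {code}
 /** If the old-style rename recorded only the target directory,
  *  append the source base name so dst always names the full path. */
 static String normalizeRenameDst(String src, String dst, boolean dstIsDir) {
   if (!dstIsDir) {
     return dst;                                   // already the full name
   }
   String name = src.substring(src.lastIndexOf('/') + 1);
   return dst.endsWith("/") ? dst + name : dst + "/" + name;
 }
 // normalizeRenameDst("/f", "/d", true)    -> "/d/f"
 // normalizeRenameDst("/f", "/d/f", false) -> "/d/f"
 {code}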



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7847) Modify NNThroughputBenchmark to be able to operate on a remote NameNode

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530500#comment-14530500
 ] 

Hudson commented on HDFS-7847:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote 
NameNode (Charles Lamb via Colin P. McCabe) (cmccabe: rev 
ffce9a3413277a69444fcb890460c885de56db69)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


 Modify NNThroughputBenchmark to be able to operate on a remote NameNode
 ---

 Key: HDFS-7847
 URL: https://issues.apache.org/jira/browse/HDFS-7847
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Fix For: 2.8.0

 Attachments: HDFS-7847.000.patch, HDFS-7847.001.patch, 
 HDFS-7847.002.patch, HDFS-7847.003.patch, HDFS-7847.004.patch, 
 HDFS-7847.005.patch, make_blocks.tar.gz


 Modify NNThroughputBenchmark to be able to operate on a NN that is not 
 in-process. A follow-on JIRA will modify it some more to allow quantifying 
 native and Java heap sizes, and some latency numbers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7758) Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530499#comment-14530499
 ] 

Hudson commented on HDFS-7758:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HDFS-7758. Retire FsDatasetSpi#getVolumes() and use 
FsDatasetSpi#getVolumeRefs() instead (Lei (Eddy) Xu via Colin P. McCabe) 
(cmccabe: rev 24d3a2d4fdd836ac9a5bc755a7fb9354f7a582b1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java


 Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead
 -

 Key: HDFS-7758
 URL: https://issues.apache.org/jira/browse/HDFS-7758
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.8.0

 Attachments: HDFS-7758.000.patch, HDFS-7758.001.patch, 
 HDFS-7758.002.patch, HDFS-7758.003.patch, HDFS-7758.004.patch, 
 HDFS-7758.005.patch, HDFS-7758.006.patch, HDFS-7758.007.patch, 
 HDFS-7758.008.patch, HDFS-7758.010.patch


 HDFS-7496 introduced reference counting of the volume instances being used, 
 to prevent race conditions when hot-swapping a volume.
 However, {{FsDatasetSpi#getVolumes()}} can still leak a volume instance 
 without increasing its reference count. In this JIRA, we retire 
 {{FsDatasetSpi#getVolumes()}} and propose {{FsDatasetSpi#getVolumeRefs()}} 
 and similar methods to access {{FsVolume}}. This makes sure that a consumer 
 of {{FsVolume}} always has a correct reference count.
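 The intended consumer pattern is roughly the following (a sketch;
 {{getVolumeRef}} here stands in for whatever accessor the patch settles on):
 {code}
 // FsVolumeReference is Closeable, so try-with-resources releases the
 // counted reference even on exceptions; the volume cannot be removed
 // by a concurrent hot-swap while the reference is held.
 try (FsVolumeReference ref = dataset.getVolumeRef(volumeIndex)) {
   FsVolumeSpi volume = ref.getVolume();
   // ... use the volume safely within this scope ...
 }
 {code}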



[jira] [Commented] (HDFS-8113) NullPointerException in BlockInfoContiguous causes block report failure

2015-05-06 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530591#comment-14530591
 ] 

Walter Su commented on HDFS-8113:
-

The patch is good.

Hi, [~chengbing.liu], have you tried restarting the NN? I think you have. The 
fsimage saves files, and the block IDs belonging to those files.
When the fsimage is loaded, before the first block report and while the NN is 
in safe mode, each block should belong to some file, because each stored 
BlockInfo is created from an INodeFile proto; it is impossible for the NN to 
have orphan blocks at that point. Should we try to find a null bc here?
When the first block reports have finished, I think we should try looking for 
a null bc again.


 NullPointerException in BlockInfoContiguous causes block report failure
 ---

 Key: HDFS-8113
 URL: https://issues.apache.org/jira/browse/HDFS-8113
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
  Labels: BB2015-05-TBR
 Attachments: HDFS-8113.02.patch, HDFS-8113.patch


 The following copy constructor can throw NullPointerException if {{bc}} is 
 null.
 {code}
   protected BlockInfoContiguous(BlockInfoContiguous from) {
 this(from, from.bc.getBlockReplication());
 this.bc = from.bc;
   }
 {code}
 We have observed that some DataNodes keeps failing doing block reports with 
 NameNode. The stacktrace is as follows. Though we are not using the latest 
 version, the problem still exists.
 {quote}
 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 RemoteException in offerService
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.<init>(BlockInfo.java:80)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.<init>(BlockManager.java:1696)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 {quote}
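 A minimal null-guarded variant of that copy constructor (a sketch, not the
 attached patch; the fallback constant is hypothetical):
 {code}
 protected BlockInfoContiguous(BlockInfoContiguous from) {
   // Guard against a source block that is not (yet) attached to a
   // BlockCollection; fall back to a default replication instead of NPE.
   this(from, from.bc == null
       ? DEFAULT_REPLICATION          // hypothetical fallback value
       : from.bc.getBlockReplication());
   this.bc = from.bc;
 }
 {code}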



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7621:

Labels: HDFS-7285  (was: )

 Erasure Coding: update the Balancer/Mover data migration logic
 --

 Key: HDFS-7621
 URL: https://issues.apache.org/jira/browse/HDFS-7621
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Walter Su
  Labels: HDFS-7285
 Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch


 Currently the Balancer/Mover only considers the distribution of replicas of 
 the same block during data migration: the migration cannot decrease the 
 number of racks. With EC the Balancer and Mover should also take into account 
 the distribution of blocks belonging to the same block group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530515#comment-14530515
 ] 

Rakesh R commented on HDFS-8112:


Thank you [~zhangyongxyz] for the comments and bringing up the use case.

IIUC you are saying the ErasureCoding APIs can check user permissions against 
the ACLs of the FSDirectory, and that we can define the file system actions 
(r, w, etc.) per EC operation. When raising this jira, [~drankye]'s idea was 
to enforce a protection policy at the protocol layer via [Hadoop Service Level 
Authorization|https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html],
 which ensures that only privileged users/admins are able to perform the 
operations. Initially, for this discussion, we thought all DFS commands for EC 
would be in the client protocol. But on second thought, new APIs may come in 
other protocols as well. So we decided to take up this jira later (it could be 
left for other issues or discussions), which is the reason I didn't give it 
much focus. I see that today you have raised HDFS-8333 to discuss the 
create-EC-zone API user privileges. We could probably listen to the thoughts 
from others and take up this task accordingly.

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Rakesh R

 We should allow to enforce authorization policy to protect administration 
 operations for EC zone and schemas as such behaviors would impact too much 
 for a system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-05-06 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530546#comment-14530546
 ] 

Yong Zhang commented on HDFS-8112:
--

Thank you [~rakeshr] for clarifying the background of this jira.
These days I am working on the append feature and wanted an EC file to test 
append against, and I found that creating an EC zone needs superuser 
privilege, which does not fit the multi-tenant user scenario. So I submitted 
HDFS-8333, but then found that you had already worked on this, so I wanted to 
discuss it with you.


 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Rakesh R

 We should allow to enforce authorization policy to protect administration 
 operations for EC zone and schemas as such behaviors would impact too much 
 for a system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8296) BlockManager.getUnderReplicatedBlocksCount() is not giving correct count if namenode in safe mode.

2015-05-06 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530585#comment-14530585
 ] 

surendra singh lilhore commented on HDFS-8296:
--

bq. BlockManager.java:1: File length is 3,821 lines (max allowed is 2,000).

The checkstyle warning is because of the file length...

  BlockManager.getUnderReplicatedBlocksCount() is not giving correct count if 
 namenode in safe mode.
 ---

 Key: HDFS-8296
 URL: https://issues.apache.org/jira/browse/HDFS-8296
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
  Labels: BB2015-05-TBR
 Attachments: HDFS-8296.patch


 {{underReplicatedBlocksCount}} is updated by the {{updateState()}} API:
 {code}
  void updateState() {
 pendingReplicationBlocksCount = pendingReplications.size();
 underReplicatedBlocksCount = neededReplications.size();
 corruptReplicaBlocksCount = corruptReplicas.size();
   }
  {code}
  But this will not be called when the NN is in safe mode. This happens 
 because {{computeDatanodeWork()}} returns 0 if the NN is in safe mode:
  {code}
   int computeDatanodeWork() {
.
 if (namesystem.isInSafeMode()) {
   return 0;
 }
 
 
 this.updateState();
 
 
   }
  {code}
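 One possible fix, sketched below (not necessarily the attached patch):
 refresh the counters before the safe-mode early return, so the count stays
 current even while in safe mode.
 {code}
 int computeDatanodeWork() {
   // Moved ahead of the early return: keep the under-replicated /
   // pending / corrupt counters fresh even during safe mode.
   this.updateState();
   if (namesystem.isInSafeMode()) {
     return 0;
   }
   // ... schedule replication and invalidation work as before ...
 }
 {code}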



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7847) Modify NNThroughputBenchmark to be able to operate on a remote NameNode

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530455#comment-14530455
 ] 

Hudson commented on HDFS-7847:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote 
NameNode (Charles Lamb via Colin P. McCabe) (cmccabe: rev 
ffce9a3413277a69444fcb890460c885de56db69)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


 Modify NNThroughputBenchmark to be able to operate on a remote NameNode
 ---

 Key: HDFS-7847
 URL: https://issues.apache.org/jira/browse/HDFS-7847
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Fix For: 2.8.0

 Attachments: HDFS-7847.000.patch, HDFS-7847.001.patch, 
 HDFS-7847.002.patch, HDFS-7847.003.patch, HDFS-7847.004.patch, 
 HDFS-7847.005.patch, make_blocks.tar.gz


 Modify NNThroughputBenchmark to be able to operate on a NN that is not 
 in-process. A follow-on JIRA will modify it some more to allow quantifying 
 native and Java heap sizes, and some latency numbers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530466#comment-14530466
 ] 

Hudson commented on HDFS-8305:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HDFS-8305: HDFS INotify: the destination field of RenameOp should always end 
with the file name (cmccabe) (cmccabe: rev 
fcd4cb751665adb241081e42b3403c3856b6c6fe)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.1

 Attachments: HDFS-8305.001.patch, HDFS-8305.002.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.  Previously, in some 
 cases when using the old rename, this was not the case.  The format of 
 OP_EDIT_LOG_RENAME_OLD allows moving /f to /d/f to be represented as 
 RENAME(src=/f, dst=/d) or RENAME(src=/f, dst=/d/f). This change makes HDFS 
 always use the latter form. This, in turn, ensures that inotify will always 
 be able to consider the dst field as the full destination file name. This is 
 a compatible change since we aren't removing the ability to handle the first 
 form during edit log replay... we just no longer generate it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8219) setStoragePolicy with folder behavior is different after cluster restart

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530458#comment-14530458
 ] 

Hudson commented on HDFS-8219:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HDFS-8219. setStoragePolicy with folder behavior is different after cluster 
restart. (surendra singh lilhore via Xiaoyu Yao) (xyao: rev 
0100b155019496d077f958904de7d385697d65d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


 setStoragePolicy with folder behavior is different after cluster restart
 

 Key: HDFS-8219
 URL: https://issues.apache.org/jira/browse/HDFS-8219
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Peter Shi
Assignee: surendra singh lilhore
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: HDFS-8219.patch, HDFS-8219.unittest-norepro.patch


 Reproduce steps.
 1) mkdir named /temp
 2) put one file A under /temp
 3) change /temp storage policy to COLD
 4) use -getStoragePolicy to query file A's storage policy; it is the same as 
 /temp's
 5) change the /temp folder storage policy again; file A's storage policy 
 stays the same as the parent folder's.
 Then restart the cluster and do 3) and 4) again: file A's storage policy does 
 not change while the parent folder's storage policy changes. It behaves 
 differently.
 While debugging, I found the following code in 
 INodeFile.getStoragePolicyID:
 {code}
   public byte getStoragePolicyID() {
 byte id = getLocalStoragePolicyID();
 if (id == BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
   return this.getParent() != null ?
   this.getParent().getStoragePolicyID() : id;
 }
 return id;
   }
 {code}
 If the file does not have its own storage policy, it uses the parent's. But 
 after a cluster restart, the file turns out to have its own storage policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8289) DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8289:
-
Attachment: HDFS-8289.001.patch

 DFSStripedOutputStream uses an additional rpc call to getErasureCodingInfo
 -

 Key: HDFS-8289
 URL: https://issues.apache.org/jira/browse/HDFS-8289
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Yong Zhang
 Attachments: HDFS-8289.000.patch, HDFS-8289.001.patch


 {code}
 // ECInfo is restored from NN just before writing striped files.
 ecInfo = dfsClient.getErasureCodingInfo(src);
 {code}
 The rpc call above can be avoided by adding ECSchema to HdfsFileStatus.
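 With the proposal, the client-side read becomes roughly the following (a
 sketch; {{getECSchema()}} on {{HdfsFileStatus}} is the assumed addition, not
 an existing accessor):
 {code}
 // One RPC instead of two: the schema rides along with the file status
 // the client already fetches when opening the striped file.
 HdfsFileStatus stat = dfsClient.getFileInfo(src);
 ECSchema ecSchema = stat.getECSchema();   // hypothetical accessor added by the patch
 {code}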



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530409#comment-14530409
 ] 

Hadoop QA commented on HDFS-8332:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 23s | The applied patch generated  5 
new checkstyle issues (total was 498, now 499). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  8s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  99m 19s | Tests failed in hadoop-hdfs. |
| | | 143m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.TestDFSFinalize |
| Timed out tests | org.apache.hadoop.hdfs.TestFSOutputSummer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730775/HDFS-8332-000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10831/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10831/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10831/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10831/console |


This message was automatically generated.

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead these calls 
 should do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}
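 The proposed guard is straightforward (a sketch; only the {{checkOpen()}}
 call is the point here, the rest of the method is assumed unchanged):
 {code}
 public RemoteIterator<CacheDirectiveEntry> listCacheDirectives(
     CacheDirectiveInfo filter) throws IOException {
   checkOpen();   // throws java.io.IOException("Filesystem closed") once closed
   // ... existing listing logic unchanged ...
 }
 {code}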



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8314) Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the users

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530459#comment-14530459
 ] 

Hudson commented on HDFS-8314:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HDFS-8314. Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE 
to the users. Contributed by Li Lu. (wheat9: rev 
4da8490b512a33a255ed27309860859388d7c168)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the 
 users
 ---

 Key: HDFS-8314
 URL: https://issues.apache.org/jira/browse/HDFS-8314
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HDFS-8314-trunk.001.patch, HDFS-8314-trunk.002.patch, 
 HDFS-8314-trunk.003.patch, HDFS-8314-trunk.004.patch


 Currently HdfsServerConstants reads the configuration to set the values of 
 IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE, so they are configurable instead 
 of being constants.
 This jira proposes to move these two variables to the users in the 
 upper-level so that HdfsServerConstants only stores constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7758) Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530465#comment-14530465
 ] 

Hudson commented on HDFS-7758:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HDFS-7758. Retire FsDatasetSpi#getVolumes() and use 
FsDatasetSpi#getVolumeRefs() instead (Lei (Eddy) Xu via Colin P. McCabe) 
(cmccabe: rev 24d3a2d4fdd836ac9a5bc755a7fb9354f7a582b1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java


 Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead
 -

 Key: HDFS-7758
 URL: https://issues.apache.org/jira/browse/HDFS-7758
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.8.0

 Attachments: HDFS-7758.000.patch, HDFS-7758.001.patch, 
 HDFS-7758.002.patch, HDFS-7758.003.patch, HDFS-7758.004.patch, 
 HDFS-7758.005.patch, HDFS-7758.006.patch, HDFS-7758.007.patch, 
 HDFS-7758.008.patch, HDFS-7758.010.patch


 HDFS-7496 introduced reference counting of the volume instances being used, 
 to prevent race conditions when hot-swapping a volume.
 However, {{FsDatasetSpi#getVolumes()}} can still leak a volume instance 
 without increasing its reference count. In this JIRA, we retire 
 {{FsDatasetSpi#getVolumes()}} and propose {{FsDatasetSpi#getVolumeRefs()}} 
 and similar methods to access {{FsVolume}}. This makes sure that a consumer 
 of {{FsVolume}} always has a correct reference count.

[jira] [Updated] (HDFS-8333) Create EC zone should not need superuser privilege

2015-05-06 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8333:
-
Attachment: HDFS-8333.000.patch

Initial patch

 Create EC zone should not need superuser privilege
 --

 Key: HDFS-8333
 URL: https://issues.apache.org/jira/browse/HDFS-8333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang
 Attachments: HDFS-8333.000.patch


 Creating an EC zone should not need superuser privilege; for example, in a 
 multi-tenant scenario, common users manage only their own directories and 
 subdirectories.
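 A sketch of the relaxed check (assumptions: an FSPermissionChecker-style
 {{pc}} and an ownership check on the zone root; this is not the attached
 patch):
 {code}
 // Allow non-superusers to create an EC zone on directories they own,
 // instead of requiring superuser for every createErasureCodingZone call.
 if (!pc.isSuperUser()) {
   dir.checkOwner(pc, iip);   // assumed to throw AccessControlException otherwise
 }
 {code}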



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530532#comment-14530532
 ] 

Rakesh R commented on HDFS-8332:


The following checkstyle warnings are unrelated to my patch.
{code}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:711:41:
 'blocks' hides a field.
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:717:
 Line is longer than 80 characters (found 85).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:711:41:
 'blocks' hides a field.
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:717:
 Line is longer than 80 characters (found 85).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1:
 File length is 3,218 lines (max allowed is 2,000).
{code}

Also, jenkins complains about a few test case failures. These look unrelated 
to the patch as well.

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead these calls 
 should do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8332:
---
Labels: BB2015-05-TBR  (was: )

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead these calls 
 should do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll help: help for find on Windows

2015-05-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530801#comment-14530801
 ] 

Akira AJISAKA commented on HDFS-8310:
-

+1

 Fix TestCLI.testAll help: help for find on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexpAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code} 
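
Beyond the escaping shown above, here is a minimal sketch of one way to make
such an across-output regex comparison line-ending agnostic. This is an
illustration of the idea, not the committed patch; the class and method names
are made up.
{code}
import java.util.regex.Pattern;

// Sketch: collapse Windows line endings to LF so a single expected pattern
// matches on both Windows and Unix.
class LineEndingAgnosticMatch {
  static boolean matchesAcrossOutput(String expectedRegex, String actual) {
    String normalized = actual.replace("\r\n", "\n").replace("\r", "\n");
    // DOTALL lets '.' span the normalized newlines, matching across lines.
    return Pattern.compile(expectedRegex, Pattern.DOTALL)
        .matcher(normalized).matches();
  }
}
{code}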



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8219) setStoragePolicy with folder behavior is different after cluster restart

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530878#comment-14530878
 ] 

Hudson commented on HDFS-8219:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HDFS-8219. setStoragePolicy with folder behavior is different after cluster 
restart. (surendra singh lilhore via Xiaoyu Yao) (xyao: rev 
0100b155019496d077f958904de7d385697d65d9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java


 setStoragePolicy with folder behavior is different after cluster restart
 

 Key: HDFS-8219
 URL: https://issues.apache.org/jira/browse/HDFS-8219
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Peter Shi
Assignee: surendra singh lilhore
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: HDFS-8219.patch, HDFS-8219.unittest-norepro.patch


 Reproduce steps:
 1) mkdir a directory named /temp
 2) put one file A under /temp
 3) change the /temp storage policy to COLD
 4) use -getStoragePolicy to query file A's storage policy; it is the same as 
 /temp's
 5) change the /temp folder's storage policy again; file A's storage policy 
 stays the same as the parent folder's.
 Then restart the cluster.
 Do 3) and 4) again: file A's storage policy no longer changes when the parent 
 folder's storage policy changes. The behavior is different.
 While debugging I found this code
 in INodeFile.getStoragePolicyID:
 {code}
   public byte getStoragePolicyID() {
 byte id = getLocalStoragePolicyID();
 if (id == BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
   return this.getParent() != null ?
   this.getParent().getStoragePolicyID() : id;
 }
 return id;
   }
 {code}
 If the file does not have its own storage policy, it uses the parent's. But 
 after a cluster restart, the file turns out to have its own storage policy.
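
For illustration, a hedged, self-contained restatement of the lookup contract
above (the constant's value is an assumption here): an UNSPECIFIED local id
defers to the parent at read time, which is exactly the link that breaks if a
restart materializes a concrete local id on the file.
{code}
class StoragePolicySketch {
  static final byte BLOCK_STORAGE_POLICY_ID_UNSPECIFIED = 0; // assumed value

  // Same fallback as getStoragePolicyID above: an unspecified local id
  // follows the parent, so changing the parent changes the file, until a
  // replay turns the local id into a concrete value.
  static byte effectivePolicy(byte localId, byte parentPolicy) {
    return localId == BLOCK_STORAGE_POLICY_ID_UNSPECIFIED
        ? parentPolicy : localId;
  }
}
{code}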



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7758) Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530876#comment-14530876
 ] 

Hudson commented on HDFS-7758:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HDFS-7758. Retire FsDatasetSpi#getVolumes() and use 
FsDatasetSpi#getVolumeRefs() instead (Lei (Eddy) Xu via Colin P. McCabe) 
(cmccabe: rev 24d3a2d4fdd836ac9a5bc755a7fb9354f7a582b1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java


 Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead
 -

 Key: HDFS-7758
 URL: https://issues.apache.org/jira/browse/HDFS-7758
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.8.0

 Attachments: HDFS-7758.000.patch, HDFS-7758.001.patch, 
 HDFS-7758.002.patch, HDFS-7758.003.patch, HDFS-7758.004.patch, 
 HDFS-7758.005.patch, HDFS-7758.006.patch, HDFS-7758.007.patch, 
 HDFS-7758.008.patch, HDFS-7758.010.patch


 HDFS-7496 introduced reference-counting of the volume instances in use, to 
 prevent race conditions when hot-swapping a volume.
 However, {{FsDatasetSpi#getVolumes()}} can still leak a volume instance 
 without increasing its reference count. In this JIRA, we retire 
 {{FsDatasetSpi#getVolumes()}} and propose {{FsDatasetSpi#getVolumeRefs()}} 
 and related methods to access {{FsVolume}}. This makes sure that a consumer 
 of {{FsVolume}} always has correct 
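
As a hedged usage sketch of the reference-counted pattern, with
{{getVolumeRefs()}} assumed per the summary: a consumer holds a Closeable
{{FsVolumeReference}} for the duration of use, so a hot swap cannot release
the volume underneath it.
{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

class VolumeRefSketch {
  // Illustrative helper: close() on the reference releases the ref count,
  // so the volume is pinned exactly for the duration of the try block.
  static long availableBytes(FsVolumeReference ref) throws IOException {
    try (FsVolumeReference r = ref) {
      FsVolumeSpi volume = r.getVolume(); // safe while the reference is held
      return volume.getAvailable();
    }
  }
}
{code}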

[jira] [Commented] (HDFS-7847) Modify NNThroughputBenchmark to be able to operate on a remote NameNode

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530877#comment-14530877
 ] 

Hudson commented on HDFS-7847:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote 
NameNode (Charles Lamb via Colin P. McCabe) (cmccabe: rev 
ffce9a3413277a69444fcb890460c885de56db69)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


 Modify NNThroughputBenchmark to be able to operate on a remote NameNode
 ---

 Key: HDFS-7847
 URL: https://issues.apache.org/jira/browse/HDFS-7847
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Fix For: 2.8.0

 Attachments: HDFS-7847.000.patch, HDFS-7847.001.patch, 
 HDFS-7847.002.patch, HDFS-7847.003.patch, HDFS-7847.004.patch, 
 HDFS-7847.005.patch, make_blocks.tar.gz


 Modify NNThroughputBenchmark to be able to operate on an NN that is not 
 in-process. A follow-on JIRA will modify it further to quantify native and 
 Java heap sizes, and some latency numbers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530875#comment-14530875
 ] 

Hudson commented on HDFS-8305:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HDFS-8305: HDFS INotify: the destination field of RenameOp should always end 
with the file name (cmccabe) (cmccabe: rev 
fcd4cb751665adb241081e42b3403c3856b6c6fe)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.1

 Attachments: HDFS-8305.001.patch, HDFS-8305.002.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.  Previously, in some 
 cases when using the old rename, this was not the case.  The format of 
 OP_EDIT_LOG_RENAME_OLD allows moving /f to /d/f to be represented as 
 RENAME(src=/f, dst=/d) or RENAME(src=/f, dst=/d/f). This change makes HDFS 
 always use the latter form. This, in turn, ensures that inotify will always 
 be able to consider the dst field as the full destination file name. This is 
 a compatible change since we aren't removing the ability to handle the first 
 form during edit log replay... we just no longer generate it.
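
As a hedged sketch of the normalization described (names are illustrative,
not the patch): when the old-style rename moves {{/f}} into directory {{/d}},
the recorded destination becomes {{/d/f}}.
{code}
class RenameDstSketch {
  // If dst names an existing directory, the full destination is dst plus
  // the source's base name; otherwise dst already names the file.
  static String normalizedDst(String src, String dst, boolean dstIsDirectory) {
    if (!dstIsDirectory) {
      return dst;
    }
    String baseName = src.substring(src.lastIndexOf('/') + 1);
    return dst.equals("/") ? "/" + baseName : dst + "/" + baseName;
  }
}
{code}
For example, normalizedDst("/f", "/d", true) yields "/d/f", the form the edit
log now always generates, so inotify can treat dst as the full file name.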



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8314) Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the users

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530869#comment-14530869
 ] 

Hudson commented on HDFS-8314:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HDFS-8314. Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE 
to the users. Contributed by Li Lu. (wheat9: rev 
4da8490b512a33a255ed27309860859388d7c168)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java


 Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the 
 users
 ---

 Key: HDFS-8314
 URL: https://issues.apache.org/jira/browse/HDFS-8314
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HDFS-8314-trunk.001.patch, HDFS-8314-trunk.002.patch, 
 HDFS-8314-trunk.003.patch, HDFS-8314-trunk.004.patch


 Currently HdfsServerConstants reads the configuration to set the values of 
 IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE, so they are configurable instead 
 of being constants.
 This jira proposes to move these two variables to the users in the 
 upper-level so that HdfsServerConstants only stores constant values.
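
As a hedged sketch of that direction (the helper is illustrative): each
caller derives the buffer size from its own {{Configuration}} instead of
consulting a server-wide pseudo-constant.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

class BufferSizeSketch {
  // Read io.file.buffer.size at the call site; HdfsServerConstants can
  // then hold true constants only.
  static int ioFileBufferSize(Configuration conf) {
    return conf.getInt(
        CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
        CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
  }
}
{code}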



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8277) Safemode enter fails when Standby NameNode is down

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531507#comment-14531507
 ] 

Hadoop QA commented on HDFS-8277:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 13s | The applied patch generated  3 
new checkstyle issues (total was 587, now 587). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  5s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 169m  7s | Tests failed in hadoop-hdfs. |
| | | 212m  0s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730903/HDFS-8277_4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 185e63a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10835/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10835/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10835/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10835/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10835/console |


This message was automatically generated.

 Safemode enter fails when Standby NameNode is down
 --

 Key: HDFS-8277
 URL: https://issues.apache.org/jira/browse/HDFS-8277
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, HDFS, namenode
Affects Versions: 2.6.0
 Environment: HDP 2.2.0
Reporter: Hari Sekhon
Assignee: surendra singh lilhore
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8277.patch, HDFS-8277_1.patch, HDFS-8277_2.patch, 
 HDFS-8277_3.patch, HDFS-8277_4.patch


 HDFS fails to enter safemode when the Standby NameNode is down (eg. due to 
 AMBARI-10536).
 {code}hdfs dfsadmin -safemode enter
 safemode: Call From nn2/x.x.x.x to nn1:8020 failed on connection exception: 
 java.net.ConnectException: Connection refused; For more details see:  
 http://wiki.apache.org/hadoop/ConnectionRefused{code}
 This appears to be a bug: it is not trying both NameNodes the way the 
 standard HDFS client code does, and instead stops after getting a connection 
 refused from nn1, which is down. I verified that normal hadoop fs writes and 
 reads via the CLI did work at this time, using nn2. I happened to run this 
 command as the hdfs user on nn2, which was the surviving Active NameNode.
 After I re-bootstrapped the Standby NN to fix it, the command worked as 
 expected again.
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon
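
As a hedged sketch of the behavior the report asks for (how the per-NameNode
proxies are obtained is left out, and the flow is illustrative rather than
the eventual patch): attempt the call against every configured NameNode and
surface a failure only after all of them have been tried.
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

class SafeModeAllSketch {
  static void enterSafeModeOnAll(List<ClientProtocol> namenodes)
      throws IOException {
    IOException last = null;
    for (ClientProtocol nn : namenodes) {
      try {
        nn.setSafeMode(SafeModeAction.SAFEMODE_ENTER, false);
      } catch (IOException e) {
        last = e; // remember, but keep trying the other NameNode(s)
      }
    }
    if (last != null) {
      throw last; // report only after every NameNode was attempted
    }
  }
}
{code}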



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8334:

Attachment: HDFS-8334-HDFS-7285.0.patch

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8334:

Status: Patch Available  (was: Open)

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-05-06 Thread feng xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531663#comment-14531663
 ] 

feng xu commented on HDFS-8246:
---

Thank you Colin, could you elaborate on your review comment? For example, how 
could I use the feature with the HDFS C API and the shell command?

 Get HDFS file name based on block pool id and block id
 --

 Key: HDFS-8246
 URL: https://issues.apache.org/jira/browse/HDFS-8246
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: HDFS, hdfs-client, namenode
Reporter: feng xu
Assignee: feng xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-8246.0.patch


 This feature provides HDFS shell command and C/Java API to retrieve HDFS file 
 name based on block pool id and block id.
 1. The Java API in class DistributedFileSystem
 public String getFileName(String poolId, long blockId) throws IOException
 2. The C API in hdfs.c
 char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
 3. The HDFS shell command 
  hdfs dfs [generic options] -fn poolId blockId
 This feature is useful if you have an HDFS block file name in the local file 
 system and want to find out the related HDFS file name in the HDFS name space 
 (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
 Each HDFS block file name in the local file system contains both the block 
 pool id and the block id; for example, in the HDFS block file name 
 /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
 the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id 
 is 1073741825. The block pool id is uniquely related to an HDFS name 
 node/name space, and the block id is uniquely related to an HDFS file within 
 an HDFS name node/name space, so the combination of a block pool id and a 
 block id is uniquely related to an HDFS file name. 
 The shell command and C/Java API do not map the block pool id to a name node, 
 so it is the user's responsibility to talk to the correct name node in a 
 federation environment that has multiple name nodes. The block pool id is 
 used by the name node to check whether the user is talking to the correct 
 name node.
 The implementation is straightforward. The client request to get the HDFS 
 file name reaches the new method String getFileName(String poolId, long 
 blockId) in FSNamesystem in the name node through RPC, and the new method 
 does the following (a hedged sketch follows the list):
 (1)   Validate the block pool id.
 (2)   Create a Block based on the block id.
 (3)   Get the BlockInfoContiguous from the Block.
 (4)   Get the BlockCollection from the BlockInfoContiguous.
 (5)   Get the file name from the BlockCollection.
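
As a hedged sketch of steps (2)-(5): locking, pool-id validation and error
handling in the real FSNamesystem are omitted, and the calls follow the
internal 2.x-era APIs the steps imply.
{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;

class FileNameFromBlockSketch {
  static String getFileName(BlockManager bm, long blockId) throws IOException {
    Block block = new Block(blockId);                    // step (2)
    BlockInfoContiguous info = bm.getStoredBlock(block); // step (3)
    if (info == null) {
      throw new IOException("Block " + blockId + " not found");
    }
    BlockCollection bc = info.getBlockCollection();      // step (4)
    return bc.getName();                                 // step (5)
  }
}
{code}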



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531562#comment-14531562
 ] 

Jing Zhao commented on HDFS-6757:
-

Thanks for continuing the work here, Haohui. Some comments on the latest 013 
patch:
# Any reason to use {{o1.lastUpdate}} but {{o2.getLastUpdate()}} in the
  following code?
{code}
+  private final PriorityQueue<Lease> sortedLeases = new PriorityQueue<Lease>(512,
+      new Comparator<Lease>() {
+        @Override
+        public int compare(Lease o1, Lease o2) {
+          return Long.signum(o1.lastUpdate - o2.getLastUpdate());
+        }
+      });
{code}
# Let's keep this Precondition check in {{getNumUnderConstructionBlocks}} 
considering this patch is a big change.
{code}
-  Preconditions.checkState(cons.isUnderConstruction());
{code}
# Similarly let's put this check into {{serializeFilesUCSection}} or print a 
warning msg if the inode is not UC.
{code}
Preconditions.checkState(node.isUnderConstruction());
{code}
# What is the difference between {{removeLeases}} and {{removeLeaseById}}?
# We'd better add a check to make sure the iip and p are both valid,
  i.e., starting from root, to prevent a wrong path written
  into the CloseOp editlog.
{code}
INodesInPath iip = INodesInPath.fromINode(fsd.getInode(id));
p = iip.getPath();
boolean completed = fsnamesystem.internalReleaseLease(
leaseToCheck, p, iip,
HdfsServerConstants.NAMENODE_LEASE_HOLDER);
{code}
# Any reason for adding this IOException when loading the editlog?
{code}
  if (file.isUnderConstruction()) {
    if (fsNamesys.leaseManager.getLease(file) == null) {
      throw new IOException("UC file " + path + " does not have a " +
          "corresponding lease");
    }
{code}
# One question is, after we record the INode id in the lease manager, do we
  still want to record full paths of UC files in the FSImage?
# In {{INodeFile#destroyAndCollectBlocks}}, which may also be called when
  deleting a snapshot, {{removedUCFiles}} may be null? If yes, we need a unit 
test to catch the case.
{code}
FileUnderConstructionFeature uc = getFileUnderConstructionFeature();
if (uc != null) {
  removedUCFiles.add(getId());
}
{code}
# When {{INodeFile#cleanSubtree}} is called because the file or its ancestor is 
deleted, even if the file is in a snapshot, we should check its UC state and 
update removedUCFiles accordingly.
# The changes in FileDiff are not needed.
# Maybe we can add extra tests for the delete and rename operations when UC 
files are involved. Specifically, a test on the internal lease recovery for 
renamed UC files may be useful.
# Nit: the following code needs reformat:
{code}
+  private final HashMap<Long, Lease> leasesById = new
+      HashMap<>();
{code}
and
{code}
LOG.debug("Lease recovery for inode " + id + " is complete. " +
    "File" +
    " closed.");
{code}
and
{code}
-   *
-   * @param bsps
+   *  @param bsps
{code}

 Simplify lease manager with INodeID
 ---

 Key: HDFS-6757
 URL: https://issues.apache.org/jira/browse/HDFS-6757
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: BB2015-05-TBR
 Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
 HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
 HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
 HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
 HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch


 Currently the lease manager records leases based on path instead of inode 
 ids. Therefore, the lease manager needs to carefully keep track of the path 
 of active leases during renames and deletes. This can be a non-trivial task.
 This jira proposes to simplify the logic by tracking leases using inodeids 
 instead of paths.
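
As a hedged, minimal sketch of the proposed bookkeeping (the real
LeaseManager also tracks holders and expiry; names here are illustrative):
keying by inode id makes renames a no-op for the lease manager.
{code}
import java.util.HashMap;
import java.util.Map;

class LeaseByIdSketch {
  static final class Lease {
    final long inodeId;
    long lastUpdate;
    Lease(long inodeId) { this.inodeId = inodeId; }
  }

  private final Map<Long, Lease> leasesById = new HashMap<>();

  // Renaming or moving the file changes its path but not its inode id,
  // so no lease bookkeeping is needed on rename.
  void renewLease(long inodeId, long now) {
    leasesById.computeIfAbsent(inodeId, Lease::new).lastUpdate = now;
  }

  void removeLeaseById(long inodeId) {
    leasesById.remove(inodeId);
  }
}
{code}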



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8334:

Description: Our current TestDFSStripedInputStream is actually the 
end-2-end test, and should be named TestWriteReadStripedFile, and should 
eventually subclass TestWriteRead. Current TestReadStripedFile is actually the 
internal unit testing class for DFSStripedInputStream

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch


 Our current TestDFSStripedInputStream is actually the end-2-end test, and 
 should be named TestWriteReadStripedFile, and should eventually subclass 
 TestWriteRead. Current TestReadStripedFile is actually the internal unit 
 testing class for DFSStripedInputStream



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531620#comment-14531620
 ] 

Jing Zhao commented on HDFS-7678:
-

Thanks Zhe!

bq. So my proposal is that we can use stripe as a read unit. But a stripe is 
not necessarily dataBlkNum cells. Instead, it may cover multiple cells on each 
internal block, as long as it has the same span on all internal blocks.

Actually this is what I mean in my last comment :) We do not need to read a 
complete stripe for pread because of the reason you mentioned.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2484) checkLease should throw FileNotFoundException when file does not exist

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531676#comment-14531676
 ] 

Hudson commented on HDFS-2484:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7751 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7751/])
HDFS-2484. checkLease should throw FileNotFoundException when file does not 
exist. Contributed by Rakesh R. (shv: rev 
c75cfa29cfc527242837d80962688aa53c111e72)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 checkLease should throw FileNotFoundException when file does not exist
 --

 Key: HDFS-2484
 URL: https://issues.apache.org/jira/browse/HDFS-2484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.22.0, 2.0.0-alpha
Reporter: Konstantin Shvachko
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HDFS-2484.00.patch, HDFS-2484.01.patch, 
 HDFS-2484.02.patch


 When a file is deleted during its creation, {{FSNamesystem.checkLease(String 
 src, String holder)}} throws {{LeaseExpiredException}}. It would be more 
 informative if it threw {{FileNotFoundException}}.
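
As a hedged, simplified sketch of the change in spirit (parameter types are
simplified; this is not the actual FSNamesystem code):
{code}
import java.io.FileNotFoundException;
import org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException;

class CheckLeaseSketch {
  // A missing inode now surfaces as FileNotFoundException; a lease problem
  // on an existing file still raises LeaseExpiredException.
  static void checkLease(String src, String holder, Object inode,
      String actualHolder)
      throws FileNotFoundException, LeaseExpiredException {
    if (inode == null) {
      throw new FileNotFoundException("File does not exist: " + src);
    }
    if (!holder.equals(actualHolder)) {
      throw new LeaseExpiredException(
          "Lease mismatch on " + src + ": owned by " + actualHolder);
    }
  }
}
{code}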



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8289) DFSStripedOutputStream uses an additional rpc all to getErasureCodingInfo

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531675#comment-14531675
 ] 

Jing Zhao commented on HDFS-8289:
-

Thanks for working on this, Yong! Your patch looks good to me. Some minors:
# We can also include this ECSchema information in the result of the 
{{getFileInfo}} call.
# After adding ECSchema into HdfsFileStatus, we can now use it to decide which 
outputstream to create in {{DFSOutputStream#newStreamForCreate}} (see the 
sketch after this list):
{code}
  if(stat.getReplication() == 0) {
out = new DFSStripedOutputStream(dfsClient, src, stat,
flag, progress, checksum, favoredNodes);
  } else {
out = new DFSOutputStream(dfsClient, src, stat,
flag, progress, checksum, favoredNodes);
  }
{code}
# Similarly we can call {{getFileInfo}} in {{DFSClient#open}} to see which 
inputstream to create.
# Any reason to move ECSchemaProto definition from erasurecoding.proto to 
hdfs.proto?
# Let's add some unit tests about this change.
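
For comment #2, a hedged sketch of what the schema-based dispatch could look
like once HdfsFileStatus carries the schema; the {{getECSchema()}} accessor
is an assumption about the patch, while the constructor calls mirror the
snippet above.
{code}
// Hedged variant of the snippet above; getECSchema() is assumed.
ECSchema schema = stat.getECSchema();
if (schema != null) {
  out = new DFSStripedOutputStream(dfsClient, src, stat,
      flag, progress, checksum, favoredNodes);
} else {
  out = new DFSOutputStream(dfsClient, src, stat,
      flag, progress, checksum, favoredNodes);
}
{code}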

 DFSStripedOutputStream uses an additional rpc all to getErasureCodingInfo
 -

 Key: HDFS-8289
 URL: https://issues.apache.org/jira/browse/HDFS-8289
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Yong Zhang
 Attachments: HDFS-8289.000.patch, HDFS-8289.001.patch


 {code}
 // ECInfo is restored from NN just before writing striped files.
 ecInfo = dfsClient.getErasureCodingInfo(src);
 {code}
 The rpc call above can be avoided by adding ECSchema to HdfsFileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8289) DFSStripedOutputStream uses an additional rpc all to getErasureCodingInfo

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531694#comment-14531694
 ] 

Jing Zhao commented on HDFS-8289:
-

Ahh, I see why you moved ECSchemaProto into hdfs.proto. Please skip comment #4.

 DFSStripedOutputStream uses an additional rpc all to getErasureCodingInfo
 -

 Key: HDFS-8289
 URL: https://issues.apache.org/jira/browse/HDFS-8289
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Yong Zhang
 Attachments: HDFS-8289.000.patch, HDFS-8289.001.patch


 {code}
 // ECInfo is restored from NN just before writing striped files.
 ecInfo = dfsClient.getErasureCodingInfo(src);
 {code}
 The rpc call above can be avoided by adding ECSchema to HdfsFileStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7672) Erasure Coding: consolidate streamer coordination logic and handle failure when writing striped blocks

2015-05-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531700#comment-14531700
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7672:
---

 shouldLocateFollowingBlock doesn't look good but since it's inherited I'll 
 file another JIRA to address it.

I found that we could actually check whether block == null and not need 
another variable.  I will fix it in HDFS-8323.

 Erasure Coding: consolidate streamer coordination logic and handle failure 
 when writing striped blocks
 --

 Key: HDFS-7672
 URL: https://issues.apache.org/jira/browse/HDFS-7672
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: HDFS-7285

 Attachments: h7672_20150504.patch, h7672_20150504b.patch, 
 h7672_20150504c.patch, h7672_20150505.patch, h7672_20150505b.patch


 In the *striping* case, for (6, 3)-Reed-Solomon, a client writes to 6 data 
 blocks and 3 parity blocks concurrently.  We need to handle datanode or 
 network failures when writing an EC BlockGroup.
 We also refactor the existing code in DFSStripedOutputStream and 
 StripedDataStreamer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7672) Erasure Coding: consolidate streamer coordination logic and handle failure when writing striped blocks

2015-05-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531724#comment-14531724
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7672:
---

They will be aligned; otherwise, it won't get the next located block.

 Erasure Coding: consolidate streamer coordination logic and handle failure 
 when writing striped blocks
 --

 Key: HDFS-7672
 URL: https://issues.apache.org/jira/browse/HDFS-7672
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: HDFS-7285

 Attachments: h7672_20150504.patch, h7672_20150504b.patch, 
 h7672_20150504c.patch, h7672_20150505.patch, h7672_20150505b.patch


 In the *striping* case, for (6, 3)-Reed-Solomon, a client writes to 6 data 
 blocks and 3 parity blocks concurrently.  We need to handle datanode or 
 network failures when writing an EC BlockGroup.
 We also refactor the existing code in DFSStripedOutputStream and 
 StripedDataStreamer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531734#comment-14531734
 ] 

Jing Zhao commented on HDFS-8334:
-

Yeah, this makes sense to me. +1 for the patch.

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch


 Our current TestDFSStripedInputStream is actually the end-2-end test, and 
 should be named TestWriteReadStripedFile, and should eventually subclass 
 TestWriteRead. Current TestReadStripedFile is actually the internal unit 
 testing class for DFSStripedInputStream



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531703#comment-14531703
 ] 

Jing Zhao commented on HDFS-7678:
-

bq. To reduce dependency on codec and output stream, I will keep adding tests 
to TestReadStripedFile

I think to make the test simple, we can use end-to-end tests like 
TestDFSStripedInputStream. But the test setting, like which and how many DNs to 
shutdown, and which offset/length to read, should be created according to the 
internal logic so that we can cover all the cases. 

bq. would be great if you can review HDFS-8334 first

Sure, will do.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531718#comment-14531718
 ] 

Zhe Zhang commented on HDFS-7678:
-

bq. I think to make the test simple, we can use end-to-end tests like 
TestDFSStripedInputStream. 
End-to-end testing is indeed simpler; in order to fake a striped file we have 
to add a considerable chunk of new code (metadata first, then fake blocks on 
the DN). But I think it's worth it because it makes the test faster and also 
more accurately tests the input stream logic. The content-mismatch failures 
are pretty hard to debug if we go through all levels (writing, codec); I just 
spent several good hours finding out that the above test failure is a 
codec-level issue :( Once we have this isolated test ready we don't need to 
worry about future bugs in the output stream and codec.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531730#comment-14531730
 ] 

Zhe Zhang commented on HDFS-8334:
-

Good question.

bq. after we have the writing EC file functionality
Both the striped writing and reading logic are fairly complex (as is the 
codec). I think we should have true unit testing for all three, as well as 
end-to-end testing. 

One example is [described | 
https://issues.apache.org/jira/browse/HDFS-7678?focusedCommentId=14531718&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14531718]
 under HDFS-7678. Also, we have never verified that parity blocks are 
successfully written to DNs with correct content.

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch


 Our current TestDFSStripedInputStream is actually the end-2-end test, and 
 should be named TestWriteReadStripedFile, and should eventually subclass 
 TestWriteRead. Current TestReadStripedFile is actually the internal unit 
 testing class for DFSStripedInputStream



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531732#comment-14531732
 ] 

Jing Zhao commented on HDFS-7678:
-

bq. The content-mismatch failures are pretty hard to debug if we go through all 
levels (writing, codec)

I see. This makes sense to me and I felt this pain as well before :)

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8334:
---

 Summary: Erasure coding: rename DFSStripedInputStream related test 
classes
 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531721#comment-14531721
 ] 

Jing Zhao commented on HDFS-8334:
-

Thanks for working on this, Zhe! One question is, after we have the writing EC 
file functionality, why do we still need to inject blocks and mimic block 
reports to create files in the unit tests? Maybe we can use this chance to 
clean up the testing code?

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch


 Our current TestDFSStripedInputStream is actually the end-2-end test, and 
 should be named TestWriteReadStripedFile, and should eventually subclass 
 TestWriteRead. Current TestReadStripedFile is actually the internal unit 
 testing class for DFSStripedInputStream



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8334) Erasure coding: rename DFSStripedInputStream related test classes

2015-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531737#comment-14531737
 ] 

Jing Zhao commented on HDFS-8334:
-

bq. Also, we have never verified that parity blocks are successfully written to 
DNs with correct content.

I guess the current TestDFSStripedOutputStream#verifyParity covers this?

 Erasure coding: rename DFSStripedInputStream related test classes
 -

 Key: HDFS-8334
 URL: https://issues.apache.org/jira/browse/HDFS-8334
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8334-HDFS-7285.0.patch


 Our current TestDFSStripedInputStream is actually the end-2-end test, and 
 should be named TestWriteReadStripedFile, and should eventually subclass 
 TestWriteRead. Current TestReadStripedFile is actually the internal unit 
 testing class for DFSStripedInputStream



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531592#comment-14531592
 ] 

Zhe Zhang commented on HDFS-7678:
-

Thanks Jing for the in-depth comments! The discussion is very helpful at this 
stage and I'm glad we are uncovering some of the fundamental trade-offs.

bq. The recovery + MaxPortion logic may have some issue since pread requires us 
to provide precise reading range to the BlockReader.
This is a very good catch. I added a simple fix: sanity-check the read 
range against the internal block before sending the request. I will include 
it in the next rev.

bq. Currently I'm thinking it may be easier to use cell and stripe as the basic 
reading and recovery unit to avoid complexity.
As we discussed under HDFS-8281 this indeed simplifies implementation. But my 
concern is that pread should be even more random than stateful reads, and 
therefore the overhead of always reading an entire stripe will be more 
significant. Under HDFS-7782 we specifically made optimizations to avoid an 
additional memory copy from intermediate buffer to application buffer. We also 
changed the signature of {{DFSInputStream#actualGetFromOneDataNode}} to 
avoid creating the block reader multiple times. 

So my proposal is that we can use _stripe_ as a read unit. But a _stripe_ is 
not necessarily {{dataBlkNum}} cells. Instead, it may cover multiple cells on 
each internal block, as long as it has the same span on all internal blocks. 
For example (if we refer to the header of {{DFSStripedInputStream}}), a 
_stripe_ can be cell_3 ~ cell_8. And we don't always have to allocate an 
intermediate buffer. If the application read range happens to form a stripe 
(which should be fairly common), we directly read into it. I believe this 
doesn't add too much complexity to using the current {{StripeRange}} as a 
reading unit, but please let me know your opinion. For example, if we apply 
this idea to the current patch, we would first calculate {{maxPortion}} as full 
cells. We then always read {{maxPortion}} even before detecting any failures. 
Finally, we use a buffer if the pread range is smaller than the {{maxPortion}} 
stripe.
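
To make the proposal concrete, here is a hedged sketch of the equal-span
arithmetic (an illustration, not patch code): whatever the requested range,
the read unit covers the same number of whole cells on every internal block.
{code}
class StripeSpanSketch {
  // Span in bytes that a pread of [offset, offset + length) needs from
  // EVERY internal block when the unit is whole cells with equal extent
  // on all blocks; cellSize and dataBlkNum come from the schema.
  static long spanPerInternalBlock(long offset, long length,
      long cellSize, int dataBlkNum) {
    long stripeWidth = cellSize * dataBlkNum;               // bytes per stripe
    long firstStripe = offset / stripeWidth;
    long lastStripe = (offset + length - 1) / stripeWidth;
    // One cell per stripe per internal block, so the span is identical on
    // all internal blocks; e.g. with 3 data blocks, cell_3 ~ cell_8 is two
    // whole stripes, i.e. two cells per internal block.
    return (lastStripe - firstStripe + 1) * cellSize;
  }
}
{code}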

bq. In the following code, suppose we already have one failure and one success 
reading tasks before, xxx it will still be better to avoid the unnecessary 
operations here.
The first {{if}} and first {{else if}} are used to get out of the loop quickly 
when we are surely successful or failed. I can change it to 3 independent if 
statements.

bq. But I strongly suggest to have a separate jira to test all the internal 
logic inside of DFSStripedInputStream and fix possible bugs.
This is a great point. A related minor issue is our current 
{{TestDFSStripedInputStream}} is actually the end-2-end test, and should be 
named {{TestWriteReadStripedFile}}, and should eventually subclass 
{{TestWriteRead}}. Current {{TestReadStripedFile}} is actually the internal 
unit testing class for {{DFSStripedInputStream}}. I filed HDFS-8334 to address 
this. Another related issue is that we never tested the parity data writing 
logic in {{DFSStripedOutputStream}}. 
{{TestDFSStripedOutputStream#verifyParity}} doesn't actually fetch the stored 
parity blocks.

I agree with the other issues and will address in the next rev.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8320 started by Zhe Zhang.
---
 Erasure coding: consolidate striping-related terminologies
 --

 Key: HDFS-8320
 URL: https://issues.apache.org/jira/browse/HDFS-8320
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Right now we are doing striping-based I/O in a number of places:
 # Client output stream (HDFS-7889)
 # Client input stream
 #* pread (HDFS-7782, HDFS-7678)
 #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
 # DN reconstruction (HDFS-7348)
 In each place we use one or multiple of the following terminologies:
 # Cell
 # Stripe
 # Block group
 # Internal block
 # Chunk
 This JIRA aims to systematically define these terminologies in relation with 
 each other and in the context of the containing file. For example, a cell 
 belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, 
 and its logical index _k_ in the file can be calculated, as illustrated below.
 With the above consolidation, hopefully we can further consolidate striping 
 I/O codes.
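
As a hedged illustration of that indexing (row-major over stripes, with
dataBlkNum data blocks per group):
{code}
class CellIndexSketch {
  // k from (i, j): stripe i, internal block j.
  static long logicalIndex(long stripe, int block, int dataBlkNum) {
    return stripe * dataBlkNum + block;
  }
  // i and j back from k.
  static long stripeOf(long k, int dataBlkNum) { return k / dataBlkNum; }
  static int blockOf(long k, int dataBlkNum) { return (int) (k % dataBlkNum); }
}
{code}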



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8335) FSNamesystem/FSDirStatAndListingOp getFileInfo and getListingInt construct FSPermissionChecker regardless of isPermissionEnabled()

2015-05-06 Thread David Bryson (JIRA)
David Bryson created HDFS-8335:
--

 Summary: FSNamesystem/FSDirStatAndListingOp getFileInfo and 
getListingInt construct FSPermissionChecker regardless of isPermissionEnabled()
 Key: HDFS-8335
 URL: https://issues.apache.org/jira/browse/HDFS-8335
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0, 2.6.0, 2.5.0, 3.0.0, 2.8.0
Reporter: David Bryson
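
As a hedged sketch of the pattern the summary points at (a simplified,
self-contained stand-in, not the actual FSNamesystem/FSDirStatAndListingOp
code): only construct the permission checker when permissions are actually
enforced.
{code}
class PermissionGateSketch {
  // Stand-in for the real FSPermissionChecker.
  static final class FSPermissionChecker {
    void checkTraverse(String path) { /* permission checks would go here */ }
  }

  private final boolean permissionEnabled;
  PermissionGateSketch(boolean permissionEnabled) {
    this.permissionEnabled = permissionEnabled;
  }

  void getFileInfo(String src) {
    // Build the checker only when it will be used, instead of
    // unconditionally as the summary describes.
    FSPermissionChecker pc =
        permissionEnabled ? new FSPermissionChecker() : null;
    if (pc != null) {
      pc.checkTraverse(src);
    }
    // ... resolve the path and return the file status ...
  }
}
{code}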






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2484) checkLease should throw FileNotFoundException when file does not exist

2015-05-06 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531623#comment-14531623
 ] 

Konstantin Shvachko commented on HDFS-2484:
---

What is BB2015-05-TBR anyway - a serial number of what?

 checkLease should throw FileNotFoundException when file does not exist
 --

 Key: HDFS-2484
 URL: https://issues.apache.org/jira/browse/HDFS-2484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.22.0, 2.0.0-alpha
Reporter: Konstantin Shvachko
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-2484.00.patch, HDFS-2484.01.patch, 
 HDFS-2484.02.patch


 When a file is deleted during its creation, {{FSNamesystem.checkLease(String 
 src, String holder)}} throws {{LeaseExpiredException}}. It would be more 
 informative if it threw {{FileNotFoundException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2484) checkLease should throw FileNotFoundException when file does not exist

2015-05-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2484:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thank you Rakesh.

 checkLease should throw FileNotFoundException when file does not exist
 --

 Key: HDFS-2484
 URL: https://issues.apache.org/jira/browse/HDFS-2484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.22.0, 2.0.0-alpha
Reporter: Konstantin Shvachko
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HDFS-2484.00.patch, HDFS-2484.01.patch, 
 HDFS-2484.02.patch


 When a file is deleted during its creation, {{FSNamesystem.checkLease(String 
 src, String holder)}} throws {{LeaseExpiredException}}. It would be more 
 informative if it threw {{FileNotFoundException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531692#comment-14531692
 ] 

Zhe Zhang commented on HDFS-7678:
-

Thanks for the feedback Jing. 

It turns out that the test will fail because on the codec level we can 
only decode with all parity blocks present. HADOOP-11566 will address it. To 
reduce dependency on codec and output stream, I will keep adding tests to 
{{TestReadStripedFile}} (would be great if you can review HDFS-8334 first) 
instead of the end-to-end test.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from a BlockGroup in either striping or 
 contiguous layout. Corrupt blocks can be known before reading (told by the 
 namenode) or discovered during reading. The block group reader needs to do 
 decoding work when some blocks are found corrupt.
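 A hedged illustration of the decode-on-read path described above (the reader 
 and decoder names are assumptions, not the patch's API):
 {code}
 // Sketch for RS(6,3): read one stripe, then decode only when some of
 // the 6 data cells are missing and at most 3 of the 9 cells are lost.
 ByteBuffer[] cells = readStripe(blockGroup, stripeIndex);  // 6 data + 3 parity
 int[] erasedIndexes = findErasedIndexes(cells);            // e.g. {2}
 if (erasedIndexes.length > 0 && erasedIndexes.length <= 3) {
   rawDecoder.decode(cells, erasedIndexes, outputs);        // rebuild lost cells
 }
 {code}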



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7672) Erasure Coding: consolidate streamer coordination logic and handle failure when writing striped blocks

2015-05-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531708#comment-14531708
 ] 

Zhe Zhang commented on HDFS-7672:
-

Thanks Nicholas. Actually it's also weird to maintain a list of {{endBlock}}. 
With fast and slow writers we are not guaranteed that the head {{endBlock}} of 
each streamer's queue is aligned on the same block group.

 Erasure Coding: consolidate streamer coordination logic and handle failure 
 when writing striped blocks
 --

 Key: HDFS-7672
 URL: https://issues.apache.org/jira/browse/HDFS-7672
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: HDFS-7285

 Attachments: h7672_20150504.patch, h7672_20150504b.patch, 
 h7672_20150504c.patch, h7672_20150505.patch, h7672_20150505b.patch


 In the *striping* case, for (6, 3)-Reed-Solomon, a client writes to 6 data 
 blocks and 3 parity blocks concurrently.  We need to handle datanode or 
 network failures when writing an EC BlockGroup.
 We also refactor the existing code in DFSStripedOutputStream and 
 StripedDataStreamer.
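 A hedged sketch of the failure bound implied above (names illustrative):
 {code}
 // For (6, 3)-Reed-Solomon, a block group write survives at most 3
 // failed streamers; with fewer than 6 healthy writers it must fail.
 int healthy = 0;
 for (StripedDataStreamer s : streamers) {
   if (!s.isFailed()) {
     healthy++;
   }
 }
 if (healthy < 6) {  // the data-block count for RS(6,3)
   throw new IOException("Too many failed streamers: " + (9 - healthy));
 }
 {code}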



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8310) Fix TestCLI.testAll 'help: help for find' on Windows

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530997#comment-14530997
 ] 

Hudson commented on HDFS-8310:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7746 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7746/])
HDFS-8310. Fix TestCLI.testAll 'help: help for find' on Windows. (Kiran Kumar M 
R via Xiaoyu Yao) (xyao: rev 7a26d174aff9535f7a60711bee586e225891b383)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/RegexpAcrossOutputComparator.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix TestCLI.testAll 'help: help for find' on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexpAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both 

[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531489#comment-14531489
 ] 

Hadoop QA commented on HDFS-8332:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  9 
new checkstyle issues (total was 616, now 617). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  5s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m  4s | Tests failed in hadoop-hdfs. |
| | | 211m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730900/HDFS-8332-001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 185e63a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10834/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10834/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10834/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10834/console |


This message was automatically generated.

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead, these calls should 
 do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7833) DataNode reconfiguration does not recalculate valid volumes required, based on configured failed volumes tolerated.

2015-05-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7833:

Attachment: HDFS-7833.003.patch

Hi, [~cnauroth] Thanks for looking into this. I rebased the patch in v003.

 DataNode reconfiguration does not recalculate valid volumes required, based 
 on configured failed volumes tolerated.
 ---

 Key: HDFS-7833
 URL: https://issues.apache.org/jira/browse/HDFS-7833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Lei (Eddy) Xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-7833.000.patch, HDFS-7833.001.patch, 
 HDFS-7833.002.patch, HDFS-7833.003.patch


 DataNode reconfiguration never recalculates 
 {{FsDatasetImpl#validVolsRequired}}.  This may cause incorrect behavior of 
 the {{dfs.datanode.failed.volumes.tolerated}} property if reconfiguration 
 causes the DataNode to run with a different total number of volumes.
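 A hedged sketch of the recalculation being asked for (the config keys are 
 real; the surrounding code is illustrative):
 {code}
 // On reconfiguration, derive the required volume count from the
 // *current* volume total instead of the value cached at startup.
 final int volFailuresTolerated = conf.getInt(
     DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY,
     DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT);
 this.validVolsRequired = getVolumes().size() - volFailuresTolerated;
 {code}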



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-05-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531297#comment-14531297
 ] 

Jesse Yates commented on HDFS-6440:
---

More comments, as I actually get back into the code:
{quote}
In StandbyCheckpointer#doCheckpoint, unless I'm missing something, I don't 
think the variable ie can ever be non-null, and yet we check for whether or 
not it's null later in the method to determine if we should shut down.
{quote}
It can either be an InterruptedException or an IOException when transferring the 
checkpoint. Interrupted (ie) is thrown if we are interrupted while waiting for 
any checkpoint to complete. IOE is thrown if there is an execution exception 
when doing the checkpoint.

After we get out of waiting for the uploads, if we got an ioe or an ie then 
we force the rest of the threads that we started for the image transfer to quit 
by shutting down the threadpool (and then forcibly shutting it down shortly 
after that). We do checks again for each exception to ensure we throw the right 
one back up.

We could wrap the exceptions into a parent exception and then just throw that 
back up to the caller (resulting in fewer checks), but I didn't want to change 
the method signature b/c an interrupt means something very different from an 
ioe.

Can do whatever you want there though, it doesn't really matter to me. We just 
need to make sure either exception is rethrown.
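A sketch of the control flow described above (variable names follow the 
comment; the real {{doCheckpoint}} differs):
{code}
InterruptedException ie = null;
IOException ioe = null;
try {
  waitForCheckpointUploads();  // wait for all image transfers to finish
} catch (InterruptedException e) {
  ie = e;
} catch (ExecutionException e) {
  ioe = new IOException("Exception during checkpoint upload", e);
}
if (ie != null || ioe != null) {
  executor.shutdown();     // ask the remaining transfer threads to quit
  executor.shutdownNow();  // then force them down shortly after
}
// Rethrow the right exception: an interrupt means something very
// different from an IO failure, so the signature keeps both.
if (ie != null) throw ie;
if (ioe != null) throw ioe;
{code}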

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8277) Safemode enter fails when Standby NameNode is down

2015-05-06 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8277:
-
Attachment: HDFS-8277_4.patch

 Safemode enter fails when Standby NameNode is down
 --

 Key: HDFS-8277
 URL: https://issues.apache.org/jira/browse/HDFS-8277
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, HDFS, namenode
Affects Versions: 2.6.0
 Environment: HDP 2.2.0
Reporter: Hari Sekhon
Assignee: surendra singh lilhore
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8277.patch, HDFS-8277_1.patch, HDFS-8277_2.patch, 
 HDFS-8277_3.patch, HDFS-8277_4.patch


 HDFS fails to enter safemode when the Standby NameNode is down (eg. due to 
 AMBARI-10536).
 {code}hdfs dfsadmin -safemode enter
 safemode: Call From nn2/x.x.x.x to nn1:8020 failed on connection exception: 
 java.net.ConnectException: Connection refused; For more details see:  
 http://wiki.apache.org/hadoop/ConnectionRefused{code}
 This appears to be a bug in that it's not trying both NameNodes like the 
 standard hdfs client code does, and is instead stopping after getting a 
 connection refused from nn1 which is down. I verified normal hadoop fs writes 
 and reads via cli did work at this time, using nn2. I happened to run this 
 command as the hdfs user on nn2 which was the surviving Active NameNode.
 After I re-bootstrapped the Standby NN to fix it the command worked as 
 expected again.
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon
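 A hedged sketch of the expected HA-aware behaviour (the proxy helper is 
 hypothetical; the actual fix may differ):
 {code}
 // Apply the command to every configured NameNode instead of stopping
 // at the first connection failure, so a downed standby cannot block
 // the surviving active NameNode.
 for (ClientProtocol nn : getProxiesForAllNameNodes(conf, nsId)) {
   try {
     nn.setSafeMode(SafeModeAction.SAFEMODE_ENTER, false);
   } catch (ConnectException e) {
     continue;  // this NameNode is down; try the next one
   }
 }
 {code}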



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7833) DataNode reconfiguration does not recalculate valid volumes required, based on configured failed volumes tolerated.

2015-05-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531135#comment-14531135
 ] 

Chris Nauroth commented on HDFS-7833:
-

Hello [~eddyxu].  I apologize for the delay.  Unfortunately, the patch needs a 
rebase now.  Would you please upload a v003?  I'll prioritize reviewing it.

The new test looks like what I had in mind.  I'll review in detail after the 
rebase.  Thanks for incorporating the feedback.

 DataNode reconfiguration does not recalculate valid volumes required, based 
 on configured failed volumes tolerated.
 ---

 Key: HDFS-7833
 URL: https://issues.apache.org/jira/browse/HDFS-7833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Lei (Eddy) Xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-7833.000.patch, HDFS-7833.001.patch, 
 HDFS-7833.002.patch


 DataNode reconfiguration never recalculates 
 {{FsDatasetImpl#validVolsRequired}}.  This may cause incorrect behavior of 
 the {{dfs.datanode.failed.volumes.tolerated}} property if reconfiguration 
 causes the DataNode to run with a different total number of volumes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-05-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531148#comment-14531148
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7980:
---

The patch looks good.  Just a question on the test:
{code}
// blk_42 is finalized.
long receivedBlockId = 42;  // arbitrary
BlockInfoContiguous receivedBlock = addBlockToBM(receivedBlockId);
{code}
Why add the block to the BM directly before the incremental and full reports?

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
  Labels: BB2015-05-TBR
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch, 
 HDFS-7980.003.patch, HDFS-7980.004.patch, HDFS-7980.004.repost.patch


 In the current implementation the datanode will call the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousands of datanodes) and busy cluster it will slow down (by more than one 
 hour) the startup of the namenode.
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-05-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531238#comment-14531238
 ] 

Jesse Yates commented on HDFS-6440:
---

[~atm] thanks for the feedback. I'm working on rebasing on trunk and addressing 
your comments (hopefully a patch by tomorrow), but a couple of 
comments/questions first:

bq. Rolling upgrades/downgrades/rollbacks.

I'm not sure how we would test this when needing to change the structure of the 
FS to support more than 2 NNs. Would you recommend (1) recognizing the old 
layout and then (2) transferring it into the new layout? The reason this seems 
silly (to me) is that the layout is only enforced by the way the minicluster is 
used/setup, rather than the way things would actually be run. By moving things 
into the appropriate directories per-nn, but keeping everything else below that 
the same, I think we keep the same upgrade properties but don't need to do the 
above contrived/synthetic upgrade.

bq. What's a fresh cluster vs. a running cluster in this sense?

Maybe some salesforce terminology leak here. Fresh would be one where you 
just formatted the primary NN and are bootstrapping the other NNs from that 
layout. Running would be when bringing up an SNN after some sort of failure 
and it has an unformatted fs - then it can pull from any node in the cluster. 
As an SNN it would then be able to catch up by tailing the ANN.

I'll update the comment.

bq. is changing the value of FAILOVER_SEED going to do anything, given that 
it's only ever read at the static initialization of the failoverRandom?

Yes, it's for when there is an error and you want to run the exact sequence of 
failovers again in the test. Minor helper, but it can be useful when trying to 
track down ordering dependency issues (which there shouldn't be, but sometimes 
these things can creep in).
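A sketch of the replay idea ({{FAILOVER_SEED}} and {{failoverRandom}} are the 
names from the comment; the logging is an assumption):
{code}
// Log the seed so a failing run's exact failover sequence can be
// reproduced by hard-coding the printed value on the next run.
private static final long FAILOVER_SEED = System.currentTimeMillis();
private static final Random failoverRandom = new Random(FAILOVER_SEED);
static {
  LOG.info("Failover random seed: " + FAILOVER_SEED);
}
{code}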


Otherwise, everything else seems completely reasonable. Thanks!

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531091#comment-14531091
 ] 

Rakesh R commented on HDFS-8332:


Attached another patch that covers the following APIs in addition to 
{{listCacheDirectives}} and {{listCachePools}}. Please have a look at it. 
Thanks!
{code}
getBlockSize
getServerDefaults
reportBadBlocks
getBlockLocations
getBlockStorageLocations
createSymlink
setReplication
setStoragePolicy
getStoragePolicies
setSafeMode
refreshNodes
metaSave
setBalancerBandwidth
finalizeUpgrade
rollingUpgrade
getInotifyEventStream
getInotifyEventStream(x)
saveNamespace
rollEdits
restoreFailedStorage
getContentSummary
setQuota
setQuotaByStorageType
{code}

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead, these calls should 
 do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531086#comment-14531086
 ] 

Hadoop QA commented on HDFS-7980:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 15s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 176m 48s | Tests failed in hadoop-hdfs. |
| | | 219m 37s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730824/HDFS-7980.004.repost.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10833/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10833/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10833/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10833/console |


This message was automatically generated.

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
  Labels: BB2015-05-TBR
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch, 
 HDFS-7980.003.patch, HDFS-7980.004.patch, HDFS-7980.004.repost.patch


 In the current implementation the datanode will call the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousands of datanodes) and busy cluster it will slow down (by more than one 
 hour) the startup of the namenode.
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8310) Fix TestCLI.testAll 'help: help for find' on Windows

2015-05-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8310:
-
Summary: Fix TestCLI.testAll 'help: help for find' on Windows  (was: Fix 
TestCLI.testAll help: help for find on Windows)

 Fix TestCLI.testAll 'help: help for find' on Windows
 

 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-8310-001.patch, HDFS-8310-002.patch


 The test uses RegexpAcrossOutputComparator in a single regex, which does not 
 match on Windows as shown below.
 {code}
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(155)) - 
 ---
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(156)) - Test ID: [31]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(157)) -Test Description: 
 [help: help for find]
 2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(158)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(162)) -   Test Commands: 
 [-help find]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(166)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(173)) - 
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(177)) -  Comparator: 
 [RegexpAcrossOutputComparator]
 2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(179)) -  Comparision result:   
 [fail]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(181)) - Expected output:   
 [-find path \.\.\. expression \.\.\. :
   Finds all files that match the specified expression and
   applies selected actions to them\. If no path is specified
   then defaults to the current working directory\. If no
   expression is specified then defaults to -print\.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing\.
   If -iname is used then the match is case insensitive\.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions\. Returns
   true if both child expressions return true\. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified\. The second expression will not be
   applied if the first fails\.
 ]
 2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
 (CLITestHelper.java:displayResults(183)) -   Actual output:   
 [-find path ... expression ... :
   Finds all files that match the specified expression and
   applies selected actions to them. If no path is specified
   then defaults to the current working directory. If no
   expression is specified then defaults to -print.
   
   The following primary expressions are recognised:
 -name pattern
 -iname pattern
   Evaluates as true if the basename of the file matches the
   pattern using standard file system globbing.
   If -iname is used then the match is case insensitive.
   
 -print
 -print0
   Always evaluates to true. Causes the current pathname to be
   written to standard output followed by a newline. If the -print0
   expression is used then an ASCII NULL character is appended rather
   than a newline.
   
   The following operators are recognised:
 expression -a expression
 expression -and expression
 expression expression
   Logical AND operator for joining two expressions. Returns
   true if both child expressions return true. Implied by the
   juxtaposition of two expressions and so does not need to be
   explicitly specified. The second expression will not be
   applied if the first fails.
 ]
 {code} 
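 A plausible reading of the failure (not confirmed by the patch): the expected 
 pattern assumes {{\n}} line endings while Windows output contains {{\r\n}}. A 
 hedged sketch of a line-ending-tolerant comparison:
 {code}
 import java.util.regex.Pattern;

 // Normalize \r\n before matching the multi-line regex, so the same
 // expected pattern works on both Unix and Windows output.
 String normalized = actualOutput.replace("\r\n", "\n");
 boolean matched = Pattern.compile(expectedRegex, Pattern.DOTALL)
     .matcher(normalized)
     .find();
 {code}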



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530837#comment-14530837
 ] 

Chris Nauroth commented on HDFS-8332:
-

Hello [~rakeshr].  Nice find!  Thank you for providing a patch.  It looks good. 
 I verified locally that the tests pass, and I agree that the checkstyle 
warnings are unrelated.

We're very inconsistent about this logic in {{DFSClient}}, even looking beyond 
these 2 methods.  See below for a list of the methods I spotted that don't call 
{{checkOpen}}.

Are you interested in providing a patch that covers all of these, or are you 
specifically interested in patching just {{listCacheDirectives}} and 
{{listCachePools}}?  I'm fine with either approach.  If you just want to get the 
current patch committed, then I can file a separate jira for a comprehensive 
fix across all of these methods.  Please let me know how you'd like to proceed.

{code}
getBlockSize
getServerDefaults
reportBadBlocks
getLocatedBlocks
getBlockLocations
getBlockStorageLocations
createSymlink
getLinkTarget
setReplication
setStoragePolicy
getStoragePolicies
setSafeMode
listCacheDirectives
refreshNodes
metaSave
setBalancerBandwidth
finalizeUpgrade
getInotifyEventStream
{code}
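For reference, the pattern is small; a sketch against {{DFSClient}} (the 
{{checkOpen}} body matches the stack trace in the description, while the 
calling method is an approximation):
{code}
// Validate client state before any RPC; close() flips clientRunning.
void checkOpen() throws IOException {
  if (!clientRunning) {
    throw new IOException("Filesystem closed");
  }
}

public long getBlockSize(String f) throws IOException {
  checkOpen();  // fail fast once the filesystem has been closed
  return namenode.getPreferredBlockSize(f);
}
{code}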


 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead, these calls should 
 do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DistributedFileSystem listCacheDirectives() and listCachePools() API calls should check filesystem closed

2015-05-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531031#comment-14531031
 ] 

Rakesh R commented on HDFS-8332:


Thanks a lot [~cnauroth] for the comments and pointing out the list of APIs to 
be covered. I'll try to do the modifications as part of this jira.

 DistributedFileSystem listCacheDirectives() and listCachePools() API calls 
 should check filesystem closed
 -

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-TBR
 Attachments: HDFS-8332-000.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs 
 can be called even after the filesystem is closed. Instead, these calls should 
 do {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531057#comment-14531057
 ] 

Arpit Agarwal edited comment on HDFS-8157 at 5/6/15 6:01 PM:
-

Thanks for the review. This was a very preliminary patch. I'll post an updated 
patch when I get some more time to work on this. I might just post a 
consolidated patch for this and HDFS-8192.

bq. Maybe I am missing something, but I don't understand the purpose behind 
releaseRoundDown. Why would we round down to a page size when allocating or 
releasing memory?
For the common case when the finalized length is not a multiple of the page 
size. e.g. Initial reservation = 16KB, page size = 4KB. The block is finalized 
at 11KB. We want to release round_down(16 - 11) and not round up.
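The arithmetic, worked through (the helper is hypothetical, not the patch's 
API):
{code}
// roundDown(5KB, 4KB) = 4KB: release 4KB of the 16KB reservation and
// keep 12KB, i.e. the 11KB block rounded *up* to whole pages.
static long roundDown(long bytes, long pageSize) {
  return bytes - (bytes % pageSize);
}
{code}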


was (Author: arpitagarwal):
Thanks for the review. This was a very preliminary patch. I'll post an updated 
patch when I get some more time to work on this. I might just post a 
consolidated patch for this and HDFS-8192.

bq. Maybe I am missing something, but I don't understand the purpose behind 
releaseRoundDown. Why would we round down to a page size when allocating or 
releasing memory?
For the common case when the finalized length is not a multiple of the page 
size. e.g. Initial reservation = 16KB, page size = 4KB. A The block is 
finalized at 11KB. We want to release round_down(16 - 11) and not round up.

 Writes to RAM DISK reserve locked memory for block files
 

 Key: HDFS-8157
 URL: https://issues.apache.org/jira/browse/HDFS-8157
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
  Labels: BB2015-05-TBR
 Attachments: HDFS-8157.01.patch


 Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
 reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531057#comment-14531057
 ] 

Arpit Agarwal commented on HDFS-8157:
-

Thanks for the review. This was a very preliminary patch. I'll post an updated 
patch when I get some more time to work on this. I might just post a 
consolidated patch for this and HDFS-8192.

bq. Maybe I am missing something, but I don't understand the purpose behind 
releaseRoundDown. Why would we round down to a page size when allocating or 
releasing memory?
For the common case when the finalized length is not a multiple of the page 
size. e.g. Initial reservation = 16KB, page size = 4KB. The block is 
finalized at 11KB. We want to release round_down(16 - 11) and not round up.

 Writes to RAM DISK reserve locked memory for block files
 

 Key: HDFS-8157
 URL: https://issues.apache.org/jira/browse/HDFS-8157
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
  Labels: BB2015-05-TBR
 Attachments: HDFS-8157.01.patch


 Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
 reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530902#comment-14530902
 ] 

Hudson commented on HDFS-8305:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HDFS-8305: HDFS INotify: the destination field of RenameOp should always end 
with the file name (cmccabe) (cmccabe: rev 
fcd4cb751665adb241081e42b3403c3856b6c6fe)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.1

 Attachments: HDFS-8305.001.patch, HDFS-8305.002.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.  Previously, in some 
 cases when using the old rename, this was not the case.  The format of 
 OP_EDIT_LOG_RENAME_OLD allows moving /f to /d/f to be represented as 
 RENAME(src=/f, dst=/d) or RENAME(src=/f, dst=/d/f). This change makes HDFS 
 always use the latter form. This, in turn, ensures that inotify will always 
 be able to consider the dst field as the full destination file name. This is 
 a compatible change since we aren't removing the ability to handle the first 
 form during edit log replay... we just no longer generate it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

