[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-21 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966342#comment-14966342
 ] 

Walter Su commented on HDFS-9255:
-

The change in {{commitBlockSynchronization}} is to utilize 
{{DatanodeManager.getDatanodeStorageInfos(..)}}, which knows about 
{{DatanodeID.EMPTY_DATANODE_ID}}. In fact, though, {{EMPTY_DATANODE_ID}} is only 
used for striped files.
I meant to separate all the unrelated changes of HDFS-9173 into this issue. Sorry 
for the confusion; I still hope to include those changes here.

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch, HDFS-9255.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966343#comment-14966343
 ] 

Brahma Reddy Battula commented on HDFS-8647:


The {{TestRecoverStripedFile}} failure and the {{Pre-patch Findbugs warnings}} are 
unrelated to this patch. 

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have the namenode use an alternative block placement policy, 
> such as the upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about the rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that whenever we add a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That would allow 
> us to provide a new BlockPlacementPolicy without changing BlockManager.
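
To make the idea in the description above concrete, here is a rough, hedged sketch; 
every class and method name below is hypothetical and this is not the actual 
HDFS-8647 patch:

{code}
import java.util.Collection;

// Illustrative only: BlockManager delegates policy questions to the configured
// placement policy instead of hard-coding rack assumptions in helpers such as
// useDelHint / blockHasEnoughRacks. All names here are hypothetical.
abstract class PlacementPolicySketch {
  /** Does this set of replica locations (e.g. "rack/host" strings) satisfy the policy? */
  abstract boolean isPlacementSatisfied(Collection<String> locations);

  /** Pick the replica whose removal keeps the policy satisfied. */
  abstract String chooseReplicaToDelete(Collection<String> locations);
}

class BlockManagerSketch {
  private final PlacementPolicySketch policy; // injected via configuration

  BlockManagerSketch(PlacementPolicySketch policy) {
    this.policy = policy;
  }

  boolean blockPlacementOk(Collection<String> replicaLocations) {
    // No built-in rack logic any more; ask the policy object.
    return policy.isPlacementSatisfied(replicaLocations);
  }
}
{code}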



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966360#comment-14966360
 ] 

Hadoop QA commented on HDFS-8287:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 44s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 11s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 25s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | |  49m 18s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767729/HDFS-8287.15.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13103/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13103/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13103/console |


This message was automatically generated.

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch, HDFS-8287-HDFS-7285.08.patch, 
> HDFS-8287-HDFS-7285.09.patch, HDFS-8287-HDFS-7285.10.patch, 
> HDFS-8287-HDFS-7285.11.patch, HDFS-8287-HDFS-7285.WIP.patch, 
> HDFS-8287-performance-report.pdf, HDFS-8287.12.patch, HDFS-8287.13.patch, 
> HDFS-8287.14.patch, HDFS-8287.15.patch, h8287_20150911.patch, jstack-dump.txt
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue writing data until the parity write finishes.
> We should instead allow the user client to continue writing rather than 
> blocking it while parity is being written.
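
As a rough illustration of the direction described above (and only that: the class 
and method names below are hypothetical, not the real DFSStripedOutputStream code), 
the parity packets could be handed to a background executor so the data path does 
not block:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: when a striping cell fills up, queue the parity packets
// asynchronously instead of calling the blocking queueing path from writeChunk.
class ParityOffloadSketch {
  private final ExecutorService parityPool = Executors.newSingleThreadExecutor();

  void onCellFull(final byte[][] parityCells) {
    // The data write returns immediately; parity is queued in the background.
    parityPool.submit(new Runnable() {
      @Override
      public void run() {
        for (byte[] cell : parityCells) {
          enqueueParityPacket(cell); // the blocking call now runs off the writer thread
        }
      }
    });
  }

  private void enqueueParityPacket(byte[] cell) {
    // placeholder for a waitAndQueuePacket-style call
  }
}
{code}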



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7653) Block Readers and Writers used in both client side and datanode side

2015-10-21 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo resolved HDFS-7653.
-
  Resolution: Won't Fix
Release Note: The design and implementation have changed a lot since this 
issue was created. Due to the complexity of, and differences between, client-side 
and datanode-side read/write, we will not address this issue at the current stage.  

> Block Readers and Writers used in both client side and datanode side
> 
>
> Key: HDFS-7653
> URL: https://issues.apache.org/jira/browse/HDFS-7653
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: BlockReadersWriters.patch
>
>
> There are a lot of block read/write operations in HDFS-EC. For example, when a 
> client writes a file in the striping layout, the client has to write several 
> blocks to several different datanodes; if a datanode wants to do an 
> encoding/decoding task, it has to read several blocks from itself and other 
> datanodes, and write one or more blocks to itself or other datanodes.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7679) Erasure Coding: unifying common constructs like coding work, block reader and block writer across client and DataNode

2015-10-21 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo resolved HDFS-7679.
-
Resolution: Won't Fix

> Erasure Coding: unifying common constructs like coding work, block reader and 
> block writer across client and DataNode
> -
>
> Key: HDFS-7679
> URL: https://issues.apache.org/jira/browse/HDFS-7679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: ECEncodeDecodeFramework.patch
>
>
> Based on the work done, we will have similar constructs, such as coding work and 
> local/remote block readers/writers, on both the client and DataNode sides, so it's 
> possible to refactor the code further and unify these constructs to 
> eliminate possible duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7781) Use block erasure coder in client stripping

2015-10-21 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo resolved HDFS-7781.
-
Resolution: Won't Fix

> Use block erasure coder in client stripping
> ---
>
> Key: HDFS-7781
> URL: https://issues.apache.org/jira/browse/HDFS-7781
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Fix For: HDFS-7285
>
>
> In HDFS-7729, the raw erasure coder is used in order not to depend on the 
> {{ErasureCoder}} API defined in HDFS-7662 or even the {{ErasureCodec}} API 
> defined in HDFS-7337, since they're still upcoming.
> This is a follow-up issue to work on that when the high-level constructs are 
> available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966385#comment-14966385
 ] 

Hadoop QA commented on HDFS-9273:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 51s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 23s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 22s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |  51m  8s | Tests passed in hadoop-hdfs. 
|
| | |  98m  0s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767705/HDFS-9273.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13101/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13101/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13101/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13101/console |


This message was automatically generated.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-21 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966407#comment-14966407
 ] 

Masatake Iwasaki commented on HDFS-9226:


{code}
Mockito.doAnswer(new Answer() {
  @Override
  public DatanodeRegistration answer(InvocationOnMock invocation)
      throws Throwable {
    return (DatanodeRegistration) invocation.getArguments()[0];
  }
}).when(namenode).registerDatanode(Mockito.any(DatanodeRegistration.class));
{code}
[~arpitagarwal], this code, added to DataNodeTestUtils by HDFS-8953, seemed to 
make Mockito load whenever DataNodeTestUtils is loaded. Though DataNodeTestUtils 
has used Mockito for a long time, the JVM's on-demand class loading did not pull 
it in on the MiniDFSCluster code path before HDFS-8953 came in.

The test in HADOOP-12477 succeeded without the Mockito dependency after 
commenting out the code above.
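
For reference, a minimal, hypothetical sketch (not the HDFS code) of the usual way 
to keep an optional dependency like Mockito from being required by callers that 
never use it: isolate the dependent code in its own class, which the JVM typically 
only resolves when the mocking path is actually executed.

{code}
// All class names here are illustrative.
class WidelyUsedTestUtils {
  static boolean plainHelper() {
    // No Mockito types are referenced here, so callers of this method
    // do not need Mockito on the classpath.
    return true;
  }

  static Object spySomething() {
    // Mockito is only pulled in if and when someone calls this method.
    return MockingDelegate.mockIt();
  }
}

class MockingDelegate {
  static Object mockIt() {
    return org.mockito.Mockito.mock(Object.class);
  }
}
{code}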


> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDF

[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966416#comment-14966416
 ] 

Hadoop QA commented on HDFS-9241:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m  3s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 29s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 29s | The applied patch generated  
91 new checkstyle issues (total was 104, now 194). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 34s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |  50m 18s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 32s | Tests passed in 
hadoop-hdfs-client. |
| | | 102m 18s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767728/HDFS-9241.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13102/console |


This message was automatically generated.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
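
For context, a small sketch of the client-side use case being described, written 
against the public Configuration API (this is background illustration only, not 
part of the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ClientConfExample {
  public static void main(String[] args) {
    // What client apps have been doing: constructing HdfsConfiguration purely for
    // its side effect of registering hdfs-default.xml and hdfs-site.xml as
    // default resources, so namenode and security settings are picked up.
    Configuration conf = new HdfsConfiguration();

    // A rough equivalent without HdfsConfiguration (ordering and deprecation
    // handling may differ from what HdfsConfiguration does internally):
    Configuration.addDefaultResource("hdfs-default.xml");
    Configuration.addDefaultResource("hdfs-site.xml");
    Configuration manual = new Configuration();

    System.out.println(conf.get("fs.defaultFS") + " / " + manual.get("fs.defaultFS"));
  }
}
{code}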



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-21 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966437#comment-14966437
 ] 

Masatake Iwasaki commented on HDFS-9270:


Thanks, [~cmccabe].

> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2015-10-21 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8411:

Attachment: HDFS-8411-003.patch

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch
>
>
> This is a subtask of HDFS-7674. It calculates the amount of data that is 
> read locally or remotely to carry out decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9275) Fix TestRecoverStripedFile

2015-10-21 Thread Walter Su (JIRA)
Walter Su created HDFS-9275:
---

 Summary: Fix TestRecoverStripedFile
 Key: HDFS-9275
 URL: https://issues.apache.org/jira/browse/HDFS-9275
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2015-10-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-9276:
-
Component/s: security

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Reporter: Liangliang Gu
>
> The Scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
> This bug can be reproduced by the following program:
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> To reproduce the bug, please set the following configuration on the NameNode:
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> The bug will occur after 3 minutes.
> The stacktrace is:
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSys

[jira] [Moved] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2015-10-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HADOOP-12497 to HDFS-9276:
---

Component/s: (was: ha)
 (was: fs)
 ha
 fs
Key: HDFS-9276  (was: HADOOP-12497)
Project: Hadoop HDFS  (was: Hadoop Common)

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha
>Reporter: Liangliang Gu
>
> The Scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
> This bug can be reproduced by the following program:
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> To reproduce the bug, please set the following configuration on the NameNode:
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> The bug will occur after 3 minutes.
> The stacktrace is:
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1

[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2015-10-21 Thread Manjunath Ballur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966579#comment-14966579
 ] 

Manjunath Ballur commented on HDFS-336:
---

Hi Harsh, I am a newbie and would like to take this up. Do you think it still 
makes sense to have this feature?

> dfsadmin -report should report number of blocks from datanode
> -
>
> Key: HDFS-336
> URL: https://issues.apache.org/jira/browse/HDFS-336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lohit Vijayarenu
>Priority: Minor
>  Labels: newbie
>
> _hadoop dfsadmin -report_ seems to omit the number of blocks on a datanode. 
> The number of blocks hosted by a datanode is useful information that should be 
> included in the report. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966582#comment-14966582
 ] 

Hadoop QA commented on HDFS-9274:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |  49m 32s | Tests passed in hadoop-hdfs. 
|
| | |  89m 55s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767719/HDFS-9274.001.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 0c4af0f |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13104/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13104/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13104/console |


This message was automatically generated.

> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Attachments: HDFS-9274.001.patch
>
>
> The following error is always logged while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000, consistent with 
> DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT.
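
For illustration, the guard that produces the quoted error message presumably looks 
something like the following sketch (hypothetical names, not the actual 
DirectoryScanner code):

{code}
import org.apache.hadoop.conf.Configuration;

class ThrottleConfigSketch {
  static final String KEY = "dfs.datanode.directoryscan.throttle.limit.ms.per.sec";
  static final long DEFAULT = 1000L;

  static long readThrottle(Configuration conf) {
    long throttle = conf.getLong(KEY, DEFAULT);
    if (throttle < 1) {
      // With the current hdfs-default.xml value of 0, this branch always fires
      // and the error is logged on every run.
      System.err.println(KEY + " set to value below 1 ms/sec."
          + " Assuming default value of " + DEFAULT);
      throttle = DEFAULT;
    }
    return throttle;
  }
}
{code}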



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-10-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966585#comment-14966585
 ] 

Kai Sasaki commented on HDFS-8287:
--

[~rakeshr] I updated the patch. Could you check the current one? Thank you.

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch, HDFS-8287-HDFS-7285.08.patch, 
> HDFS-8287-HDFS-7285.09.patch, HDFS-8287-HDFS-7285.10.patch, 
> HDFS-8287-HDFS-7285.11.patch, HDFS-8287-HDFS-7285.WIP.patch, 
> HDFS-8287-performance-report.pdf, HDFS-8287.12.patch, HDFS-8287.13.patch, 
> HDFS-8287.14.patch, HDFS-8287.15.patch, h8287_20150911.patch, jstack-dump.txt
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue writing data until the parity write finishes.
> We should instead allow the user client to continue writing rather than 
> blocking it while parity is being written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966609#comment-14966609
 ] 

Steve Loughran commented on HDFS-9241:
--

I like this. I'll still have to move my code off DFSConfigKeys, which means 
some inlining of constants, but at least the critical issue, getting those 
hdfs-site and hdfs-default XML resources loaded, will be addressed. That's the 
thing that would have really caused problems.

I'll leave it to others to give a final vote, but I'm personally +1 here.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8836) Skip newline on empty files with getMerge -nl

2015-10-21 Thread Jan Filipiak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1495#comment-1495
 ] 

Jan Filipiak commented on HDFS-8836:


[~ajisakaa] thank you for rethinking this issue; the comments on the path you 
suggest make sense. I was always tempted to do something along the lines of: 

{code}
delimiter = cf.getOpt("skip-empty-file") ? "" : "\n";
{code}

But I don't have very strong opinions about how this should look in the end. 
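
For what it's worth, here is a standalone sketch of the file-size check mentioned 
in the description below; it uses the plain FileSystem API and is not the actual 
FsShell getmerge code:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class MergeSketch {
  /** Concatenate files, adding the newline only after files that are non-empty. */
  static void merge(FileSystem fs, Path[] srcs, OutputStream out, Configuration conf)
      throws IOException {
    for (Path src : srcs) {
      FileStatus stat = fs.getFileStatus(src);
      try (InputStream in = fs.open(src)) {
        IOUtils.copyBytes(in, out, conf, false);
      }
      if (stat.getLen() > 0) {
        out.write('\n'); // skip the delimiter for empty files
      }
    }
  }
}
{code}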

> Skip newline on empty files with getMerge -nl
> -
>
> Key: HDFS-8836
> URL: https://issues.apache.org/jira/browse/HDFS-8836
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.1
>Reporter: Jan Filipiak
>Assignee: Kanaka Kumar Avvaru
>Priority: Trivial
> Attachments: HDFS-8836-01.patch, HDFS-8836-02.patch, 
> HDFS-8836-03.patch, HDFS-8836-04.patch, HDFS-8836-05.patch
>
>
> Hello everyone,
> I recently needed to use the newline option -nl with getMerge 
> because the files I needed to merge simply didn't have one. I was merging all 
> the files from one directory, and unfortunately this directory also included 
> empty files, which effectively led to multiple newlines appended after some 
> files. I needed to remove them manually afterwards.
> In this situation it would be good to have another argument that allows 
> skipping empty files.
> One thing to note when implementing this feature:
> the call IOUtils.copyBytes(in, out, getConf(), false); doesn't
> return the number of bytes copied, which would be convenient, as one could 
> skip appending the newline when 0 bytes were copied; alternatively, one could 
> check the file size beforehand.
> I posted this idea on the mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201507.mbox/%3C55B25140.3060005%40trivago.com%3E
>  but I didn't really get many responses, so I thought I might try this way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8836) Skip newline on empty files with getMerge -nl

2015-10-21 Thread Jan Filipiak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1497#comment-1497
 ] 

Jan Filipiak commented on HDFS-8836:


sorry for that many typos

> Skip newline on empty files with getMerge -nl
> -
>
> Key: HDFS-8836
> URL: https://issues.apache.org/jira/browse/HDFS-8836
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.1
>Reporter: Jan Filipiak
>Assignee: Kanaka Kumar Avvaru
>Priority: Trivial
> Attachments: HDFS-8836-01.patch, HDFS-8836-02.patch, 
> HDFS-8836-03.patch, HDFS-8836-04.patch, HDFS-8836-05.patch
>
>
> Hello everyone,
> I recently needed to use the newline option -nl with getMerge 
> because the files I needed to merge simply didn't have one. I was merging all 
> the files from one directory, and unfortunately this directory also included 
> empty files, which effectively led to multiple newlines appended after some 
> files. I needed to remove them manually afterwards.
> In this situation it would be good to have another argument that allows 
> skipping empty files.
> One thing to note when implementing this feature:
> the call IOUtils.copyBytes(in, out, getConf(), false); doesn't
> return the number of bytes copied, which would be convenient, as one could 
> skip appending the newline when 0 bytes were copied; alternatively, one could 
> check the file size beforehand.
> I posted this idea on the mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201507.mbox/%3C55B25140.3060005%40trivago.com%3E
>  but I didn't really get many responses, so I thought I might try this way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-21 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9229:
-
Attachment: HDFS-9229.003.patch

Thanks [~zhz] for the review.
I attached an updated patch; please review.

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch, 
> HDFS-9229.003.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9275) Fix TestRecoverStripedFile

2015-10-21 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9275:

Status: Open  (was: Patch Available)

> Fix TestRecoverStripedFile
> --
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9275.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9275) Fix TestRecoverStripedFile

2015-10-21 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9275:

Attachment: HDFS-9275.01.patch

> Fix TestRecoverStripedFile
> --
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9275.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9275) Fix TestRecoverStripedFile

2015-10-21 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9275:

Status: Patch Available  (was: Open)

> Fix TestRecoverStripedFile
> --
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9275.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9275) Fix TestRecoverStripedFile

2015-10-21 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9275:

Component/s: test

> Fix TestRecoverStripedFile
> --
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9275.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9277) IOException "Unable to load OAuth2 connection factory." in TestWebHDFSOAuth2.listStatusReturnsAsExpected

2015-10-21 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9277:
-

 Summary: IOException "Unable to load OAuth2 connection factory." 
in TestWebHDFSOAuth2.listStatusReturnsAsExpected
 Key: HDFS-9277
 URL: https://issues.apache.org/jira/browse/HDFS-9277
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


This test has been failing consistently in Hadoop-hdfs-trunk and 
Hadoop-hdfs-trunk-Java8 since September 22.

REGRESSION:  
org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2.listStatusReturnsAsExpected

Error Message:
Unable to load OAuth2 connection factory.

Stack Trace:
java.io.IOException: Unable to load OAuth2 connection factory.
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:164)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:81)
at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:215)
at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:135)
at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.newOAuth2URLConnectionFactory(URLConnectionFactory.java:110)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:158)
at 
org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2.listStatusReturnsAsExpected(TestWebHDFSOAuth2.java:147)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9117:
-
Attachment: (was: HDFS-9117.HDFS-8707.empty.patch)

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9117:
-
Attachment: HDFS-9117.HDFS-8707.004.patch

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8660) Slow write to packet mirror should log which mirror and which block

2015-10-21 Thread Hazem Mahmoud (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hazem Mahmoud updated HDFS-8660:

Status: Patch Available  (was: Open)

> Slow write to packet mirror should log which mirror and which block
> ---
>
> Key: HDFS-8660
> URL: https://issues.apache.org/jira/browse/HDFS-8660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Hazem Mahmoud
>Assignee: Hazem Mahmoud
>
> Currently, log format states something similar to: 
> "Slow BlockReceiver write packet to mirror took 468ms (threshold=300ms)"
> For troubleshooting purposes, it would be good to have it mention which block 
> ID it's writing as well as the mirror (DN) that it's writing it to.
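
Purely as an illustration of the requested message (the variable names below are 
hypothetical, not the real BlockReceiver fields), the enriched log line could look 
roughly like this:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of the enriched message only.
class SlowMirrorLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger(SlowMirrorLogSketch.class);

  void logSlowWrite(long durationMs, long thresholdMs, String blockId, String mirrorAddr) {
    LOG.warn("Slow BlockReceiver write packet to mirror took {}ms (threshold={}ms),"
        + " block={}, downstream mirror={}", durationMs, thresholdMs, blockId, mirrorAddr);
  }
}
{code}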



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8660) Slow write to packet mirror should log which mirror and which block

2015-10-21 Thread Hazem Mahmoud (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hazem Mahmoud updated HDFS-8660:

Attachment: HDFS-8660.001.patch

> Slow write to packet mirror should log which mirror and which block
> ---
>
> Key: HDFS-8660
> URL: https://issues.apache.org/jira/browse/HDFS-8660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Hazem Mahmoud
>Assignee: Hazem Mahmoud
> Attachments: HDFS-8660.001.patch
>
>
> Currently, log format states something similar to: 
> "Slow BlockReceiver write packet to mirror took 468ms (threshold=300ms)"
> For troubleshooting purposes, it would be good to have it mention which block 
> ID it's writing as well as the mirror (DN) that it's writing it to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8660) Slow write to packet mirror should log which mirror and which block

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966779#comment-14966779
 ] 

Hadoop QA commented on HDFS-8660:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767780/HDFS-8660.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13107/console |


This message was automatically generated.

> Slow write to packet mirror should log which mirror and which block
> ---
>
> Key: HDFS-8660
> URL: https://issues.apache.org/jira/browse/HDFS-8660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Hazem Mahmoud
>Assignee: Hazem Mahmoud
> Attachments: HDFS-8660.001.patch
>
>
> Currently, log format states something similar to: 
> "Slow BlockReceiver write packet to mirror took 468ms (threshold=300ms)"
> For troubleshooting purposes, it would be good to have it mention which block 
> ID it's writing as well as the mirror (DN) that it's writing it to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966780#comment-14966780
 ] 

Hadoop QA commented on HDFS-9117:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m  3s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 53s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/1276/HDFS-9117.HDFS-8707.004.patch
 |
| Optional Tests | javac unit |
| git revision | HDFS-8707 / ea310d7 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13106/console |


This message was automatically generated.

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch
>
>
> For environmental compatability with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966846#comment-14966846
 ] 

Hadoop QA commented on HDFS-9229:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  26m 43s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m 14s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 56s | The applied patch generated  1 
new checkstyle issues (total was 421, now 421). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 45s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   8m 33s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  63m 17s | Tests failed in hadoop-hdfs. |
| | | 132m  8s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767769/HDFS-9229.003.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13105/console |


This message was automatically generated.

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch, 
> HDFS-9229.003.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8631) WebHDFS : Support list/setQuota

2015-10-21 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967254#comment-14967254
 ] 

Surendra Singh Lilhore commented on HDFS-8631:
--

Thanks  [~andreina] for review...

I have one doubt...

{quote}
Both in NamespaceQuotaParam and in StoragespaceQuotaParam, can update below as 
default value:
public static final String DEFAULT = 
String.valueOf(HdfsConstants.QUOTA_DONT_SET);
instead of 
public static final String DEFAULT = String.valueOf(Long.MAX_VALUE);
{quote}

I am not using {{String.valueOf(Long.MAX_VALUE)}} anywhere in my patch. If you 
are talking about
{code}public static final String DEFAULT = "9223372036854775807";{code}

We can't use an expression as the default value for an annotation attribute. If 
we do, we will get a compilation error like this:
{code}The value for annotation attribute DefaultValue.value must be a constant 
expression{code}
So I used "9223372036854775807" as default value and this is same as 
{{QUOTA_DONT_SET}} and {{Long.MAX_VALUE}}.

Please can you check once?
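
To make the constraint concrete, here is a small self-contained illustration. The {{DefaultValue}} annotation below is a stand-in for the real JAX-RS one, and the class name is a placeholder; this is an assumption-based sketch, not the patch itself.

{code:java}
// DefaultValue here is a stand-in for the JAX-RS annotation used by WebHDFS params.
@interface DefaultValue {
  String value();
}

class QuotaParamExample {
  // A static final String initialized with a literal is a compile-time constant,
  // so it is legal as an annotation value.
  static final String DEFAULT = "9223372036854775807"; // same value as Long.MAX_VALUE

  void setQuota(@DefaultValue(DEFAULT) String namespaceQuota) {
    // no-op; only the annotation usage matters for this illustration
  }

  // Would not compile: a method call is not a constant expression.
  // void broken(@DefaultValue(String.valueOf(Long.MAX_VALUE)) String quota) { }
}
{code}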

> WebHDFS : Support list/setQuota
> ---
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8631-001.patch
>
>
> User is able do quota management from filesystem object. Same operation can 
> be allowed trough REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967293#comment-14967293
 ] 

Xiao Chen commented on HDFS-9273:
-

The Findbugs warning is not relevant.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967295#comment-14967295
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8678 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8678/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8647:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 on the latest patch. I have committed it to trunk and branch-2. Thanks 
[~brahmareddy] for the great contribution; [~walter.k.su] for the initial 
patch; [~vinayrpet] and [~andrew.wang] for the suggestion and code review.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967343#comment-14967343
 ] 

Brahma Reddy Battula commented on HDFS-8647:


[~mingma] thanks for the prompt reviews and the commit, and thanks to the others 
for the additional reviews and suggestions.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9207:
-
Attachment: (was: HDFS-9207.HDFS-8707.empty.patch)

> Move the implementation to the hdfs-native-client module
> 
>
> Key: HDFS-9207
> URL: https://issues.apache.org/jira/browse/HDFS-9207
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9207.000.patch
>
>
> The implementation of libhdfspp should be moved to the new hdfs-native-client 
> module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-21 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967395#comment-14967395
 ] 

James Clampffer commented on HDFS-9117:
---

Some Feedback:

More comments would be nice, particularly about how 
Configuration::SubstituteVars works at a high level.

Is an empty string sufficient to indicate failure for get_homedir() and 
get_username()?  Maybe add a comment making that explicit.

I think file_exists should check S_IFREG on buffer.st_mode to make sure it's a 
regular file.  Otherwise it's possible for Configuration::AddFileResource to 
fail before checking all search paths if the absolute path turns out to be a 
directory, as unlikely as that may be.

Should str_to_bool trim left and right whitespace?  Not sure if the xml parser 
handles that.

Otherwise it looks good to me.



> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch
>
>
> For environmental compatability with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9278) Typo in OIV

2015-10-21 Thread Nicole Pazmany (JIRA)
Nicole Pazmany created HDFS-9278:


 Summary: Typo in OIV
 Key: HDFS-9278
 URL: https://issues.apache.org/jira/browse/HDFS-9278
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Nicole Pazmany
Assignee: Nicole Pazmany
Priority: Trivial


I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9278) Typo in OIV

2015-10-21 Thread Nicole Pazmany (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicole Pazmany updated HDFS-9278:
-
Attachment: HDFS-9278.001.patch

> Typo in OIV
> ---
>
> Key: HDFS-9278
> URL: https://issues.apache.org/jira/browse/HDFS-9278
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Nicole Pazmany
>Assignee: Nicole Pazmany
>Priority: Trivial
> Attachments: HDFS-9278.001.patch
>
>
> I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9278) Typo in OIV

2015-10-21 Thread Nicole Pazmany (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicole Pazmany updated HDFS-9278:
-
Status: Patch Available  (was: Open)

> Typo in OIV
> ---
>
> Key: HDFS-9278
> URL: https://issues.apache.org/jira/browse/HDFS-9278
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Nicole Pazmany
>Assignee: Nicole Pazmany
>Priority: Trivial
> Attachments: HDFS-9278.001.patch
>
>
> I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9278) Typo in OIV

2015-10-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967454#comment-14967454
 ] 

Daniel Templeton commented on HDFS-9278:


Looks good to me.  +1 (non-binding)

> Typo in OIV
> ---
>
> Key: HDFS-9278
> URL: https://issues.apache.org/jira/browse/HDFS-9278
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Nicole Pazmany
>Assignee: Nicole Pazmany
>Priority: Trivial
> Attachments: HDFS-9278.001.patch
>
>
> I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9279) Decomissioned capacity should not be considered for configured/used capacity

2015-10-21 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created HDFS-9279:
-

 Summary: Decomissioned capacity should not be considered for 
configured/used capacity
 Key: HDFS-9279
 URL: https://issues.apache.org/jira/browse/HDFS-9279
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.1
Reporter: Kuhu Shukla
Assignee: Kuhu Shukla


The capacity of a decommissioned node is still counted in the configured and used 
capacity metrics. This gives an incorrect perception of cluster usage.
Once a node is decommissioned, its capacity should be treated like that of a 
dead node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967470#comment-14967470
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #577 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/577/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-9245:
-
Component/s: nfs

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch
>
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by writing a filter rule in the 
> {{findbugsExcludeFile.xml}} file.
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2"
>     instanceOccurrenceNum="0" priority="2" abbrev="IS"
>     type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
>   <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe"
>     instanceOccurrenceNum="0" priority="2" abbrev="IS"
>     type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
>   <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   <Field name="originalCount" primary="true" signature="I">
>     <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>         sourcefile="WriteCtx.java">
>       <Message>In WriteCtx.java</Message>
>     </SourceLine>
>     <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
>   </Field>
> </BugInstance>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9278) Fix preferredBlockSize typo in OIV XML output

2015-10-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9278:
--
Hadoop Flags: Incompatible change
Release Note: The preferred block size XML element has been corrected 
from "<perferredBlockSize>" to "<preferredBlockSize>".
 Summary: Fix preferredBlockSize typo in OIV XML output  (was: Typo 
in OIV)
Target Version/s: 3.0.0

LGTM, targeting this for 3.0 since this is a user visible change. Will commit 
when Jenkins comes back clean, thanks for fixing this Nicole!

> Fix preferredBlockSize typo in OIV XML output
> -
>
> Key: HDFS-9278
> URL: https://issues.apache.org/jira/browse/HDFS-9278
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Nicole Pazmany
>Assignee: Nicole Pazmany
>Priority: Trivial
> Attachments: HDFS-9278.001.patch
>
>
> I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.004.patch

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch, HDFS-9231.004.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9236) Missing sanity check for block size during block recovery

2015-10-21 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967513#comment-14967513
 ] 

Tony Wu commented on HDFS-9236:
---

Hi [~yzhangal],

Could you take another look at the updated patch?

Thanks,
Tony

> Missing sanity check for block size during block recovery
> -
>
> Key: HDFS-9236
> URL: https://issues.apache.org/jira/browse/HDFS-9236
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HDFS-9236.001.patch, HDFS-9236.002.patch, 
> HDFS-9236.003.patch
>
>
> Ran into an issue while running test against faulty data-node code. 
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>  List<BlockRecord> syncList) throws IOException {
> …
> // Calculate the best available replica state.
> ReplicaState bestState = ReplicaState.RWR;
> …
> // Calculate list of nodes that will participate in the recovery
> // and the new block size
> List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
> final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
> -1, recoveryId);
> switch(bestState) {
> …
> case RBW:
> case RWR:
>   long minLength = Long.MAX_VALUE;
>   for(BlockRecord r : syncList) {
> ReplicaState rState = r.rInfo.getOriginalReplicaState();
> if(rState == bestState) {
>   minLength = Math.min(minLength, r.rInfo.getNumBytes());
>   participatingList.add(r);
> }
>   }
>   newBlock.setNumBytes(minLength);
>   break;
> …
> }
> …
> nn.commitBlockSynchronization(block,
> newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
> datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above 
> case, it is possible for none of the rState (reported by DNs with copies of 
> the replica being recovered) to match the bestState. This can either be 
> caused by faulty DN code or stale/modified/corrupted files on the DN. When this 
> happens, the DN will end up reporting a minLength of Long.MAX_VALUE.
> Unfortunately there is no check on the NN for replica length. See 
> FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>   long newgenerationstamp, long newlength,
>   boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>   String[] newtargetstorages) throws IOException {
> …
>   if (deleteblock) {
> Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
> boolean remove = iFile.removeLastBlock(blockToDel) != null;
> if (remove) {
>   blockManager.removeBlock(storedBlock);
> }
>   } else {
> // update last block
> if(!copyTruncate) {
>   storedBlock.setGenerationStamp(newgenerationstamp);
> 
>   // XXX block length is updated without any check <<<
>   storedBlock.setNumBytes(newlength);
> }
> …
> if (closeFile) {
>   LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>   + ", file=" + src
>   + (copyTruncate ? ", newBlock=" + truncatedBlock
>   : ", newgenerationstamp=" + newgenerationstamp)
>   + ", newlength=" + newlength
>   + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
> } else {
>   LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
> }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent 
> block report (even with correct length) will cause the block to be marked as 
> corrupted. Since this could be the last block of the file, if this happens and 
> the client goes away, the NN won't be able to recover the lease and close the 
> file because the last block is under-replicated.
> I believe we need to have a sanity check for block size on both DN and NN to 
> prevent such case from happening.
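
For illustration, the kind of length sanity check being argued for might look roughly like the sketch below. This is a placeholder sketch under assumed names and an assumed bound; the actual fix in the patches may look different.

{code:java}
// Placeholder sketch; names and the exact bound are assumptions, not the real fix.
class BlockLengthSanityCheck {
  static void checkRecoveredLength(long newLength, long maxExpectedLength) {
    // A length such as Long.MAX_VALUE indicates that no replica matched the
    // best state on the coordinating DataNode, so reject it before it is
    // applied to the stored block.
    if (newLength < 0 || newLength > maxExpectedLength) {
      throw new IllegalArgumentException("Unexpected block length " + newLength
          + " in commitBlockSynchronization (expected at most "
          + maxExpectedLength + ")");
    }
  }
}
{code}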



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967515#comment-14967515
 ] 

Haohui Mai commented on HDFS-8766:
--

Thanks for the comments, Bob.

bq. I'm unsure if you see all of them as show-stoppers and some as small nits; 
it would be nice if you could clarify that when we have the narrow band of Jira 
to communicate.

The earlier comments are fairly mechanical. They usually come from Jenkins when 
Jenkins is working. We place a strong emphasis on minimizing code duplication 
and on consistent coding style so that other people in the community can 
participate. I see the above comments as the bar to clear before the patch is 
ready to be reviewed.

For 1a and 1b, please refer to HDFS-9253.

bq. As we discussed, we will be losing a valuable debugging aid that has sussed 
out issues in integration testing already. I don't think we should ship with 
this in-place, but I would like to keep it for the short term. Again, let's 
make a "Before we merge with trunk" Jira to capture that so we can both reap 
the benefits of it during the high-risk portion of our development and not 
merge it into trunk.

While dev branches are usually fast moving, we have the same level of 
expectations on quality control. We also try to maintain a very clear commit 
history so that it is easy to merge the code back to trunk. That being said, 
it is very undesirable to commit macros and debugging code into the repository. 
However, the git workflows should allow you to keep this code locally and 
apply it when it is required.

We also place a strong emphasis on keeping patches small and independent so 
that they can be easily reviewed and committed. This allows faster development 
in the dev branch.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch, HDFS-8766.HDFS-8707.006.patch, 
> HDFS-8766.HDFS-8707.007.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-21 Thread Nemanja Matkovic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967519#comment-14967519
 ] 

Nemanja Matkovic commented on HDFS-9266:


All these failures are obviously flaky tests, as they didn't fail with the same 
code in patch.1 (the only difference between the patches is disabling two test 
cases when running in IPv4-only mode).
I ran all of these locally again to confirm (except the StripedFile ones, which 
are busted due to the head position) and they all passed, so I think we are good 
to commit this into the branch.

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch, 
> HDFS-9266-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Patch Available  (was: Open)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch, HDFS-9231.004.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967525#comment-14967525
 ] 

Xiao Chen commented on HDFS-9231:
-

Patch 004 is attached. I think I should re-summarize it below:

- fsck from the command line with {{-includeSnapshots}} will also show the full 
snapshot dir of the affected files
- fsck from the command line without {{-includeSnapshots}}: behavior unchanged
- The NameNode WebUI's way of showing corrupted files/blocks is unchanged.
- Added a sentence in the NN WebUI to hint the admin to run fsck with 
{{-includeSnapshots}} if there are snapshots present in the system.
- Some refactoring to reuse existing code in the new methods 
{{getSnapshottableDirs}} and {{ListCorruptFileBlocksWithSnapshot}} (see the 
sketch after this list)
- The reasoning for keeping the change to the NN WebUI and to fsck without 
{{-includeSnapshots}} minimal is that getting all possible snapshots may be 
slow, since the number of snapshots is user configured.
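
For reference, a hedged sketch of the kind of lookup {{getSnapshottableDirs}} performs. The actual helper in the patch lives in the fsck code and may differ; this sketch only shows the equivalent lookup through the public client API, which is an assumption for illustration.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

class SnapshottableDirsSketch {
  // Collects the full paths of all snapshottable directories visible to the caller.
  static List<String> getSnapshottableDirs(DistributedFileSystem dfs)
      throws IOException {
    List<String> dirs = new ArrayList<>();
    SnapshottableDirectoryStatus[] statuses = dfs.getSnapshottableDirListing();
    if (statuses != null) {
      for (SnapshottableDirectoryStatus status : statuses) {
        dirs.add(status.getFullPath().toString());
      }
    }
    return dirs;
  }
}
{code}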

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch, HDFS-9231.004.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967539#comment-14967539
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1298 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1298/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967548#comment-14967548
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #563 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/563/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967551#comment-14967551
 ] 

Rakesh R commented on HDFS-9229:


Thanks [~surendrasingh], latest patch looks good to me.

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch, 
> HDFS-9229.003.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967575#comment-14967575
 ] 

Haohui Mai commented on HDFS-9229:
--

The metric needs to live in a different place than {{NameNodeMXBean}}. 
All other metrics in that interface do not involve I/O and are polled frequently.

This metric requires an unbounded amount of I/O time, which can break many 
monitoring applications.
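
To make the concern concrete: computing the metric amounts to recursively walking the NameNode storage directory on local disk, roughly like the generic sketch below (this is a generic illustration, not the patch itself), and that I/O happens on every poll.

{code:java}
import java.io.File;

class NameDirSizeSketch {
  // Recursively sums the sizes of files under dir. The cost grows with the
  // number of files in the NN storage directory, so calling this from a
  // frequently polled bean blocks the caller on disk I/O.
  static long directorySize(File dir) {
    long size = 0;
    File[] children = dir.listFiles();
    if (children != null) {
      for (File child : children) {
        size += child.isDirectory() ? directorySize(child) : child.length();
      }
    }
    return size;
  }
}
{code}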

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch, 
> HDFS-9229.003.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9272) Implement a unix-like cat utility

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967578#comment-14967578
 ] 

Haohui Mai commented on HDFS-9272:
--

I think it makes more sense to just reuse 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_read.c

Please see HDFS-9253 for more details.





> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
> Attachments: HDFS-9272.HDFS-8707.000.patch
>
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> 2 Reasons for this:
> We don't have any real integration tests at the moment so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967592#comment-14967592
 ] 

Haohui Mai commented on HDFS-9144:
--

bq. Stateful quasi-posix API that will be familiar and easy to consume

Can you elaborate on what you mean by familiar and easy to consume? Are you 
talking about the filesystem API in C++1y 
(https://msdn.microsoft.com/en-us/library/hh874694.aspx)?

There's no reference to RPCEngine in DataNodeConnection.

I think the plan might be too grand to execute. It needs to be broken down. 
Maybe it would be better to start with implementing AsyncReadBlockOperation and 
its cancel() method?




> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967597#comment-14967597
 ] 

Haohui Mai commented on HDFS-9241:
--

{code}
+  String  DFS_NAMENODE_BACKUP_ADDRESS_KEY = "dfs.namenode.backup.address";
+  String  DFS_NAMENODE_BACKUP_HTTP_ADDRESS_KEY = 
"dfs.namenode.backup.http-address";
...
{code}

It makes sense to wrap all these strings into an interface to explicitly state 
that they are deprecated keys. For example:

{code}
/**
 * Some comments here
 **/
interface DeprecatedKeys {
  String ...
}
{code}
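
A hedged sketch of what such an interface could look like for the two keys quoted above; the interface name and javadoc wording are placeholders, not a proposal for the actual names.

{code:java}
/**
 * Keys kept only for backwards compatibility of configuration files;
 * new code should not reference them.
 */
interface HdfsClientDeprecatedKeys {
  String DFS_NAMENODE_BACKUP_ADDRESS_KEY = "dfs.namenode.backup.address";
  String DFS_NAMENODE_BACKUP_HTTP_ADDRESS_KEY =
      "dfs.namenode.backup.http-address";
}
{code}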



> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for files in HDFS encryption zone

2015-10-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8831:
-
Attachment: HDFS-8831.00.patch

Attaching an initial patch for Jenkins.

> Trash Support for files in HDFS encryption zone
> ---
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files within the zone with the trash feature enabled, you 
> will get an error similar to the following:
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted appropriately to the same encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2015-10-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8831:
-
Summary: Trash Support for deletion in HDFS encryption zone  (was: Trash 
Support for files in HDFS encryption zone)

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files within the zone with the trash feature enabled, you 
> will get an error similar to the following:
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted appropriately to the same encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967602#comment-14967602
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2510/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-10-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967625#comment-14967625
 ] 

Jing Zhao commented on HDFS-7964:
-

Thanks for updating the patch, Daryn.

bq. It's ensuring correctness by preventing a deadlock with the background 
thread. IIRC, there is also a call synchronized on the edit log call that must 
know the current txid (rolling?) which isn't possible when async.

Currently I find only startSegment and endSegment holding the lock, and thus 
SyncEdit will be created for them. These two calls also require the sync 
semantically. Thus I'm wondering if we can change the condition 
{{!Thread.holdsLock(this)}} to {{if op is not start/endSegment}}. We can still 
add an assertion to make sure the lock is held when creating SyncEdits and not 
held when creating AsyncEdits. Did I miss something here?
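
A rough sketch of what I have in mind ({{SyncEdit}}/{{AsyncEdit}}/{{enqueueEdit}} follow the patch; the op-type check below is my assumption, not the patch's current logic):
{code}
// Sketch only: enqueue a SyncEdit for the start/end segment ops, which need the
// synchronous semantics, and an AsyncEdit for everything else. The asserts then
// document the locking expectation instead of using !Thread.holdsLock(this) as
// the trigger.
void logEdit(final FSEditLogOp op) {
  final Edit edit;
  if (op.opCode == FSEditLogOpCodes.OP_START_LOG_SEGMENT
      || op.opCode == FSEditLogOpCodes.OP_END_LOG_SEGMENT) {
    assert Thread.holdsLock(this) : "segment ops expect the edit log lock";
    edit = new SyncEdit(this, op);
  } else {
    assert !Thread.holdsLock(this) : "async edits must not hold the edit log lock";
    edit = new AsyncEdit(this, op);
  }
  enqueueEdit(edit);
}
{code}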

bq. Do you mean drain only as many edits from the pending queue as were present 
at the beginning of the cycle?

My original concern was mainly about the latency for a single request when 
there is not a lot of traffic to the NN. {{doSync = edit.logEdit()}} means the 
sync only happens when the buffer is full, so if requests keep coming 
into the pending queue (i.e., editPendingQ is always non-empty), the first 
request needs to wait for several extra iterations until the buffer is filled. 
But this extra latency is small, so the current code should be fine.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  LogEdit is 
> called while holding the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9278) Fix preferredBlockSize typo in OIV XML output

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967633#comment-14967633
 ] 

Hadoop QA commented on HDFS-9278:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m  6s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 24s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 26s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  52m  6s | Tests failed in hadoop-hdfs. |
| | |  98m  9s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767812/HDFS-9278.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e27c2ae |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13109/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13109/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13109/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13109/console |


This message was automatically generated.

> Fix preferredBlockSize typo in OIV XML output
> -
>
> Key: HDFS-9278
> URL: https://issues.apache.org/jira/browse/HDFS-9278
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Nicole Pazmany
>Assignee: Nicole Pazmany
>Priority: Trivial
> Attachments: HDFS-9278.001.patch
>
>
> I found a typo in the offline image viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967639#comment-14967639
 ] 

Jing Zhao commented on HDFS-9070:
-

Thanks for updating the patch [~demongaorui]. Also thanks for the review, 
[~andreina]. 
The latest patch also looks good to me. +1 pending Jenkins.

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch, 
> HDFS-9070-trunk.04.patch, HDFS-9070-trunk.05.patch, HDFS-9070-trunk.06.patch, 
> HDFS-9070-trunk.07.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the EC file being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967642#comment-14967642
 ] 

Haohui Mai commented on HDFS-9265:
--

LGTM +1

> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public std::enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); // no existing live shared_ptr
> In order to fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9265) InputStreamImpl should hold a shared_ptr of the BlockReader

2015-10-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9265:
-
Summary: InputStreamImpl should hold a shared_ptr of the BlockReader  (was: 
Use of undefined behavior in remote_block_reader causing deterministic crashes.)

> InputStreamImpl should hold a shared_ptr of the BlockReader
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public std::enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); // no existing live shared_ptr
> In order to fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9265) InputStreamImpl should hold a shared_ptr of the BlockReader

2015-10-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9265:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-8707
   Status: Resolved  (was: Patch Available)

I've committed the patch to the HDFS-8707 branch. Thanks [~James Clampffer] for 
the contribution.

> InputStreamImpl should hold a shared_ptr of the BlockReader
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Fix For: HDFS-8707
>
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public std::enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); // no existing live shared_ptr
> In order to fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9267) TestDiskError should get stored replicas through FsDatasetTestUtils.

2015-10-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9267:

Attachment: HDFS-9267.02.patch

Fix checkstyle warnings.

> TestDiskError should get stored replicas through FsDatasetTestUtils.
> 
>
> Key: HDFS-9267
> URL: https://issues.apache.org/jira/browse/HDFS-9267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-9267.00.patch, HDFS-9267.01.patch, 
> HDFS-9267.02.patch
>
>
> {{TestDiskError#testReplicationError}} scans local directories to verify 
> blocks and metadata files, which leaks the details of the {{FsDataset}} 
> implementation. 
> This JIRA will abstract the "scanning" operation to {{FsDatasetTestUtils}}.
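
A hypothetical shape for that abstraction (the method name and return type here are illustrative assumptions, not necessarily what the patch adds):
{code}
// Sketch: let tests ask the dataset for its stored replicas instead of
// scanning FsDataset's on-disk directory layout directly.
public interface FsDatasetTestUtils {
  /** Replicas the dataset currently stores for the given block pool. */
  Iterator<Replica> getStoredReplicas(String bpid) throws IOException;
}

// TestDiskError would then iterate replicas rather than listing files:
//   Iterator<Replica> it = utils.getStoredReplicas(bpid);
//   while (it.hasNext()) {
//     Replica r = it.next();
//     // verify r.getBlockId(), r.getNumBytes(), ...
//   }
{code}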



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9280) Document NFS gateway export point parameter

2015-10-21 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9280:
---

 Summary: Document NFS gateway export point parameter
 Key: HDFS-9280
 URL: https://issues.apache.org/jira/browse/HDFS-9280
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Priority: Trivial


We should document the {{nfs.export.point}} configuration parameter.
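
For reference, a minimal hdfs-site.xml snippet of the kind the documentation could show (the export path below is just an example value, not a recommended default):
{code}
<property>
  <name>nfs.export.point</name>
  <value>/data</value>
  <description>HDFS path exported by the NFS gateway; only this subtree is visible to NFS clients.</description>
</property>
{code}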



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967672#comment-14967672
 ] 

Arpit Agarwal commented on HDFS-9226:
-

Thanks [~iwasakims]. We should eliminate the new dependency, i.e. I'd be +1 for 
[~elserj]'s v4 patch. I see Josh also moved over the existing Mockito uses in 
{{DatanodeTestUtils}}, so this removes the current ambiguity.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorI

[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967726#comment-14967726
 ] 

Hadoop QA commented on HDFS-9231:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 10s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 30s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 35s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 34s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  51m  8s | Tests failed in hadoop-hdfs. |
| | |  94m 47s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
| Timed out tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767823/HDFS-9231.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e27c2ae |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13110/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13110/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13110/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13110/console |


This message was automatically generated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch, HDFS-9231.004.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967733#comment-14967733
 ] 

Chris Nauroth commented on HDFS-9273:
-

Hi [~xiaochen].  The patch looks good.  I think the test can be simplified a 
bit by calling the {{restart(fs, true)}} helper function, like some of the 
existing tests in the suite.
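
Something along these lines is what I had in mind (a rough sketch assuming the suite's existing {{fs}} field, the ACL test helpers, and the {{restart(fs, true)}} helper; not the exact patch code):
{code}
// Sketch: set an ACL on "/", save the namespace and restart the NN, then
// verify the root ACL survives being persisted to and reloaded from fsimage.
@Test
public void testRootAclAfterCheckpointAndRestart() throws Exception {
  Path root = new Path("/");
  List<AclEntry> aclSpec = Lists.newArrayList(
      aclEntry(ACCESS, USER, "foo", ALL));
  fs.setAcl(root, aclSpec);
  AclEntry[] before = fs.getAclStatus(root).getEntries().toArray(new AclEntry[0]);

  restart(fs, true);  // checkpoint (save namespace), then restart the NameNode

  AclEntry[] after = fs.getAclStatus(root).getEntries().toArray(new AclEntry[0]);
  assertArrayEquals(before, after);
}
{code}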

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9083) Replication violates block placement policy.

2015-10-21 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-9083:
-
Target Version/s: 2.7.2  (was: 2.8.0)

> Replication violates block placement policy.
> 
>
> Key: HDFS-9083
> URL: https://issues.apache.org/jira/browse/HDFS-9083
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, namenode
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Blocker
>
> Recently we have been noticing many cases in which all the replicas of a block 
> reside on the same rack.
> During block creation, the block placement policy was honored.
> But after node failures occurring in a specific pattern, the block ends up in 
> such a state.
> On investigating further, I found that BlockManager#blockHasEnoughRacks 
> depends on the config (net.topology.script.file.name)
> {noformat}
>  if (!this.shouldCheckForEnoughRacks) {
>   return true;
> }
> {noformat}
> We specify a DNSToSwitchMapping implementation (our own custom implementation) 
> via net.topology.node.switch.mapping.impl and no longer use the 
> net.topology.script.file.name config.
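
In other words, the check probably ought to key off the resolved topology itself rather than off whether the script config is set. A minimal sketch of that idea (assuming access to the {{NetworkTopology}} via the {{DatanodeManager}}; this is not the actual BlockManager code):
{code}
// Sketch: enforce the multi-rack invariant whenever the cluster map actually
// knows about more than one rack, so custom DNSToSwitchMapping implementations
// configured via net.topology.node.switch.mapping.impl are covered as well.
private boolean shouldCheckForEnoughRacks() {
  return datanodeManager.getNetworkTopology().getNumOfRacks() > 1;
}
{code}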



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2015-10-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8831:
-
Status: Patch Available  (was: Open)

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files within the zone with the trash feature enabled, you 
> will get an error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted appropriately to the same encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-21 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-8766:
--
Attachment: HDFS-8766.HDFS-8707.008.patch

Got rid of the macros, changed pread to Pread, got rid of hdfs.h, got rid of the 
test, and changed to unique_ptrs and proper RAII for the worker threads.

Switched back to the hdfs_internal/hdfsFile_internal names because of the forward 
declaration in hdfs.h.  Using macros to fix that would be uglier than just 
reusing those names.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch, HDFS-8766.HDFS-8707.006.patch, 
> HDFS-8766.HDFS-8707.007.patch, HDFS-8766.HDFS-8707.008.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Attachment: HDFS-9241.002.patch

Thanks for your review [~ste...@apache.org] and [~wheat9]. The v2 patch puts 
the deprecated keys in a nested interface of {{HdfsClientConfigKeys}}.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch, 
> HDFS-9241.002.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
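
For context, a minimal example of the client-side pattern that breaks here (just the usual way of making sure hdfs-default.xml and hdfs-site.xml are loaded before talking to the cluster):
{code}
// A downstream app builds its Configuration through HdfsConfiguration so the
// hdfs-default/hdfs-site resources are registered, then uses the normal
// FileSystem API. The URI and path below are placeholders.
Configuration conf = new HdfsConfiguration();
FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf);
try (FSDataInputStream in = fs.open(new Path("/tmp/example.txt"))) {
  // ... read data ...
}
{code}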



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967815#comment-14967815
 ] 

Xiao Chen commented on HDFS-9231:
-

The test failures look unrelated, and the tests pass locally.
Findbugs seems to be having a problem:
{noformat}
Exception in thread "main" java.io.FileNotFoundException: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/trunkFindbugsWarningshadoop-hdfs.xml
 (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at 
edu.umd.cs.findbugs.SortedBugCollection.progessMonitoredInputStream(SortedBugCollection.java:1231)
at 
edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:308)
at 
edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:295)
at edu.umd.cs.findbugs.workflow.Filter.main(Filter.java:712)
Pre-patch trunk findbugs is broken?
{noformat}

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch, HDFS-9231.004.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967837#comment-14967837
 ] 

Hadoop QA commented on HDFS-9070:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 25s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 27s | The applied patch generated  1 
new checkstyle issues (total was 118, now 111). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  51m 30s | Tests failed in hadoop-hdfs. |
| | |  98m 25s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767321/HDFS-9070-trunk.07.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d759b4b |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13111/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13111/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13111/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13111/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13111/console |


This message was automatically generated.

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch, 
> HDFS-9070-trunk.04.patch, HDFS-9070-trunk.05.patch, HDFS-9070-trunk.06.patch, 
> HDFS-9070-trunk.07.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the EC file being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9273:

Attachment: HDFS-9273.002.patch

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch, HDFS-9273.002.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9273:

Status: Open  (was: Patch Available)

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch, HDFS-9273.002.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9273:

Status: Patch Available  (was: Open)

Thanks [~cnauroth] for the review!
You are absolutely right. Attached patch 002 to address the comment.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch, HDFS-9273.002.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9280) Document NFS gateway export point parameter

2015-10-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9280:

Assignee: Xiao Chen

> Document NFS gateway export point parameter
> ---
>
> Key: HDFS-9280
> URL: https://issues.apache.org/jira/browse/HDFS-9280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Xiao Chen
>Priority: Trivial
>
> We should document the {{nfs.export.point}} configuration parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9273:

Hadoop Flags: Reviewed

Patch v002 looks great.  I verified that the new test fails before applying the 
bug fix and then passes after applying it.  +1, pending a fresh Jenkins run.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch, HDFS-9273.002.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967884#comment-14967884
 ] 

Hadoop QA commented on HDFS-8766:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 56s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 23s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767859/HDFS-8766.HDFS-8707.008.patch
 |
| Optional Tests | javac unit |
| git revision | HDFS-8707 / 6828dd5 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13113/console |


This message was automatically generated.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch, HDFS-8766.HDFS-8707.006.patch, 
> HDFS-8766.HDFS-8707.007.patch, HDFS-8766.HDFS-8707.008.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967891#comment-14967891
 ] 

Xiao Chen commented on HDFS-9273:
-

Thanks very much Chris for verifying the fix locally.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch, HDFS-9273.002.patch
>
>
> After restarting namenode, the ACLs on the root directory ("/") may be lost 
> if it's rolled over to fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967881#comment-14967881
 ] 

Haohui Mai commented on HDFS-9129:
--

Thanks for the work, [~liuml07]

{code}
+if (status == BMSafeModeStatus.OFF) {
+  return;
+}

+if (status == BMSafeModeStatus.OFF) {
+  return;
+}
{code}

There are multiple cases like the above. They should be asserts.

{code}
+  synchronized boolean isSafeModeTrackingBlocks() {
+return namesystem.isHaEnabled() && shouldIncrementallyTrackBlocks;
+  }
{code}

The code can be simplified. {{namesystem.isHaEnabled()}} does not change in the 
lifecycle of the process.

{code}
+  /**
+   * The transition trigger of the safe mode state machine.
+   *
+   * The state machine elaborates itself in code. Specially the start status is
+   * always INITIALIZED and the end status is OFF. There is no transition to
+   * INITIALIZED and no transition from OFF.
+   */
{code}

It's better to document the conditions of state transition.

{code}
+needExtension = extension > 0 &&
+(blockThreshold > 0 || datanodeThreshold > 0);
{code}

This can be moved under the THRESHOLD statement and become a local variable.

{code}
+  if (areThresholdsMet()) {
+if (needExtension) {
+  // THRESHOLD -> EXTENSION
+  status = BMSafeModeStatus.EXTENSION;
+  reached = monotonicNow();
+  NameNode.stateChangeLog.info("STATE* Safe mode extension entered. \n"
+  + getSafeModeTip());
+  smmthread = new Daemon(new SafeModeMonitor());
+  smmthread.start();
+} else {
+  // THRESHOLD -> OFF
+  leaveSafeMode();
+}
+  }
+  initializeReplQueuesIfNecessary();
+  break;
+case EXTENSION:
+  if (areThresholdsMet() && monotonicNow() - reached >= extension) {
+// EXTENSION -> OFF
+leaveSafeMode();
+  }
+  break;
{code}

{{initializeReplQueuesIfNecessary()}} should be called only once.

{code}
+  blockManager.leaveSafeMode();
+  safeModeStatus = SafeModeStatus.OFF;
+  startSecretManagerIfNecessary();
{code}

{{safeModeStatus = SafeModeStatus.OFF}} should be moved to 
{{BlockManagerSafeMode}}.

{code}
+  private class SafeModeMonitor implements Runnable {
+/** interval in msec for checking safe mode: {@value}. */
+private static final long RECHECK_INTERVAL = 1000;
+
+@Override
+public void run() {
+  while (namesystem.isRunning()) {
+try {
+  namesystem.writeLock();
+  if (status == BMSafeModeStatus.OFF) {
+// We're not in start up safe mode any more while monitoring.
+// Chances are we left safe mode manually before thresholds and
+// extension are reached.
+break;
+  }
+  checkSafeMode();
+  if (!isInSafeMode()) {
+smmthread = null;
+break;
+  }
+} finally {
+  namesystem.writeUnlock();
+}
+
+try {
+  Thread.sleep(RECHECK_INTERVAL);
+} catch (InterruptedException ignored) {
+}
+  }
+
+  if (!namesystem.isRunning()) {
+LOG.info("NameNode is being shutdown, exit SafeModeMonitor thread");
+  }
+}
+  }
{code}
A cleaner approach is to put the reached timestamp into the constructor of 
{{SafeModeMonitor()}}, and there is no need to hold the namesystem lock anymore.
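
To illustrate that last point, roughly (a sketch only; {{reached}}, {{extension}}, {{areThresholdsMet}} and {{leaveSafeMode}} follow the patch, the rest is assumption, and the locking inside those calls is elided):
{code}
// Sketch: the monitor receives the extension start time through its
// constructor, so it does not have to re-read "reached" from shared state
// under the namesystem lock on every iteration.
private class SafeModeMonitor implements Runnable {
  private static final long RECHECK_INTERVAL = 1000;
  private final long reached;  // monotonicNow() captured at THRESHOLD -> EXTENSION

  SafeModeMonitor(long reached) {
    this.reached = reached;
  }

  @Override
  public void run() {
    while (namesystem.isRunning() && isInSafeMode()) {
      if (areThresholdsMet() && monotonicNow() - reached >= extension) {
        leaveSafeMode();  // EXTENSION -> OFF
        break;
      }
      try {
        Thread.sleep(RECHECK_INTERVAL);
      } catch (InterruptedException ignored) {
      }
    }
  }
}
{code}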


> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch, HDFS-9129.003.patch, HDFS-9129.004.patch, 
> HDFS-9129.005.patch, HDFS-9129.006.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967881#comment-14967881
 ] 

Haohui Mai edited comment on HDFS-9129 at 10/21/15 8:59 PM:


Thanks for the work, [~liuml07]

{code}
+if (status == BMSafeModeStatus.OFF) {
+  return;
+}

+if (status == BMSafeModeStatus.OFF) {
+  return;
+}
{code}

There are multiple cases like the above. They should be asserts.

{code}
+  synchronized boolean isSafeModeTrackingBlocks() {
+return namesystem.isHaEnabled() && shouldIncrementallyTrackBlocks;
+  }
{code}

The code can be simplified. {{namesystem.isHaEnabled()}} does not change in the 
lifecycle of the process.

{code}
+  /**
+   * The transition trigger of the safe mode state machine.
+   *
+   * The state machine elaborates itself in code. Specially the start status is
+   * always INITIALIZED and the end status is OFF. There is no transition to
+   * INITIALIZED and no transition from OFF.
+   */
{code}

It's better to document the conditions of state transition.

{code}
+needExtension = extension > 0 &&
+(blockThreshold > 0 || datanodeThreshold > 0);
{code}

This can be moved under the THRESHOLD statement and become a local variable.

{code}
+  if (areThresholdsMet()) {
+if (needExtension) {
+  // THRESHOLD -> EXTENSION
+  status = BMSafeModeStatus.EXTENSION;
+  reached = monotonicNow();
+  NameNode.stateChangeLog.info("STATE* Safe mode extension entered. \n"
+  + getSafeModeTip());
+  smmthread = new Daemon(new SafeModeMonitor());
+  smmthread.start();
+} else {
+  // THRESHOLD -> OFF
+  leaveSafeMode();
+}
+  }
+  initializeReplQueuesIfNecessary();
+  break;
+case EXTENSION:
+  if (areThresholdsMet() && monotonicNow() - reached >= extension) {
+// EXTENSION -> OFF
+leaveSafeMode();
+  }
+  break;
{code}

{{initializeReplQueuesIfNecessary()}} should be called only once.

{code}
+  blockManager.leaveSafeMode();
+  safeModeStatus = SafeModeStatus.OFF;
+  startSecretManagerIfNecessary();
{code}

{{safeModeStatus = SafeModeStatus.OFF}} should be moved to 
{{BlockManagerSafeMode}}.

{code}
+  private class SafeModeMonitor implements Runnable {
+/** interval in msec for checking safe mode: {@value}. */
+private static final long RECHECK_INTERVAL = 1000;
+
+@Override
+public void run() {
+  while (namesystem.isRunning()) {
+try {
+  namesystem.writeLock();
+  if (status == BMSafeModeStatus.OFF) {
+// We're not in start up safe mode any more while monitoring.
+// Chances are we left safe mode manually before thresholds and
+// extension are reached.
+break;
+  }
+  checkSafeMode();
+  if (!isInSafeMode()) {
+smmthread = null;
+break;
+  }
+} finally {
+  namesystem.writeUnlock();
+}
+
+try {
+  Thread.sleep(RECHECK_INTERVAL);
+} catch (InterruptedException ignored) {
+}
+  }
+
+  if (!namesystem.isRunning()) {
+LOG.info("NameNode is being shutdown, exit SafeModeMonitor thread");
+  }
+}
+  }
{code}
A cleaner approach is to put the reached timestamp into the constructor of 
{{SafeModeMonitor()}}.

It might be good to have additional unit tests for BlockManagerSafeMode.



was (Author: wheat9):
Thanks for the work, [~liuml07]

{code}
+if (status == BMSafeModeStatus.OFF) {
+  return;
+}

+if (status == BMSafeModeStatus.OFF) {
+  return;
+}
{code}

There are multiple cases like the above. They should be asserts.

{code}
+  synchronized boolean isSafeModeTrackingBlocks() {
+return namesystem.isHaEnabled() && shouldIncrementallyTrackBlocks;
+  }
{code}

The code can be simplified. {{namesystem.isHaEnabled()}} does not change in the 
lifecycle of the process.

{code}
+  /**
+   * The transition trigger of the safe mode state machine.
+   *
+   * The state machine elaborates itself in code. Specially the start status is
+   * always INITIALIZED and the end status is OFF. There is no transition to
+   * INITIALIZED and no transition from OFF.
+   */
{code}

It's better to document the conditions of state transition.

{code}
+needExtension = extension > 0 &&
+(blockThreshold > 0 || datanodeThreshold > 0);
{code}

This can be moved under the THRESHOLD statement and become a local variable.

{code}
+  if (areThresholdsMet()) {
+if (needExtension) {
+  // THRESHOLD -> EXTENSION
+  status = BMSafeModeStatus.EXTENSION;
+  reached = monotonicNow();
+  NameNode.stateChangeLog.info("STATE* Safe mode extension entered. \n"
+  + getSafeModeTip());
+   

[jira] [Updated] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9274:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Good catch Yi! +1 on the patch. I just committed to both trunk and branch-2.

Also thanks Daniel for the review.

> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9274.001.patch
>
>
> Always see the following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000 and consistent with 
> DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-10-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967948#comment-14967948
 ] 

Chris Nauroth commented on HDFS-9263:
-

Hi [~ste...@apache.org].

I'm a little unclear on what this patch is trying to solve.  The various test 
working dir properties are all built up from a base of 
{{project.build.directory}} in hadoop-project/pom.xml.  I'd expect that to 
avoid dirtying the workspace by writing files outside of {{target}}.  It's true 
that there are too many redundant properties like Andrew pointed out, but I 
don't think consolidating those was the original intent of this issue.  Is 
there a specific test that you see writing outside of {{target}}?

Or is the issue specifically that this causes a problem only when running 
outside the Maven lifecycle?

bq. many buildups of test dirs now use something random, rather than a 
hard-coded path like "dfs". This includes minidfs cluster...which should 
improve parallelism on test runs.

I don't think we'll be able to do this.  As you've seen, there are various 
tests that need to do a NameNode restart without reformatting so that we can 
verify things like correct metadata reloaded from fsimage on startup.  If this 
is something needed to support runs outside of Maven, then would it be possible 
> to statically initialize a random directory once at startup and then reuse it 
for the whole suite?  Maybe that would help.

When running through Maven, the build will generate N different unique 
directories, where N = number of concurrent test processes.  Then, each 
concurrent test process really operates in its own unique testing directory.  
That has been sufficient for isolation in the Maven runs.  (Again, maybe you're 
trying to improve support for running outside of Maven?)
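
Something like the following is what I mean by a per-JVM random base directory (a sketch only; the class and property names are placeholders, not a concrete proposal):
{code}
// Sketch: pick one randomized base dir per test JVM, rooted under target/ when
// run from Maven (via test.build.data), and reuse it for every MiniDFSCluster
// in that process so restart-without-reformat tests keep working.
public final class TestDirs {
  private static final File BASE = new File(
      System.getProperty("test.build.data", "target/test/data"),
      "run-" + java.util.UUID.randomUUID());

  private TestDirs() {}

  public static synchronized File getBase() {
    if (!BASE.exists() && !BASE.mkdirs()) {
      throw new IllegalStateException("cannot create " + BASE);
    }
    return BASE;
  }
}
{code}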

> tests are using /test/build/data; breaking Jenkins
> --
>
> Key: HDFS-9263
> URL: https://issues.apache.org/jira/browse/HDFS-9263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-9263-001.patch
>
>
> Some of the HDFS tests are using the path {{test/build/data}} to store files, 
> so leaking files which fail the new post-build RAT test checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> fix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-10-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967960#comment-14967960
 ] 

Chris Nauroth commented on HDFS-9263:
-

I just noticed the Jenkins run on HADOOP-11880 shows the release audit warning 
problems.  I guess we'd have to backtrace from those files to figure out which 
tests are naughty.  (If you've already done so, please let me know.)

> tests are using /test/build/data; breaking Jenkins
> --
>
> Key: HDFS-9263
> URL: https://issues.apache.org/jira/browse/HDFS-9263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-9263-001.patch
>
>
> Some of the HDFS tests are using the path {{test/build/data}} to store files, 
> so leaking files which fail the new post-build RAT test checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> fix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9267) TestDiskError should get stored replicas through FsDatasetTestUtils.

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967981#comment-14967981
 ] 

Hadoop QA commented on HDFS-9267:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   8m 41s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 32s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m 36s | Tests failed in hadoop-hdfs. |
| | |  79m 16s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
 |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767845/HDFS-9267.02.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / d759b4b |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13112/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13112/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13112/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13112/console |


This message was automatically generated.

> TestDiskError should get stored replicas through FsDatasetTestUtils.
> 
>
> Key: HDFS-9267
> URL: https://issues.apache.org/jira/browse/HDFS-9267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-9267.00.patch, HDFS-9267.01.patch, 
> HDFS-9267.02.patch
>
>
> {{TestDiskError#testReplicationError}} scans local directories to verify 
> blocks and metadata files, which leaks the details of {{FsDataset}} 
> implementation. 
> This JIRA will abstract the "scanning" operation to {{FsDatasetTestUtils}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9098) Erasure coding: emulate race conditions among striped streamers in write pipeline

2015-10-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967978#comment-14967978
 ] 

Jing Zhao commented on HDFS-9098:
-

Thanks for working on this, Zhe! The proposed approach looks pretty good to me. 
Some early comments:
# As you mentioned, we need both sync points and a fault injector. Thus the 
following setting-error-state code can be replaced by a fault injector.
{code}
if (DFSClientSyncPointInjector.getInstance().
    syncPoint(SyncPointType.DN_FAILURE)) {
  getErrorState().setInternalError();
  getErrorState().markFirstNodeIfNotMarked();
}
{code}
# We can also consider mapping sync events to different fault injectors. In the 
above example, the DN failure can be an instant failure or a timeout failure. 
Then maybe we can let streamer#1 hit an instant failure while streamer#2 hits a 
timeout (see the sketch after this list).
# {{writeChunk}} is a complicated process, so we can consider adding multiple 
finer-grained sync points for it.
# It would be helpful if we could define some simple scripts to express the test 
cases; eventually the scripts could also be generated automatically. But this can 
be done separately.
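
To make the shape of this concrete, here is a rough, self-contained sketch of the 
kind of injector hook I have in mind; {{StripedFaultInjector}}, 
{{beforeLocateFollowingBlock}} and the streamer-index parameter are hypothetical 
names, not from the attached patch:
{code}
import java.io.IOException;

// Hypothetical injector: production code calls the hook (a no-op by default);
// a test swaps in a subclass to force a failure at a chosen sync point.
public class StripedFaultInjector {
  private static volatile StripedFaultInjector instance = new StripedFaultInjector();

  public static StripedFaultInjector get() {
    return instance;
  }

  // Visible for tests: replace the default no-op instance.
  public static void set(StripedFaultInjector injector) {
    instance = injector;
  }

  // Hook called by a streamer before it locates the next block; no-op by default.
  public void beforeLocateFollowingBlock(int streamerIndex) throws IOException {
  }

  // Example test wiring: streamer #1 hits an instant failure while streamer #2
  // is delayed long enough to look like a timeout.
  public static void installExampleFaults() {
    set(new StripedFaultInjector() {
      @Override
      public void beforeLocateFollowingBlock(int streamerIndex) throws IOException {
        if (streamerIndex == 1) {
          throw new IOException("injected instant failure");
        }
        if (streamerIndex == 2) {
          try {
            Thread.sleep(5000); // emulate a slow DN that eventually times out
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      }
    });
  }
}
{code}
The actual split between sync points and injectors can of course look quite 
different; the point is just that each streamer index can be mapped to its own 
failure mode.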

> Erasure coding: emulate race conditions among striped streamers in write 
> pipeline
> -
>
> Key: HDFS-9098
> URL: https://issues.apache.org/jira/browse/HDFS-9098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9098.wip.patch
>
>
> Apparently the interleaving of events among {{StripedDataStreamer}}'s is very 
> tricky to handle. [~walter.k.su] and [~jingzhao] have discussed several race 
> conditions under HDFS-9040.
> Let's use FaultInjector to emulate different combinations of interleaved 
> events.
> In particular, we should consider inject delays in the following places:
> # {{Streamer#endBlock}}
> # {{Streamer#locateFollowingBlock}}
> # {{Streamer#updateBlockForPipeline}}
> # {{Streamer#updatePipeline}}
> # {{OutputStream#writeChunk}}
> # {{OutputStream#close}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967984#comment-14967984
 ] 

Hudson commented on HDFS-9274:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1300/])
HDFS-9274. Default value of (zhz: rev 59ce780d53fda1d2823e5da23d03ca68b15a4bf9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9274.001.patch
>
>
> Always see following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000 and consistent with 
> DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967991#comment-14967991
 ] 

Hudson commented on HDFS-9274:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8681 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8681/])
HDFS-9274. Default value of (zhz: rev 59ce780d53fda1d2823e5da23d03ca68b15a4bf9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9274.001.patch
>
>
> Always see following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000 and consistent with 
> DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967993#comment-14967993
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2459 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2459/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968008#comment-14968008
 ] 

Hadoop QA commented on HDFS-9241:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 30s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 32s | The applied patch generated  
108 new checkstyle issues (total was 105, now 212). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |   0m 21s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | |  53m  3s | |
\\
\\
|| Reason || Tests ||
| Failed build | hadoop-hdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767860/HDFS-9241.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 25f8f80 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13114/console |


This message was automatically generated.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch, 
> HDFS-9241.002.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968004#comment-14968004
 ] 

Hudson commented on HDFS-8647:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #522 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/522/])
HDFS-8647. Abstract BlockManager's rack policy into (mingma: rev 
e27c2ae8bafc94f18eb38f5d839dcef5652d424e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopologyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithUpgradeDomain.java


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

