[jira] [Updated] (HDFS-11636) Ozone: TestContainerPlacement fails because of string mismatch in expected Message

2017-04-11 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11636:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

HDFS-11062 fixes this issue. Resolving this one as a duplicate.

> Ozone: TestContainerPlacement fails because of string mismatch in expected 
> Message
> --
>
> Key: HDFS-11636
> URL: https://issues.apache.org/jira/browse/HDFS-11636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11636-HDFS-7240.001.patch
>
>
> TestContainerPlacement fails because of the error below. This happens because
> the error message for container allocation was changed in HDFS-11620, so the
> expected error message in the test needs to be updated to match.
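> A minimal sketch of such a test fix (hypothetical code; {{thrown}} is assumed
> to be the test's JUnit {{ExpectedException}} rule and startsWith the Hamcrest
> matcher), shown before the original failure output:
> {code}
> // Hypothetical sketch: anchor the expectation on the stable prefix of
> // the new message, so the byte and node counts appended by HDFS-11620
> // no longer matter.
> thrown.expect(IOException.class);
> thrown.expectMessage(startsWith(
>     "Unable to find enough nodes that meet the space requirement"));
> {code}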
> {code}
> Expected: (an instance of java.io.IOException and exception with message a 
> string starting with "Unable to find enough nodes that meet the space 
> requirement in healthy node set.")
>  but: exception with message a string starting with "Unable to find 
> enough nodes that meet the space requirement in healthy node set." message 
> was "Unable to find enough nodes that meet the space requirement of 
> 5368709120 bytes in healthy node set. Nodes required: 3 Found: 0"
> Stacktrace was: org.apache.hadoop.ozone.scm.exceptions.SCMException: Unable 
> to find enough nodes that meet the space requirement of 5368709120 bytes in 
> healthy node set. Nodes required: 3 Found: 0
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.chooseDatanodes(SCMCommonPolicy.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMContainerPlacementCapacity.chooseDatanodes(SCMContainerPlacementCapacity.java:95)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:220)
>   at 
> org.apache.hadoop.ozone.scm.node.TestContainerPlacement.testContainerPlacementCapacity(TestContainerPlacement.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at org.junit.Assert.assertThat(Assert.java:832)
>   at 
> org.junit.rules.ExpectedException.handleException(ExpectedException.java:198)
>   at 
> org.junit.rules.ExpectedException.access$500(ExpectedException.java:85)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:177)
>   at 

[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965402#comment-15965402
 ] 

Hudson commented on HDFS-11163:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11574 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11574/])
HDFS-11163. Mover should move the file blocks to default storage once 
(cnauroth: rev 23b1a7bdf1b546c1e29d7010cf139b6d700461fc)
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsServerDefaults.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestMover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, 
> temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once the
> policy is unset, blocks should move back to the default storage policy.
> Currently the Mover skips file blocks whose storage policy ID is unspecified (zero):
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}
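> A sketch of one possible direction (illustrative only; {{defaultPolicy}} is an
> assumed reference to the cluster's default storage policy, and this is not the
> committed patch):
> {code}
> // Instead of skipping files with an unspecified policy, fall back to
> // the default policy so their blocks migrate back to default storage.
> byte policyId = status.getStoragePolicy();
> if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
>   policyId = defaultPolicy.getId(); // assumed: cluster default policy
> }
> {code}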






[jira] [Updated] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-11163:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.1
   3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

+1 for the latest patches. I have committed this to trunk, branch-2, and
branch-2.8. [~surendrasingh], thank you for the contribution.

> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, 
> temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once the
> policy is unset, blocks should move back to the default storage policy.
> Currently the Mover skips file blocks whose storage policy ID is unspecified (zero):
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}






[jira] [Commented] (HDFS-10630) Federation State Store FS Implementation

2017-04-11 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965369#comment-15965369
 ] 

Chris Douglas commented on HDFS-10630:
--

* Are the File impls primarily for testing? Or do routers communicate through 
the FS?
* Should {{StateStoreConnectionMonitorService#serviceInit}} call 
{{super.serviceInit(conf)}}? (See the sketch after this list.)
* {{PeriodicService}} probably shouldn't accept calls to its set\* methods 
after it's started
* start/stop of {{PeriodicService}} should probably be synchronized
* Why is the {{StateStoreService}} a singleton? It doesn't seem to follow, or 
benefit from, the {{Service}} pattern.
* The {{StateStoreUtils}} serialization methods using reflection are very 
general...
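A generic sketch of the {{AbstractService}} pattern that the {{serviceInit}} 
question refers to (the surrounding class is omitted; this is illustrative, not 
the actual patch):

{code}
// Illustrative sketch: Hadoop service subclasses are expected to
// delegate to AbstractService so the parent sees the configuration.
@Override
protected void serviceInit(Configuration conf) throws Exception {
  // ... monitor-specific initialization here ...
  super.serviceInit(conf); // registers conf so getConfig() works
}
{code}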

> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch, HDFS-10630-HDFS-10467-004.patch, 
> HDFS-10630-HDFS-10467-005.patch
>
>
> Interface to store the federation shared state across Routers.






[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965367#comment-15965367
 ] 

Hudson commented on HDFS-11630:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11573 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11573/])
HDFS-11630. TestThrottledAsyncCheckerTimeout fails intermittently in (arp: rev 
62e4573efbef8c14757223a43bebaf360f029ada)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestDatasetVolumeCheckerTimeout.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestThrottledAsyncCheckerTimeout.java


> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> {{FutureCallback#onFailure}}, as sketched below.
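> A minimal sketch of that wait (hypothetical, using the Hadoop test helper
> {{GenericTestUtils.waitFor}}; the {{failed}} flag is an assumed name):
> {code}
> // Hypothetical sketch: poll until the timeout has fired onFailure,
> // instead of asserting immediately after scheduling the check.
> final AtomicBoolean failed = new AtomicBoolean(false);
> // ... the test's FutureCallback#onFailure sets failed to true ...
> GenericTestUtils.waitFor(() -> failed.get(), 100, 10000);
> {code}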






[jira] [Resolved] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-11630.
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> {{FutureCallback#onFailure}}.






[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965361#comment-15965361
 ] 

Arpit Agarwal commented on HDFS-11630:
--

Sorry, I just realized after committing it that we didn't get a Jenkins run.

Since it's a unit-test-only change, I am not reverting it just to get a Jenkins 
run. I verified that the changed unit tests pass locally.

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> {{FutureCallback#onFailure}}.






[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965360#comment-15965360
 ] 

Arpit Agarwal commented on HDFS-11630:
--

+1 for the v2 patch, thanks [~hanishakoneru]. I have committed it to trunk.

We've seen tests with shorter timeouts (10-20 seconds) often fail on 
under-powered VMs, so it's okay to set the test case timeout a little 
conservatively even if we never expect the test to take that long in practice.
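A rough illustration of such a conservative setting (the value is made up, not 
taken from the patch):

{code}
// Illustrative only: a generous per-test timeout guards against slow,
// under-powered build VMs without slowing down a passing run at all.
@Rule
public Timeout testTimeout = new Timeout(300000); // 5 minutes, in millis
{code}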

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for disk checker timeout to callback the 
> FutureCallBack#onFailure.






[jira] [Commented] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965340#comment-15965340
 ] 

Hadoop QA commented on HDFS-11634:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 183 unchanged - 0 fixed = 185 total (was 183) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862961/HDFS-11634.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74ff1e0c3974 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a91376 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19057/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19057/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19057/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19057/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize BlockIterator when iterating starts in the middle.
> 

[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965334#comment-15965334
 ] 

Hadoop QA commented on HDFS-11384:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 251 unchanged - 0 fixed = 252 total (was 251) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862956/HDFS-11384.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f6f002ce984b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a91376 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19056/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19056/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19056/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19056/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> 

[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965333#comment-15965333
 ] 

Rakesh R commented on HDFS-11338:
-

Thank you [~umamaheswararao] for the reviews and commits.

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch, HDFS-11338-HDFS-10285-02.patch, 
> HDFS-11338-HDFS-10285-03.patch, HDFS-11338-HDFS-10285-04.patch, 
> HDFS-11338-HDFS-10285-05.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop the NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So some tests take longer to finish, and this leads to the timeout failures.
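> One possible direction (a sketch only, assuming the satisfier thread can be
> safely interrupted; this is not the committed patch):
> {code}
> // Sketch: interrupt the thread before joining so NN shutdown does not
> // block for the full 3-second join timeout.
> storagePolicySatisfierThread.interrupt();
> try {
>   storagePolicySatisfierThread.join(500);
> } catch (InterruptedException ie) {
>   Thread.currentThread().interrupt(); // restore the interrupt status
> }
> {code}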






[jira] [Assigned] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-11338:
---

Assignee: Rakesh R  (was: Wei Zhou)

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch, HDFS-11338-HDFS-10285-02.patch, 
> HDFS-11338-HDFS-10285-03.patch, HDFS-11338-HDFS-10285-04.patch, 
> HDFS-11338-HDFS-10285-05.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop the NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So some tests take longer to finish, and this leads to the timeout failures.






[jira] [Updated] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.

2017-04-11 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11634:
---
Attachment: HDFS-11634.003.patch

* Fixed the typos. Thanks.
* Yes, this is the corner case.
In the previous patch, suppose I have 3 storages holding \{3, 3, 1\} blocks 
respectively, and I want to set the iterator to startBlock=2. Then s=2 <= 
numBlocks for the first two storages, but not the third, and {{index}} will 
increment, which is incorrect because startBlock=2 is on storage #0, rather 
than #1. My solution is to base the if condition directly on startBlock, and 
then one should accumulate the block counts across storages, which is what 
sumBlocks does.
Hope this makes sense.
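A small sketch of the intended logic (illustrative only; variable names are 
assumed and this is not the patch itself):

{code}
// Skip whole storages until the accumulated block count reaches
// startBlock; the remainder indexes into the selected storage.
int[] blocksPerStorage = {3, 3, 1}; // the example above
int startBlock = 2;
int sumBlocks = 0;
int storageIndex = 0;
for (int numBlocks : blocksPerStorage) {
  if (startBlock < sumBlocks + numBlocks) {
    break; // startBlock falls inside this storage
  }
  sumBlocks += numBlocks;
  storageIndex++;
}
int offsetInStorage = startBlock - sumBlocks;
// Here storageIndex == 0 and offsetInStorage == 2: block 2 lives on
// storage #0, matching the corner case described above.
{code}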

> Optimize BlockIterator when iterating starts in the middle.
> 
>
> Key: HDFS-11634
> URL: https://issues.apache.org/jira/browse/HDFS-11634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch, 
> HDFS-11634.003.patch
>
>
> {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a 
> randomly selected {{startBlock}} index. It creates an iterator which points 
> to the first block and then skips all blocks until {{startBlock}}. This is 
> inefficient when the DN has multiple storages. Instead of skipping blocks one 
> by one, we can skip entire storages, which should be more efficient on average.






[jira] [Updated] (HDFS-11604) Define and parse erasure code codecs, schemas and policies

2017-04-11 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11604:
-
Status: Patch Available  (was: Open)

> Define and parse erasure code codecs, schemas and policies
> --
>
> Key: HDFS-11604
> URL: https://issues.apache.org/jira/browse/HDFS-11604
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Lin Zeng
> Fix For: 3.0.0-alpha3
>
> Attachments: ec-config-sample.xml, ec-policy-config-sample-v2.xml, 
> HDFS-11664-v1.patch
>
>
> According to recent discussions with [~andrew.wang] in HDFS-7337, it would be 
> good to allow users to define their own erasure code codecs, schemas and 
> policies via an XML file. The XML file can be passed to a CLI command, which 
> parses it and sends it to the NameNode to persist and maintain.
> This task is to define the XML format, provide a default sample file to put 
> in the configuration folder for users' reference, and implement the 
> necessary parser utility.






[jira] [Commented] (HDFS-11643) Balancer fencing fails when writing erasure coded lock file

2017-04-11 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965269#comment-15965269
 ] 

SammiChen commented on HDFS-11643:
--

Hi [~andrew.wang], yes, we need to face the "replicated" EC policy question 
now. Let me first make sure I clearly understand the requirements. To your 
knowledge, would a file create API that forces default replication solve all 
the issues we have met so far, and all foreseeable issues? Or should we also 
provide a way to set a "replicated" EC policy on a directory? If only a create 
API is required, I would suggest not introducing the "replicated" EC policy 
concept; we can add a Boolean parameter to the create function to force the 
file to be a replicated file. If we also need a way to set a directory back to 
replication, rather than having it inherit its parent's EC policy, then a 
"replicated" EC policy is a must.
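To make the Boolean-parameter option concrete, a hypothetical sketch (the extra 
{{forceReplication}} argument does not exist in {{FileSystem}} today; it is 
purely illustrative):

{code}
// Hypothetical sketch only: a create overload that pins the file to
// plain replication, so hflush() keeps working even when the parent
// directory is erasure coded (the balancer lock file case above).
FSDataOutputStream out = fs.create(lockPath, true /* overwrite */,
    true /* forceReplication -- hypothetical flag */);
out.writeBytes(InetAddress.getLocalHost().getHostName());
out.hflush();
{code}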

> Balancer fencing fails when writing erasure coded lock file
> ---
>
> Key: HDFS-11643
> URL: https://issues.apache.org/jira/browse/HDFS-11643
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> At startup, the balancer writes its hostname to the lock file and calls 
> hflush(). hflush is not supported for EC files, so this fails when the entire 
> filesystem is erasure coded.






[jira] [Updated] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-04-11 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11384:
---
Attachment: HDFS-11384.004.patch

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, 
> HDFS-11384.001.patch, HDFS-11384.002.patch, HDFS-11384.003.patch, 
> HDFS-11384.004.patch
>
>
> Running the balancer on a Hadoop cluster which has more than 3000 DataNodes 
> causes the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation can cause HBase cluster failures due to RegionServer WAL timeouts.






[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-04-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965266#comment-15965266
 ] 

Konstantin Shvachko commented on HDFS-11384:


* I am usually very conservative about introducing new configuration 
parameters. Parameters seem to give you flexibility to adjust them, but in many 
cases administrators don't know what to do with that flexibility, because there 
are so many of them. I prefer to have a reasonable constant value initially, and 
add a config variable later if _other_ values are needed in certain cases. In 
the end adding configs is easy, but you can never remove them.
In this particular case BALANCER_NUM_RPC_PER_SEC is chosen so that big 
clusters would distribute the _initial_ RPC requests over 10 secs, and it does 
not affect small clusters at all. I think we are good with the constant set to 
20 for now, but let me know if you see use cases for different values.
* Fixed the typo in the 004 patch. Thanks [~zhz].
* This would be a typical misuse of Preconditions, as we do in many cases in 
the code, and as has been discussed previously on many occasions. It is an 
assert because we assume the condition should never happen. If it does, it's a 
bug, which should be caught during testing with the {{-ea}} option. At runtime 
we want to avoid checking any extra condition, for performance reasons; a 
one-line illustration follows this list.
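A one-line illustration of the assert-vs-Preconditions distinction (a hedged 
example; the variable name is made up):

{code}
// Checked only when the JVM runs with -ea; zero cost in production:
assert delay >= 0 : "negative dispatch delay";
// By contrast, a Preconditions check is evaluated on every call:
// Preconditions.checkArgument(delay >= 0, "negative dispatch delay");
{code}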

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, 
> HDFS-11384.001.patch, HDFS-11384.002.patch, HDFS-11384.003.patch, 
> HDFS-11384.004.patch
>
>
> Running the balancer on a Hadoop cluster which has more than 3000 DataNodes 
> causes the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation can cause HBase cluster failures due to RegionServer WAL timeouts.






[jira] [Commented] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965237#comment-15965237
 ] 

Hadoop QA commented on HDFS-11645:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m  
1s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11645 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862938/HDFS-11645.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e835d3068b10 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a91376 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19055/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19055/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DataXceiver thread should log the actual error when getting 
> InvalidMagicNumberException
> ---
>
> Key: HDFS-11645
> URL: https://issues.apache.org/jira/browse/HDFS-11645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1, 2.8.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>

[jira] [Updated] (HDFS-11640) Datanodes should use a unique identifier when reading from external stores

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Attachment: HDFS-11640-HDFS-9806.001.patch

Attaching a PoC patch that is based on the most recent patches available for 
HDFS-7878 and HDFS-6984. The patch includes the changes proposed in HDFS-7878 
and HDFS-6984 (these will be removed once they are committed). It also involves 
making {{FileStatus}} a member of {{FileRegion}}, adding a protobuf-based 
implementation of {{FileRegion}} (so that it can be serialized and deserialized 
easily), and renaming {{TextFileRegionFormat}} to {{LocalFileRegionFormat}} (as 
it now uses binary files to store the block map).
If HDFS-7878 goes with {{open(InodeId)}} instead of {{open(FileStatus)}}, this 
patch will be modified accordingly.


> Datanodes should use a unique identifier when reading from external stores
> --
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.






[jira] [Commented] (HDFS-11546) Federation Router RPC server

2017-04-11 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965226#comment-15965226
 ] 

Chris Douglas commented on HDFS-11546:
--

Looks good, overall. A few questions:
* Many fields e.g., {{ConnectionManager.Timer}}, 
{{RouterRpcClient.retryPolicy}}, {{RouterRpcClient.executorService}} can be 
final
* {{RemoteLocationContext#equals}} relying on hashcode equality of Strings is 
too weak
* In {{ConnectionPool#getNumActiveConnections}}, instead of catching 
AIOOBException, use a lock, COWArrayList, or another threadsafe collection 
(see the sketch after this list).
* The threading in {{ConnectionPool}} generally seems wonky. It avoids locks, 
but catches a lot of exceptions. Instead of synchronizing on a special {{Object 
lock}}, most of this could synchronize on the {{connections}} field or use 
threadsafe collections.
* {{ConnectionPool}} seems to distribute requests to connections round-robin, 
adding new connections (up to some limit) if it wraps while a connection is 
still "busy". Is that right?
* {{ConnectionPool#cleanup}} should remove from the end of an ArrayList to 
avoid copies, not the front (and be correctly synchronized)
* If the client in the {{ConnectionPool}} is configured to retry, and the 
proxied client retries, that may go to a different connection in the pool (or a 
different router), right? Should the proxy never retry to avoid repeating past 
operations, or does some other mechanism prevent this?
* {{ConnectionPool#close}} doesn't seem to do any cleanup work (interrupting 
threads, etc.). This is a "soft" shutdown?
* {{ConnectionManager}} uses {{Timer/TimerTask}}, which are sort-of deprecated 
in favor of {{ScheduledThreadPoolExecutor}} after 1.5
* The {{ConnectionManager}} creates keys for each RPC proxy by creating a 
String of the UGI and hashcode for each token. The chance of collision seems 
remote, but unnecessarily non-zero. Instead of a flat {{String -> 
ConnectionPool}} map, could this maintain a key of user/NN? Is there a 
particular reason to include tokens as part of the key?
* Do {{RouterRpcClient#invokeSequential}} and 
{{RouterRpcClient#invokeConcurrent}} implement functionality similar to 
HADOOP-12077? There should probably be a documentation JIRA to describe common 
patterns, limitations, and deployment. In particular, the subset of 
{{ClientProtocol}} implemented by {{RouterRpcServer}} should be documented.
* I didn't review the details of {{RouterRpcServer}} since most of it seems to 
be wrapping the client proxy, but there are some TODO items there. Those don't 
need to block commit to the branch, but they should probably be documented or 
addressed before merge.
* The defaults in {{hdfs-site.xml}} look conservative; the description could 
include some cursory guidance on setting them.
* Thanks for adding the integration tests
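A minimal sketch of the {{getNumActiveConnections}} suggestion (the 
{{ConnectionContext}} type and {{isActive()}} method are assumed names, not 
the actual patch):

{code}
// Sketch: CopyOnWriteArrayList gives iterators a consistent snapshot,
// so counting active connections needs neither locks nor
// ArrayIndexOutOfBoundsException handling.
private final List<ConnectionContext> connections =
    new CopyOnWriteArrayList<>();

int getNumActiveConnections() {
  int active = 0;
  for (ConnectionContext conn : connections) { // iterates a snapshot
    if (conn.isActive()) {
      active++;
    }
  }
  return active;
}
{code}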

> Federation Router RPC server
> 
>
> Key: HDFS-11546
> URL: https://issues.apache.org/jira/browse/HDFS-11546
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: HDFS-10467
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS-11546-HDFS-10467-000.patch, 
> HDFS-11546-HDFS-10467-001.patch, HDFS-11546-HDFS-10467-002.patch, 
> HDFS-11546-HDFS-10467-003.patch, HDFS-11546-HDFS-10467-004.patch
>
>
> RPC server side of the Federation Router implements ClientProtocol.






[jira] [Commented] (HDFS-10531) Add EC policy and storage policy related usage summarization function to dfs du command

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965223#comment-15965223
 ] 

Andrew Wang commented on HDFS-10531:


Hi Sammi, thanks for the comment,

bq. From the user's point of view, putting the function in "ls" is better than 
putting it in the "ec" command, because "ls" already has a column to show the 
file replication factor. EC is one file replication scheme, so it's natural to 
show the file's EC policy here. However, it will make the "ec -getPolicy" 
sub-function a little bit redundant.

Since {{ls}} output is probably very commonly parsed by end users, we should be 
careful about changing it. IMO we should add a new flag to also display the EC 
policy.

bq. Cluster-wide stats are helpful. And if we consider a multi-tenant cluster 
environment, per-directory stats will also be helpful. So having an EC policy 
summary in the "du" command can help users.

I liked "count" better since "du" is expected to behave like the Unix "du" 
command. It's also likely that there are users parsing "du" output, whereas 
"count" is something HDFS-specific that we can more easily extend.

bq. As for this JIRA, since an EC file is no different from a 3-way replicated 
file from the quota point of view, it's not clear what users gain from knowing 
how much quota is used by each type of EC policy. So I would not recommend 
adding "EC" information to the "hdfs dfs -count" command.

"count -q" is specific to quotas, since we don't have quotas for EC, I agree 
that it doesn't make sense to add this to "-q", but we could add a new flag to 
display EC usage.

> Add EC policy and storage policy related usage summarization function to dfs 
> du command
> ---
>
> Key: HDFS-10531
> URL: https://issues.apache.org/jira/browse/HDFS-10531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Rui Gao
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-10531.001.patch
>
>
> Currently du command output:
> {code}
> [ ~]$ hdfs dfs -du  -h /home/rgao/
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> 100 M  /home/rgao/ds
> 250 M  /home/rgao/ds-2
> 200 M  /home/rgao/noECBackup-ds
> 500 M  /home/rgao/noECBackup-ds-2
> {code}
> For HDFS users and administrators, EC policy and storage policy related usage 
> summarization would be very helpful when managing the cluster's storage. The 
> intended output of du could look like the following.
> {code}
> [ ~]$ hdfs dfs -du  -h -t( total, parameter to be added) /home/rgao
>  
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> [Archive] [EC:RS-DEFAULT-6-3-64k] 100 M  /home/rgao/ds
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M  /home/rgao/ds-2
> [DISK] [Replica] 200 M  /home/rgao/noECBackup-ds
> [DISK] [Replica] 500 M  /home/rgao/noECBackup-ds-2
>  
> Total:
>  
> [Archive][EC:RS-DEFAULT-6-3-64k]  100 M
> [Archive][Replica]0 M
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M
> [DISK] [Replica]   700 M  
>  
> [Archive][ALL] 100M
> [DISK][ALL]  950M
> [ALL] [EC:RS-DEFAULT-6-3-64k]350M
> [ALL] [Replica]  700M
> {code} 






[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965215#comment-15965215
 ] 

Hadoop QA commented on HDFS-11163:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 34m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
24s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
53s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 20s{color} | {color:orange} root: The patch generated 2 new + 340 unchanged 
- 1 fixed = 342 total (was 341) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
34s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
36s{color} | {color:green} The patch does not generate 

[jira] [Resolved] (HDFS-11552) Erasure Coding: Support Parity Blocks placement onto same nodes hosting Data Blocks when DataNodes are insufficient

2017-04-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-11552.

Resolution: Not A Problem

> Erasure Coding: Support Parity Blocks placement onto same nodes hosting Data 
> Blocks when DataNodes are insufficient
> ---
>
> Key: HDFS-11552
> URL: https://issues.apache.org/jira/browse/HDFS-11552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
>
> Currently, {{DFSStripedOutputStream}} verifies that the allocated block 
> locations are at least numDataBlocks in length. That is, for the EC Policy 
> RS-6-3-64K, though the total number of DNs needed for a full EC Block Group 
> is 9, clients will be able to successfully create a DFSStripedOutputStream 
> with just 6 DNs. Moreover, the output stream thus created with fewer DNs 
> will skip writing Parity Blocks entirely.
> {code}
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=6
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=7
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=8
> {code}
> So, upon closing the file stream, we get the following warning message 
> (though not accurate) when the parity blocks have not been written out.
> {code}
> INFO  namenode.FSNamesystem (FSNamesystem.java:checkBlocksComplete(2726)) - 
> BLOCK* blk_-9223372036854775792_1002 is COMMITTED but not COMPLETE(numNodes= 
> 0 <  minimum = 6) in file /ec/test1
> INFO  hdfs.StateChange (FSNamesystem.java:completeFile(2679)) - DIR* 
> completeFile: /ec/test1 is closed by DFSClient_NONMAPREDUCE_-1900076771_17
> WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:logCorruptBlocks(1117)) - Block group <1> has 3 
> corrupt blocks. It's at high risk of losing data.
> {code}
> I am not sure if there are any practical limitations in placing more blocks 
> of a Block Group onto the same node. At least, we can allow parity blocks 
> to co-exist with data blocks whenever there are insufficient DNs in the 
> cluster. Later, upon addition of more DataNodes, the Block Placement Policy 
> can detect the improper placement for such BlockGroups and can trigger EC 
> reconstruction. 
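A minimal sketch, with hypothetical names, of the allocation check described 
above (paraphrasing DFSStripedOutputStream, not the exact source): the stream 
only insists on numDataBlocks locations, so the parity writers never start.
{code}
// Paraphrased allocation check: with RS-6-3 the stream is satisfied with
// 6 locations even though a full block group needs numAllBlocks = 9.
if (blocks.length < numDataBlocks) {
  throw new IOException("Failed to get " + numDataBlocks
      + " block locations from the namenode; got only " + blocks.length);
}
// Indices numDataBlocks .. numAllBlocks-1 (the parity blocks) may be null
// and are skipped, producing the "Failed to get block location for parity
// block" warnings quoted above.
{code}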



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11552) Erasure Coding: Support Parity Blocks placement onto same nodes hosting Data Blocks when DataNodes are insufficient

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965214#comment-15965214
 ] 

Andrew Wang commented on HDFS-11552:


I think with the XOR-2-1 policy this also becomes less important, since we have 
an appropriate policy for testing on a small cluster.

Let's resolve this one and revisit later if necessary. Thanks Manoj and 
Takanobu for the discussion!

> Erasure Coding: Support Parity Blocks placement onto same nodes hosting Data 
> Blocks when DataNodes are insufficient
> ---
>
> Key: HDFS-11552
> URL: https://issues.apache.org/jira/browse/HDFS-11552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
>
> Currently, {{DFSStripedOutputStream}} verifies that the allocated block 
> locations are at least numDataBlocks in length. That is, for the EC Policy 
> RS-6-3-64K, though the total number of DNs needed for a full EC Block Group 
> is 9, clients will be able to successfully create a DFSStripedOutputStream 
> with just 6 DNs. Moreover, the output stream thus created with fewer DNs 
> will skip writing Parity Blocks entirely.
> {code}
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=6
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=7
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=8
> {code}
> So, upon closing the file stream, we get the following warning message 
> (though not accurate) when the parity blocks have not been written out.
> {code}
> INFO  namenode.FSNamesystem (FSNamesystem.java:checkBlocksComplete(2726)) - 
> BLOCK* blk_-9223372036854775792_1002 is COMMITTED but not COMPLETE(numNodes= 
> 0 <  minimum = 6) in file /ec/test1
> INFO  hdfs.StateChange (FSNamesystem.java:completeFile(2679)) - DIR* 
> completeFile: /ec/test1 is closed by DFSClient_NONMAPREDUCE_-1900076771_17
> WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:logCorruptBlocks(1117)) - Block group <1> has 3 
> corrupt blocks. It's at high risk of losing data.
> {code}
> I am not sure if there are any practical limitations in placing more blocks 
> of a Block Group onto the same node. At least, we can allow parity blocks 
> to co-exist with data blocks whenever there are insufficient DNs in the 
> cluster. Later, upon addition of more DataNodes, the Block Placement Policy 
> can detect the improper placement for such BlockGroups and can trigger EC 
> reconstruction. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11636) Ozone: TestContainerPlacement fails because of string mismatch in expected Message

2017-04-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965212#comment-15965212
 ] 

Anu Engineer commented on HDFS-11636:
-

There is a merge conflict with the top of the tree; could you please rebase 
this patch if it is still valid?


> Ozone: TestContainerPlacement fails because of string mismatch in expected 
> Message
> --
>
> Key: HDFS-11636
> URL: https://issues.apache.org/jira/browse/HDFS-11636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11636-HDFS-7240.001.patch
>
>
> TestContainerPlacement fails because of the following error.
> This happens because the error message in container Allocation was changed in 
> HDFS-11620. Expected error message in the test needs to be rephrased to solve 
> this issue.
> {code}
> Expected: (an instance of java.io.IOException and exception with message a 
> string starting with "Unable to find enough nodes that meet the space 
> requirement in healthy node set.")
>  but: exception with message a string starting with "Unable to find 
> enough nodes that meet the space requirement in healthy node set." message 
> was "Unable to find enough nodes that meet the space requirement of 
> 5368709120 bytes in healthy node set. Nodes required: 3 Found: 0"
> Stacktrace was: org.apache.hadoop.ozone.scm.exceptions.SCMException: Unable 
> to find enough nodes that meet the space requirement of 5368709120 bytes in 
> healthy node set. Nodes required: 3 Found: 0
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.chooseDatanodes(SCMCommonPolicy.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMContainerPlacementCapacity.chooseDatanodes(SCMContainerPlacementCapacity.java:95)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:220)
>   at 
> org.apache.hadoop.ozone.scm.node.TestContainerPlacement.testContainerPlacementCapacity(TestContainerPlacement.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at org.junit.Assert.assertThat(Assert.java:832)
>   at 
> org.junit.rules.ExpectedException.handleException(ExpectedException.java:198)
>   at 
> org.junit.rules.ExpectedException.access$500(ExpectedException.java:85)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:177)

[jira] [Commented] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965194#comment-15965194
 ] 

Hadoop QA commented on HDFS-11641:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 283 unchanged - 2 fixed = 285 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m  
8s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11641 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862929/HDFS-11641.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 28bca6c775f7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a91376 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19052/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19052/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19052/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> 

[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965193#comment-15965193
 ] 

Hadoop QA commented on HDFS-11565:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
99 unchanged - 0 fixed = 100 total (was 99) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862930/HDFS-11565.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 9911dc0d171d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a91376 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19053/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-04-11 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965185#comment-15965185
 ] 

Inigo Goiri commented on HDFS-11576:


I would log the events the other way around:
* Info: when a block has timed out and we issue a new recovery request.
* Debug: when a block is still within the timeout window.

BTW, with slf4j you could use parameterized message formatting in the logs.
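For instance, a minimal sketch with hypothetical names (LOG, block, and 
elapsedMs are assumptions, not the patch's actual variables):
{code}
// slf4j parameterized messages defer formatting until the level is enabled,
// unlike string concatenation, which always builds the message.
LOG.info("Block {} timed out after {} ms; issuing a new recovery request",
    block, elapsedMs);
LOG.debug("Block {} is still within the timeout window", block);
{code}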
Other than that, it looks good.

Anybody available to review this patch?

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSyncronization after succeeding with first recovery to 
> NN, which fails because X < X+1
> ... 
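A minimal sketch of why step 6 fails, with hypothetical names (the actual 
NameNode code differs): the commit carries recovery ID X, which is stale once 
X+1 has been issued.
{code}
// NameNode side, paraphrased: a commit whose recovery ID is older than the
// most recently issued one is rejected.  If every recovery takes longer than
// the heartbeat interval, a fresh ID is always issued first, so no recovery
// can ever commit.
if (recoveryId < lastIssuedRecoveryId) {   // X < X+1 in the scenario above
  throw new IOException("Recovery id " + recoveryId
      + " is older than the current recovery id " + lastIssuedRecoveryId);
}
{code}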



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-04-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11551:
-
Fix Version/s: 2.9.0

+1 for the v2 branch-2 patch. I've committed it. Thanks [~hanishakoneru].

> Handle SlowDiskReport from DataNode at the NameNode
> ---
>
> Key: HDFS-11551
> URL: https://issues.apache.org/jira/browse/HDFS-11551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11551.001.patch, HDFS-11551.002.patch, 
> HDFS-11551.003.patch, HDFS-11551.004.patch, HDFS-11551.005.patch, 
> HDFS-11551.006.patch, HDFS-11551.007.patch, HDFS-11551.008.patch, 
> HDFS-11551.009.patch, HDFS-11551.010.patch, HDFS-11551-branch-2.001.patch, 
> HDFS-11551-branch-2.002.patch
>
>
> DataNodes send slow disk reports via heartbeats. Handle these reports at the 
> NameNode to find the topN slow disks.
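A minimal sketch, with hypothetical names (DiskLatency and N are 
illustrations), of one way to track the topN slow disks from per-heartbeat 
reports using a bounded min-heap keyed on latency:
{code}
import java.util.Comparator;
import java.util.PriorityQueue;

// Min-heap ordered by latency: the fastest of the current top N sits on top
// and is evicted first, leaving only the N slowest disks seen so far.
PriorityQueue<DiskLatency> topN =
    new PriorityQueue<>(Comparator.comparingDouble(d -> d.latencyMs));

void onSlowDiskReport(DiskLatency report) {
  topN.offer(report);
  if (topN.size() > N) {
    topN.poll();   // drop the fastest disk in the heap
  }
}
{code}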



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-11 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11630:
--
Attachment: HDFS-11630.002.patch

[~arpitagarwal], thank you for the suggestions.
- Used the Mockito timeout feature to address the missing loop-exit condition 
for the mock FutureCallBack object.
- Set test timeouts to 300 seconds.

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for disk checker timeout to callback the 
> FutureCallBack#onFailure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9342) Erasure coding: client should update and commit block based on acknowledged size

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965164#comment-15965164
 ] 

Andrew Wang commented on HDFS-9342:
---

LGTM, thanks Sammi for the refinements; just some minor comments:

* Please add a comment to the new logic in updatePipeline explaining why we do 
this. IIUC it's because we should only report the acked length to the NN.
* Recommend using Preconditions rather than asserts, since asserts are not 
enabled outside of tests (see the sketch below)
* Grammar: "lest" -> "least", "are got acked" -> "were acked"
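A minimal sketch of the Preconditions suggestion (hypothetical names; Guava's 
Preconditions is already on Hadoop's classpath):
{code}
import com.google.common.base.Preconditions;

// assert is a no-op unless the JVM runs with -ea, so this check vanishes
// in production:
assert ackedBytes <= currentBlockGroup.getNumBytes();

// Preconditions.checkState always runs and fails fast with a clear message:
Preconditions.checkState(ackedBytes <= currentBlockGroup.getNumBytes(),
    "acked length %s exceeds block group length %s",
    ackedBytes, currentBlockGroup.getNumBytes());
{code}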

[~zhz] could you confirm that this looks good to you too? Would like your 
review before commit.

> Erasure coding: client should update and commit block based on acknowledged 
> size
> 
>
> Key: HDFS-9342
> URL: https://issues.apache.org/jira/browse/HDFS-9342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Zhe Zhang
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-9342.01.patch, HDFS-9342.02.patch, 
> HDFS-9342.03.patch, HDFS-9342.04.patch
>
>
> For non-EC files, we have:
> {code}
> protected ExtendedBlock block; // its length is number of bytes acked
> {code}
> For EC files, the size of {{DFSStripedOutputStream#currentBlockGroup}} is 
> incremented in {{writeChunk}} without waiting for ack. And both 
> {{updatePipeline}} and {{commitBlock}} are based on size of 
> {{currentBlockGroup}}.
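A minimal sketch of the distinction, with hypothetical names (bytesAcked and 
bytesWritten are illustrations, not the patch's fields):
{code}
// bytesWritten advances in writeChunk() as soon as data is buffered;
// bytesAcked advances only after datanode ACKs arrive.  Updating the
// pipeline or committing the block with bytesWritten can over-report
// the block size to the NameNode.
long commitLength = bytesAcked;           // not bytesWritten
commitBlock(currentBlockGroup, commitLength);
{code}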



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException

2017-04-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11645:
--
Status: Patch Available  (was: Open)

> DataXceiver thread should log the actual error when getting 
> InvalidMagicNumberException
> ---
>
> Key: HDFS-11645
> URL: https://issues.apache.org/jira/browse/HDFS-11645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1, 2.8.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11645.001.patch
>
>
> Currently, {{DataXceiver#run}} method only logs an error message when getting 
> an {{InvalidMagicNumberException}}. It should also log the actual exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException

2017-04-11 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965149#comment-15965149
 ] 

Chen Liang commented on HDFS-11645:
---

Posted the v001 patch with the exception logged.

> DataXceiver thread should log the actual error when getting 
> InvalidMagicNumberException
> ---
>
> Key: HDFS-11645
> URL: https://issues.apache.org/jira/browse/HDFS-11645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1, 2.8.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11645.001.patch
>
>
> Currently, {{DataXceiver#run}} method only logs an error message when getting 
> an {{InvalidMagicNumberException}}. It should also log the actual exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException

2017-04-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11645:
--
Attachment: HDFS-11645.001.patch

> DataXceiver thread should log the actual error when getting 
> InvalidMagicNumberException
> ---
>
> Key: HDFS-11645
> URL: https://issues.apache.org/jira/browse/HDFS-11645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1, 2.8.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11645.001.patch
>
>
> Currently, {{DataXceiver#run}} method only logs an error message when getting 
> an {{InvalidMagicNumberException}}. It should also log the actual exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965148#comment-15965148
 ] 

Hadoop QA commented on HDFS-11504:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Redundant nullcheck of 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.currentContainerName, which 
is known to be non-null in 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(long).  
Redundant null check at BlockManagerImpl.java:[line 119] |
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11504 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862925/HDFS-11504-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cc1b3d0737e7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 

[jira] [Created] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException

2017-04-11 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11645:
-

 Summary: DataXceiver thread should log the actual error when 
getting InvalidMagicNumberException
 Key: HDFS-11645
 URL: https://issues.apache.org/jira/browse/HDFS-11645
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0-alpha1, 2.8.1
Reporter: Chen Liang
Assignee: Chen Liang
Priority: Minor


Currently, {{DataXceiver#run}} method only logs an error message when getting 
an {{InvalidMagicNumberException}}. It should also log the actual exception.
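A minimal sketch of the suggested change (a fragment; the real catch block and 
message in DataXceiver#run differ):
{code}
} catch (InvalidMagicNumberException imne) {
  // With slf4j, passing the exception as the last argument preserves the
  // stack trace instead of logging only a fixed message.
  LOG.info("Failed to read the encryption handshake from client at {}",
      peer.getRemoteAddressString(), imne);
}
{code}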



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11644) DFSStripedOutputStream should not implement Syncable

2017-04-11 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11644:
--

 Summary: DFSStripedOutputStream should not implement Syncable
 Key: HDFS-11644
 URL: https://issues.apache.org/jira/browse/HDFS-11644
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
YARN's FileSystemTimelineWriter.

DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
However, DFSStripedOS throws a runtime exception when the Syncable methods are 
called.

We should refactor the inheritance structure so DFSStripedOS does not implement 
Syncable.
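For reference, a minimal sketch of the dispatch described above (paraphrasing 
FSDataOutputStream, not the exact source):
{code}
// FSDataOutputStream#hsync, paraphrased: only Syncable streams really sync.
public void hsync() throws IOException {
  if (wrappedStream instanceof Syncable) {
    ((Syncable) wrappedStream).hsync();  // DFSStripedOutputStream lands here
  } else {                               // today and throws at runtime
    wrappedStream.flush();
  }
}
{code}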



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11636) Ozone: TestContainerPlacement fails because of string mismatch in expected Message

2017-04-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965127#comment-15965127
 ] 

Anu Engineer commented on HDFS-11636:
-

+1, thanks for catching and fixing this. I will commit it shortly.


> Ozone: TestContainerPlacement fails because of string mismatch in expected 
> Message
> --
>
> Key: HDFS-11636
> URL: https://issues.apache.org/jira/browse/HDFS-11636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11636-HDFS-7240.001.patch
>
>
> TestContainerPlacement fails because of the following error.
> This happens because the error message in container Allocation was changed in 
> HDFS-11620. Expected error message in the test needs to be rephrased to solve 
> this issue.
> {code}
> Expected: (an instance of java.io.IOException and exception with message a 
> string starting with "Unable to find enough nodes that meet the space 
> requirement in healthy node set.")
>  but: exception with message a string starting with "Unable to find 
> enough nodes that meet the space requirement in healthy node set." message 
> was "Unable to find enough nodes that meet the space requirement of 
> 5368709120 bytes in healthy node set. Nodes required: 3 Found: 0"
> Stacktrace was: org.apache.hadoop.ozone.scm.exceptions.SCMException: Unable 
> to find enough nodes that meet the space requirement of 5368709120 bytes in 
> healthy node set. Nodes required: 3 Found: 0
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.chooseDatanodes(SCMCommonPolicy.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMContainerPlacementCapacity.chooseDatanodes(SCMContainerPlacementCapacity.java:95)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:220)
>   at 
> org.apache.hadoop.ozone.scm.node.TestContainerPlacement.testContainerPlacementCapacity(TestContainerPlacement.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at org.junit.Assert.assertThat(Assert.java:832)
>   at 
> org.junit.rules.ExpectedException.handleException(ExpectedException.java:198)
>   at 
> org.junit.rules.ExpectedException.access$500(ExpectedException.java:85)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:177)
>   at 

[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-11 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965124#comment-15965124
 ] 

Chen Liang commented on HDFS-11530:
---

The failed tests seem unrelated; they all passed in my local run.

> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
> URL: https://issues.apache.org/jira/browse/HDFS-11530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, 
> HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, 
> HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch
>
>
> The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. 
> But this method is contained in new topology {{DFSNetworkTopology}} which is 
> specified for HDFS. We should update this and let 
> {{BlockPlacementPolicyDefault}} use the new way since the original way is 
> inefficient.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965109#comment-15965109
 ] 

Andrew Wang commented on HDFS-10996:


Hi Sammi, thanks for working on this, sorry for the slow review:

bq. I go through the HdfsAdmin and find there is not a single create function 
exposed there. So I'm not sure If I should add a new create API and will that 
benefit its users? Uploaded v5 patch.

Sure, we can tackle this in a separate issue. We're doing some internal testing 
and have already found a few issues regarding the lack of hflush support for 
EC files (e.g. HDFS-11643, and apparently YARN job history too). These could 
potentially be solved by having these apps specify an explicit "replicated" 
policy, which requires a public API.

Some other code review nits, +1 pending these and Jenkins:

* getErasureCodingPolicyByName, we could assert hasReadLock() instead, since 
this method doesn't do any writes
* Need a rebase since ECPolicies have moved to the new 
SystemErasureCodingPolicies class
* testFileLevelECPolicy: nit: "policy should be found" -> "policy should not be 
found"

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, 
> HDFS-10996-v3.patch, HDFS-10996-v4.patch, HDFS-10996-v5.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when the file is created. This is useful for situations where app 
> requirements do not map nicely to the current directory-level policies.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-04-11 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11551:
--
Attachment: HDFS-11551-branch-2.002.patch

> Handle SlowDiskReport from DataNode at the NameNode
> ---
>
> Key: HDFS-11551
> URL: https://issues.apache.org/jira/browse/HDFS-11551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11551.001.patch, HDFS-11551.002.patch, 
> HDFS-11551.003.patch, HDFS-11551.004.patch, HDFS-11551.005.patch, 
> HDFS-11551.006.patch, HDFS-11551.007.patch, HDFS-11551.008.patch, 
> HDFS-11551.009.patch, HDFS-11551.010.patch, HDFS-11551-branch-2.001.patch, 
> HDFS-11551-branch-2.002.patch
>
>
> DataNodes send slow disk reports via heartbeats. Handle these reports at the 
> NameNode to find the topN slow disks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965089#comment-15965089
 ] 

Xiaobing Zhou commented on HDFS-11558:
--

Posted branch-2 patch. Thanks all.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965097#comment-15965097
 ] 

Hadoop QA commented on HDFS-11530:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862919/HDFS-11530.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0ee2213b5628 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7d873c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19050/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19050/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19050/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 

[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965095#comment-15965095
 ] 

Hudson commented on HDFS-11558:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #11572 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11572/])
HDFS-11558. BPServiceActor thread name is too long. Contributed by (liuml07: 
rev 3a91376707d451777b8269f81bcd48315edd9fc7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockPoolManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java


> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11558:
-
Attachment: HDFS-11558-branch-2.006.patch

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11470) Ozone: SCM: Add SCM CLI

2017-04-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965067#comment-15965067
 ] 

Xiaoyu Yao commented on HDFS-11470:
---

Thanks [~anu] for writing up and updating the doc and all for the discussion. A 
few more comments:

1. In section 1.2 for put key, can we change the input data to be specified 
with -i instead of -o?

2. Can we move section 5 (Pipeline) before section 2 (Container), which depends 
on the Pipeline?

3. We don't want to maintain an empty pool without any nodes. In section 3.1, 
can we add a required -nodes parameter for create pool while keeping the 
separate add/remove commands? When the number of nodes in a pool reaches 0, the 
pool will be removed as well.
 
4. Can we add an optional -metric parameter to filter only the metrics of 
interest? 

5. Can we change 4.3 to a node list command with a filter by status (default = 
All)?
hdfs scm -node list 

[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11558:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha3
Target Version/s: 3.0.0-alpha3, 2.8.1
  Status: Resolved  (was: Patch Available)

Thanks [~xiaobingo] for your contribution. Thanks [~szetszwo],  
[~hanishakoneru] and [~arpitagarwal] for your review and discussion.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965062#comment-15965062
 ] 

Andrew Wang commented on HDFS-11615:


These metrics are normally consumed by a metrics system and regraphed, so I 
don't think human parsing is that important.

On a related note, I like to put the unit (e.g. "Nanos") into the metric and 
variable name, since otherwise I have to look it up each time. Possible to do 
that here too?
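
To make the point concrete, a minimal self-contained sketch (the variable and 
metric names here are assumptions, not the patch): a sub-millisecond lock hold 
rounds to zero under millisecond timing but is visible to 
{{System.nanoTime()}}, and the unit lives in the name:
{code}
// Hedged sketch: time a lock hold with nanosecond precision and keep the
// unit in the name. Not the actual FSNamesystemLock wiring.
public class LockTimingSketch {
  public static void main(String[] args) throws InterruptedException {
    Object fsLock = new Object(); // stand-in for FSNamesystemLock
    long lockHeldStartNanos = System.nanoTime();
    synchronized (fsLock) {
      Thread.sleep(0, 100_000); // ~0.1 ms of work: rounds to 0 in millis
    }
    long lockHeldTimeNanos = System.nanoTime() - lockHeldStartNanos;
    // A metric named e.g. "LockHeldTimeNanos" would record this value.
    System.out.println("lockHeldTimeNanos=" + lockHeldTimeNanos);
  }
}
{code}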

> FSNamesystemLock metrics can be inaccurate due to millisecond precision
> ---
>
> Key: HDFS-11615
> URL: https://issues.apache.org/jira/browse/HDFS-11615
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-11615.000.patch
>
>
> Currently the {{FSNamesystemLock}} metrics created in HDFS-10872 track the 
> lock hold time using {{Timer.monotonicNow()}}, which has millisecond-level 
> precision. However, many of these operations hold the lock for less than a 
> millisecond, making these metrics inaccurate. We should instead use 
> {{System.nanoTime()}} for higher accuracy.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965054#comment-15965054
 ] 

Mingliang Liu edited comment on HDFS-11558 at 4/11/17 10:38 PM:


Committed to {{trunk}} branch. Xiaobing, can you upload the 2.x patches? I see 
minor conflicts when backporting. Thanks,


was (Author: liuml07):
Committed to {{trunk}} branch. Keep this open for {{branch-2}} and 
{{branch-2.8}} changes. Xiaobing, can you upload the 2.x patches? I see minor 
conflicts when backporting. Thanks,

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965061#comment-15965061
 ] 

Hadoop QA commented on HDFS-11642:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.ozone.scm.node.TestContainerPlacement |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11642 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862914/HDFS-11642-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a315f2a90cd9 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 349a19b |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19049/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19049/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19049/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
> 

[jira] [Updated] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus

2017-04-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11565:
---
Attachment: HDFS-11565.003.patch

Sure, good idea Wei-chiu, added some Precondition checks and new unit tests. 
Let's hope precommit works this time.

> Use compact identifiers for built-in ECPolicies in HdfsFileStatus
> -
>
> Key: HDFS-11565
> URL: https://issues.apache.org/jira/browse/HDFS-11565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, 
> HDFS-11565.003.patch
>
>
> Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo:
> {quote}
> From looking at the protos, one other question I had is about the overhead of 
> these protos when using the hardcoded policies. There are a bunch of strings 
> and ints, which can be kind of heavy since they're added to each 
> HdfsFileStatus. Should we make the built-in ones identified by purely an ID, 
> with these fully specified protos used for the pluggable policies?
> {quote}
> {quote}
> Sounds like this could be considered separately because, for either built-in 
> policies or plugged-in policies, the full meta info is maintained either by 
> the code or in the persisted fsimage, so identifying them purely by an ID 
> should work fine. If we agree, we could refactor the code you mentioned above 
> separately.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-11641:
---
Attachment: HDFS-11641.1.patch

Merely undid a 1-line change in FSDirAttrOp for creating the toRemove list.

> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.1.patch, HDFS-11641.patch
>
>
> Audit logging operations create a HdfsFileStatus but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  A HdfsFileStatus is more 
> expensive to create, ex. multiple node to root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.
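 
For illustration, a hedged sketch of the cheaper object the loggers actually 
need; the values below are placeholders, not the real FSDirectory wiring:
{code}
// Hedged sketch: audit loggers only need a plain FileStatus, which can be
// built directly without the extra HdfsFileStatus feature scans.
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class AuditStatusSketch {
  public static void main(String[] args) {
    FileStatus status = new FileStatus(
        1024L,                       // length (placeholder)
        false,                       // isDirectory
        3,                           // replication
        128L << 20,                  // block size
        System.currentTimeMillis(),  // modification time
        new Path("/user/example/file"));
    System.out.println(status.getPath());
  }
}
{code}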



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965054#comment-15965054
 ] 

Mingliang Liu commented on HDFS-11558:
--

Committed to {{trunk}} branch. Keep this open for {{branch-2}} and 
{{branch-2.8}} changes. Xiaobing, can you upload the 2.x patches? I see minor 
conflicts when backporting. Thanks,

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.

2017-04-11 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965029#comment-15965029
 ] 

Rushabh S Shah edited comment on HDFS-11634 at 4/11/17 10:08 PM:
-

One minor nit:
 There are a couple of typos of 'oterator' (for 'iterator').

One question:
 I don't understand why {{sumBlocks}} was introduced in the 
{{DatanodeDescriptor#BlockIterator}} constructor in the latest patch.
What corner case were you trying to fix?


was (Author: shahrs87):
Minor nit:
Couple of typos for 'oterator'
I don't understand why {{sumBlocks}} was introduced in 
{{DatanodeDescriptor#BlockIterator}} constructor in latest patch.
What corner case were you trying to fix ?

> Optimize BlockIterator when iterating starts in the middle.
> 
>
> Key: HDFS-11634
> URL: https://issues.apache.org/jira/browse/HDFS-11634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch
>
>
> {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a 
> randomly selected {{startBlock}} index. It creates an iterator which points 
> to the first block and then skips all blocks until {{startBlock}}. It is 
> inefficient when DN has multiple storages. Instead of skipping blocks one by 
> one we can skip entire storages. Should be more efficient on average.
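 
A hedged sketch of the skipping idea, with a list of per-storage block counts 
standing in for the real storages (illustrative only, not the actual patch):
{code}
// Hedged sketch: subtract whole storages from startBlock instead of
// advancing block-by-block. O(#storages) instead of O(startBlock).
import java.util.Arrays;
import java.util.List;

public class SkipByStorageSketch {
  // Returns the storage index where iteration should begin; on return,
  // startBlock has been reduced to the offset within that storage.
  static int seekToStorage(List<Integer> blocksPerStorage, int startBlock) {
    int storageIndex = 0;
    for (int count : blocksPerStorage) {
      if (startBlock < count) {
        break;                 // startBlock falls inside this storage
      }
      startBlock -= count;     // skip the whole storage in one step
      storageIndex++;
    }
    return storageIndex;
  }

  public static void main(String[] args) {
    // Storages with 10, 20 and 30 blocks; global index 25 lands in storage 1.
    System.out.println(seekToStorage(Arrays.asList(10, 20, 30), 25));
  }
}
{code}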



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.

2017-04-11 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965029#comment-15965029
 ] 

Rushabh S Shah commented on HDFS-11634:
---

Minor nit:
There are a couple of typos of 'oterator' (for 'iterator').
I don't understand why {{sumBlocks}} was introduced in the 
{{DatanodeDescriptor#BlockIterator}} constructor in the latest patch.
What corner case were you trying to fix?

> Optimize BlockIterator when iterating starts in the middle.
> 
>
> Key: HDFS-11634
> URL: https://issues.apache.org/jira/browse/HDFS-11634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch
>
>
> {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a 
> randomly selected {{startBlock}} index. It creates an iterator which points 
> to the first block and then skips all blocks until {{startBlock}}. It is 
> inefficient when DN has multiple storages. Instead of skipping blocks one by 
> one we can skip entire storages. Should be more efficient on average.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11503) Integrate Chocolate Cloud RS coder implementation

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965022#comment-15965022
 ] 

Andrew Wang commented on HDFS-11503:


Hi [~sw0rdf1sh],

We already have assignees actively working on HADOOP-13200 and the subtasks of 
HDFS-7337, so I think the biggest contribution would be if you and your team 
could help review the design doc and patches. Would be much appreciated!

> Integrate Chocolate Cloud RS coder implementation
> -
>
> Key: HDFS-11503
> URL: https://issues.apache.org/jira/browse/HDFS-11503
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Marcell Feher
> Attachments: HDFS-11503.patch
>
>
> Quote from Marcell on HDFS-7285:
> First of all let me introduce ourselves: we are Chocolate Cloud from Denmark, 
> we use erasure coding to improve storage solutions. We already have 
> Reed-Solomon and Random Linear Network Coding backends for Liberasurecode, 
> and now we are at the final stage of developing our RS plugin to HDFS-EC. The 
> performance of our plugin is similar to ISA-L's, in some configurations we 
> are better, in others we are worse (our initial speed comparison charts can 
> be found here: https://www.chocolate-cloud.cc/Plugins/HDFS-EC/hdfs.html).
> We would like our plugin to become officially supported in Hadoop 3.0. We can 
> already provide a preliminary version of our (native) library and a patch 
> with the necessary glue code for the next alpha release.
> I'd like to know your thoughts about whether it's possible and how it could 
> be achieved.
> P.S: I'm happy to share more details if there's interest



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11504:
--
Attachment: HDFS-11504-HDFS-7240.002.patch

Fix the checkstyle issue.

> Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
> ---
>
> Key: HDFS-11504
> URL: https://issues.apache.org/jira/browse/HDFS-11504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11504-HDFS-7240.001.patch, 
> HDFS-11504-HDFS-7240.002.patch
>
>
> The signatures of the APIs are listed below. This allows SCM to 
> 1) allocateBlock for client and maintain the key->container mapping in level 
> DB in addition to the existing container to pipeline mapping in level DB. 
> 2) return the pipeline of a block based on the key.
> 3) remove the block based on the key of the block. 
> {code}
>  allocateBlock(long size)
>  getBlock(key);
> void deleteBlock(key);
> {code}
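 
For readability, a hedged Java sketch of what such an interface might look 
like; the return and key types were elided above, so everything here is 
illustrative, not the patch:
{code}
// Hedged sketch of the three SCM block APIs. Type and method names are
// assumptions filled in for illustration only.
import java.io.IOException;

interface ScmBlockApiSketch {
  AllocatedBlock allocateBlock(long size) throws IOException; // place a block
  AllocatedBlock getBlock(String key) throws IOException;     // find pipeline
  void deleteBlock(String key) throws IOException;            // drop mapping
}

// Hypothetical result type: the key plus where the block lives.
class AllocatedBlock {
  String key;
  String containerName;
}
{code}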



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11190) Namenode support for data stored in external stores.

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965000#comment-15965000
 ] 

Hadoop QA commented on HDFS-11190:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 4s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
29s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-9806 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-9806 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 11s{color} | {color:orange} root: The patch generated 15 new + 811 unchanged 
- 2 fixed = 826 total (was 813) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
36s{color} | {color:green} hadoop-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11190 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862906/HDFS-11190-HDFS-9806.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle 

[jira] [Updated] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-11 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11338:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

Committed to branch!

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch, HDFS-11338-HDFS-10285-02.patch, 
> HDFS-11338-HDFS-10285-03.patch, HDFS-11338-HDFS-10285-04.patch, 
> HDFS-11338-HDFS-10285-05.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So it takes a longer time to finish some tests, and this leads to the 
> timeout failures.
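 
One common remedy, sketched below under the assumption that the satisfier 
thread exits promptly on interrupt (not necessarily what the patch does): 
interrupt before joining, so the join(3000) rarely waits out the full timeout:
{code}
// Hedged sketch: interrupt the worker before joining so shutdown is fast.
public class ShutdownSketch {
  public static void main(String[] args) throws InterruptedException {
    Thread satisfier = new Thread(() -> {
      try {
        Thread.sleep(60_000);          // simulated long-running SPS work
      } catch (InterruptedException ie) {
        // exit promptly when asked to stop
      }
    });
    satisfier.start();
    satisfier.interrupt();             // request shutdown first...
    satisfier.join(3000);              // ...so this returns almost at once
    System.out.println("alive=" + satisfier.isAlive());
  }
}
{code}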



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-11 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964982#comment-15964982
 ] 

Uma Maheswara Rao G commented on HDFS-11338:


+1 on the latest patch

> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch, HDFS-11338-HDFS-10285-02.patch, 
> HDFS-11338-HDFS-10285-03.patch, HDFS-11338-HDFS-10285-04.patch, 
> HDFS-11338-HDFS-10285-05.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So it takes a longer time to finish some tests, and this leads to the 
> timeout failures.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11530:
--
Attachment: HDFS-11530.008.patch

Attaching v008 patch to fix the issue. [~linyiqun] [~arpitagarwal] would you 
please take a look at this one some time? The v008 patch makes a few small 
changes, but they break some earlier assumptions.

More specifically, the {{TestBlockManager.testSafeModeIBR}} failure seems to be 
caused by {{add(Node node)}} being called more than once on the same node, with 
different storage type info. This violates the earlier assumption that a node 
only gets added once, or at least always with the same storage info. The reason 
seems to be that this test simulates datanode failures, which is why nodes get 
added twice. This looks like a valid case, so I added 
{{DFSTopologyNodeImpl.updateExistingDatanode}} to handle it.

On the other hand, the 
{{TestBlockStatsMXBean.testStorageTypeStatsWhenStorageFailed}} failure seems to 
be caused by the fact that the elements of the {{excludedNodes}} set do not 
have to be {{DatanodeDescriptor}}; they can also be {{DatanodeInfo}}. (Although 
it's a bit suspicious to me that this is the only test that failed due to 
this.) Assuming this is valid, I modified 
{{DFSNetworkTopology.chooseRandomWithStorageType}} to do some more checks.

I do appreciate any comments. Basically I made the changes here assuming the 
tests are doing the right thing, but it might very well be that the tests 
themselves should be modified instead. 
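
To show the shape of the accommodation, a hedged sketch with toy types (not 
the actual DFSTopologyNodeImpl code): merge storage info on a repeated add 
instead of assuming one add per node:
{code}
// Hedged sketch: tolerate add() being called again for a known node by
// merging its storage types in place.
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;

public class TopologyUpdateSketch {
  enum StorageType { DISK, SSD, ARCHIVE }

  private final Map<String, EnumSet<StorageType>> nodes = new HashMap<>();

  void add(String nodeId, EnumSet<StorageType> storages) {
    nodes.merge(nodeId, storages, (existing, incoming) -> {
      EnumSet<StorageType> merged = EnumSet.copyOf(existing);
      merged.addAll(incoming);     // update the existing datanode entry
      return merged;
    });
  }

  public static void main(String[] args) {
    TopologyUpdateSketch topo = new TopologyUpdateSketch();
    topo.add("dn1", EnumSet.of(StorageType.DISK));
    topo.add("dn1", EnumSet.of(StorageType.SSD)); // second add, new info
    System.out.println(topo.nodes.get("dn1"));    // [DISK, SSD]
  }
}
{code}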

> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
> URL: https://issues.apache.org/jira/browse/HDFS-11530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, 
> HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, 
> HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch
>
>
> The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. 
> But this method is contained in the new topology class {{DFSNetworkTopology}}, 
> which is specific to HDFS. We should update this and let 
> {{BlockPlacementPolicyDefault}} use the new way, since the original way is 
> inefficient.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11643) Balancer fencing fails when writing erasure coded lock file

2017-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964941#comment-15964941
 ] 

Andrew Wang commented on HDFS-11643:


Heads up [~Sammi], this relates to our discussion about a "replicated" EC 
policy. We could use it to force the balancer lock file to be a replicated file.

> Balancer fencing fails when writing erasure coded lock file
> ---
>
> Key: HDFS-11643
> URL: https://issues.apache.org/jira/browse/HDFS-11643
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> At startup, the balancer writes its hostname to the lock file and calls 
> hflush(). hflush is not supported for EC files, so this fails when the entire 
> filesystem is erasure coded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11643) Balancer fencing fails when writing erasure coded lock file

2017-04-11 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11643:
--

 Summary: Balancer fencing fails when writing erasure coded lock 
file
 Key: HDFS-11643
 URL: https://issues.apache.org/jira/browse/HDFS-11643
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Priority: Critical


At startup, the balancer writes its hostname to the lock file and calls 
hflush(). hflush is not supported for EC files, so this fails when the entire 
filesystem is erasure coded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11062) Ozone:SCM: Remove null command

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11062:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~yuanbo] for the contribution and all for the reviews. I've committed 
the patch to the feature branch. 

> Ozone:SCM: Remove null command
> --
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch, 
> HDFS-11062-HDFS-7240.002.patch, HDFS-11062-HDFS-7240.003.patch
>
>
> In the SCM protocol we have a nullCommand that gets returned as the default 
> case. Explore if we can remove this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964924#comment-15964924
 ] 

Hadoop QA commented on HDFS-11504:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project: The patch generated 10 new 
+ 0 unchanged - 0 fixed = 10 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Redundant nullcheck of 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.currentContainerName, which 
is known to be non-null in 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(long); 
redundant null check at BlockManagerImpl.java:[line 119] |
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.scm.node.TestContainerPlacement |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11504 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-11062) Ozone:SCM: Remove null command

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11062:
--
Summary: Ozone:SCM: Remove null command  (was: Ozone:SCM: Explore if we can 
remove nullcommand)

> Ozone:SCM: Remove null command
> --
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch, 
> HDFS-11062-HDFS-7240.002.patch, HDFS-11062-HDFS-7240.003.patch
>
>
> In the SCM protocol we have a nullCommand that gets returned as the default 
> case. Explore if we can remove this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand

2017-04-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964917#comment-15964917
 ] 

Xiaoyu Yao commented on HDFS-11062:
---

+1 for v3 patch. The unit test failures are unrelated and tracked by 
HDFS-11642. 
I will commit it shortly.


> Ozone:SCM: Explore if we can remove nullcommand
> ---
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch, 
> HDFS-11062-HDFS-7240.002.patch, HDFS-11062-HDFS-7240.003.patch
>
>
> In the SCM protocol we have a nullCommand that gets returned as the default 
> case. Explore if we can remove this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11642:
--
Status: Patch Available  (was: Open)

> Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
> 
>
> Key: HDFS-11642
> URL: https://issues.apache.org/jira/browse/HDFS-11642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11642-HDFS-7240.001.patch
>
>
> This was found in a recent Jenkins run on HDFS-7240.
> The cblock service RPC binding port (9810) was not cleaned up after the tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11642:
--
Attachment: HDFS-11642-HDFS-7240.001.patch

> Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
> 
>
> Key: HDFS-11642
> URL: https://issues.apache.org/jira/browse/HDFS-11642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11642-HDFS-7240.001.patch
>
>
> This was found in a recent Jenkins run on HDFS-7240.
> The cblock service RPC binding port (9810) was not cleaned up after the tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup

2017-04-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964913#comment-15964913
 ] 

Xiaoyu Yao commented on HDFS-11642:
---

https://builds.apache.org/job/PreCommit-HDFS-Build/18992/testReport/

{code}
Problem binding to [0.0.0.0:9810] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException
Stacktrace

java.net.BindException: Problem binding to [0.0.0.0:9810] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:543)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:1033)
at org.apache.hadoop.ipc.Server.(Server.java:2791)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:960)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:420)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:341)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:802)
at 
org.apache.hadoop.cblock.CBlockManager.startRpcServer(CBlockManager.java:201)
at org.apache.hadoop.cblock.CBlockManager.(CBlockManager.java:117)
at org.apache.hadoop.cblock.TestCBlockCLI.setup(TestCBlockCLI.java:57)
{code}
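
The usual shape of the fix, sketched under the assumption that CBlockManager 
exposes some stop/close hook (illustrative only, not the patch): release the 
listener in an @After so the next test can bind port 9810 again:
{code}
// Hedged sketch: per-test teardown that frees the RPC port.
import org.junit.After;
import org.junit.Test;

public class PortCleanupSketch {
  private AutoCloseable cblockManager; // stand-in for the real CBlockManager

  @After
  public void cleanup() throws Exception {
    if (cblockManager != null) {
      cblockManager.close(); // shuts the listener down, releasing 0.0.0.0:9810
      cblockManager = null;
    }
  }

  @Test
  public void placeholder() {
    // each test would create its own manager instance here
  }
}
{code}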

> Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
> 
>
> Key: HDFS-11642
> URL: https://issues.apache.org/jira/browse/HDFS-11642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This was found in a recent Jenkins run on HDFS-7240.
> The cblock service RPC binding port (9810) was not cleaned up after the tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup

2017-04-11 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11642:
-

 Summary: Block Storage: fix TestCBlockCLI and 
TestCBlockServerPersistence cleanup
 Key: HDFS-11642
 URL: https://issues.apache.org/jira/browse/HDFS-11642
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This was found in a recent Jenkins run on HDFS-7240.

The cblock service RPC binding port (9810) was not cleaned up after the tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964847#comment-15964847
 ] 

Daryn Sharp commented on HDFS-11641:


Ugh. I need to update the patch to remove 2 lines that slipped in. This is 
just a small piece carved off a larger change.

> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.patch
>
>
> Audit logging operations create a HdfsFileStatus but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  A HdfsFileStatus is more 
> expensive to create, ex. multiple node to root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964835#comment-15964835
 ] 

Hadoop QA commented on HDFS-11641:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 283 unchanged - 2 fixed = 285 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.TestApplyingStoragePolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11641 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862897/HDFS-11641.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b3069033eb9 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7d873c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19045/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19045/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19045/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11163:
--
Attachment: HDFS-11163-branch-2.003.patch

Sorry, it's my mistake; I didn't pull the latest code.
Attached a new patch for branch-2.

> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, 
> temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added new API in FileSystem to unset the storage policy. Once 
> policy is unset blocks should move back to the default storage policy.
> Currently the Mover is not moving file blocks which have a zero storage 
> policy ID
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}
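 
A hedged sketch of the proposed behavior (the default policy id below is an 
assumption for illustration, not the patch): resolve an unspecified policy to 
the default instead of returning early:
{code}
// Hedged sketch: map the "unspecified" id to the default policy so the
// Mover still schedules the file's blocks after the policy is unset.
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class MoverPolicySketch {
  static final byte DEFAULT_POLICY_ID = 7; // assumed default (HOT)

  static byte effectivePolicy(byte policyId) {
    return policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED
        ? DEFAULT_POLICY_ID
        : policyId;
  }

  public static void main(String[] args) {
    System.out.println(effectivePolicy(
        HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED)); // -> 7
  }
}
{code}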



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11504:
--
Status: Patch Available  (was: Open)

> Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
> ---
>
> Key: HDFS-11504
> URL: https://issues.apache.org/jira/browse/HDFS-11504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11504-HDFS-7240.001.patch
>
>
> The signatures of the APIs are listed below. They allow SCM to:
> 1) allocate a block for a client and maintain the key->container mapping in 
> LevelDB, in addition to the existing container-to-pipeline mapping in LevelDB;
> 2) return the pipeline of a block based on its key;
> 3) remove a block based on its key.
> {code}
>  allocateBlock(long size)
>  getBlock(key);
> void deleteBlock(key);
> {code}
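
A hedged Java sketch of what these three calls could look like as a caller-facing interface; the interface and type names (ScmBlockApi, AllocatedBlock, Pipeline) are illustrative assumptions, not the committed API:
{code}
import java.io.IOException;

// Illustrative only: ScmBlockApi, AllocatedBlock, and Pipeline are assumed
// placeholder names, not the patch's API.
public interface ScmBlockApi {
  /** Allocate a block of the given size; SCM records key->container in LevelDB. */
  AllocatedBlock allocateBlock(long size) throws IOException;

  /** Return the pipeline of the block identified by the given key. */
  Pipeline getBlock(String key) throws IOException;

  /** Remove the block identified by the given key. */
  void deleteBlock(String key) throws IOException;
}
{code}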



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11190) Namenode support for data stored in external stores.

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11190:
--
Status: Patch Available  (was: Open)

> Namenode support for data stored in external stores.
> 
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11190) Namenode support for data stored in external stores.

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11190:
--
Attachment: HDFS-11190-HDFS-9806.003.patch

> Namenode support for data stored in external stores.
> 
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11190) Namenode support for data stored in external stores.

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11190:
--
Status: Open  (was: Patch Available)

> Namenode support for data stored in external stores.
> 
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11190) Namenode support for data stored in external stores.

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11190:
--
Attachment: (was: HDFS-11190-HDFS-9806.003.patch)

> Namenode support for data stored in external stores.
> 
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11504:
--
Attachment: HDFS-11504-HDFS-7240.001.patch

> Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
> ---
>
> Key: HDFS-11504
> URL: https://issues.apache.org/jira/browse/HDFS-11504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11504-HDFS-7240.001.patch
>
>
> The signatures of the APIs are listed below. They allow SCM to:
> 1) allocate a block for a client and maintain the key->container mapping in 
> LevelDB, in addition to the existing container-to-pipeline mapping in LevelDB;
> 2) return the pipeline of a block based on its key;
> 3) remove a block based on its key.
> {code}
>  allocateBlock(long size)
>  getBlock(key);
> void deleteBlock(key);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11504:
--
Attachment: (was: HDFS-11504-HDFS-7240.001.patch)

> Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
> ---
>
> Key: HDFS-11504
> URL: https://issues.apache.org/jira/browse/HDFS-11504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> The signatures of the APIs are listed below. They allow SCM to:
> 1) allocate a block for a client and maintain the key->container mapping in 
> LevelDB, in addition to the existing container-to-pipeline mapping in LevelDB;
> 2) return the pipeline of a block based on its key;
> 3) remove a block based on its key.
> {code}
>  allocateBlock(long size)
>  getBlock(key);
> void deleteBlock(key);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs

2017-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11504:
--
Attachment: HDFS-11504-HDFS-7240.001.patch

Attached an initial patch with a unit test.

> Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
> ---
>
> Key: HDFS-11504
> URL: https://issues.apache.org/jira/browse/HDFS-11504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11504-HDFS-7240.001.patch
>
>
> The signatures of the APIs are listed below. They allow SCM to:
> 1) allocate a block for a client and maintain the key->container mapping in 
> LevelDB, in addition to the existing container-to-pipeline mapping in LevelDB;
> 2) return the pipeline of a block based on its key;
> 3) remove a block based on its key.
> {code}
>  allocateBlock(long size)
>  getBlock(key);
> void deleteBlock(key);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11608:
-
Target Version/s:   (was: 3.0.0-alpha3, 2.8.1)
   Fix Version/s: 2.7.4

+1 for the branch-2.7 patch.

I've committed it after running the affected unit tests with JDK7 locally. 
Thanks [~xiaobingo].

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch, HDFS-11608-branch-2.7.003.patch
>
>
> We've seen HDFS write crashes with huge block sizes. For example, when 
> writing a 3 GB file using a block size > 2 GB (e.g., 3 GB), the HDFS client 
> throws an out-of-memory exception and the DataNode reports an IOException. 
> After raising the heap size limit, a DFSOutputStream ResponseProcessor 
> exception is seen, followed by a broken pipe and pipeline recovery.
> The DN exception is given below:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
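
The bogus payload size (2147483128, just under 2^31) points at 32-bit arithmetic on block-relative offsets. Below is a minimal, self-contained illustration of how truncating a long remainder to int misbehaves once the block size exceeds 2 GB; the variable names are illustrative and this is not the actual client code:
{code}
public class PacketSizeOverflow {
  public static void main(String[] args) {
    long blockSize = 3L * 1024 * 1024 * 1024; // 3 GB block
    long bytesCurBlock = 0;                   // offset within the block
    // Unsafe: truncating the long remainder to int overflows past 2 GB.
    int broken = (int) (blockSize - bytesCurBlock);            // -1073741824
    // Safe: clamp to the int range before casting.
    int clamped = (int) Math.min(blockSize - bytesCurBlock,
                                 Integer.MAX_VALUE);           // 2147483647
    System.out.println("broken=" + broken + " clamped=" + clamped);
  }
}
{code}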



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-11 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964731#comment-15964731
 ] 

Xiaobing Zhou commented on HDFS-11608:
--

Posted the 2.7 patch. Thanks [~xyao] for committing it, and thanks to all for the reviews.

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch, HDFS-11608-branch-2.7.003.patch
>
>
> We've seen HDFS write crashes with huge block sizes. For example, when 
> writing a 3 GB file using a block size > 2 GB (e.g., 3 GB), the HDFS client 
> throws an out-of-memory exception and the DataNode reports an IOException. 
> After raising the heap size limit, a DFSOutputStream ResponseProcessor 
> exception is seen, followed by a broken pipe and pipeline recovery.
> The DN exception is given below:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-11 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11608:
-
Attachment: HDFS-11608-branch-2.7.003.patch

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch, HDFS-11608-branch-2.7.003.patch
>
>
> We've seen HDFS write crashes with huge block sizes. For example, when 
> writing a 3 GB file using a block size > 2 GB (e.g., 3 GB), the HDFS client 
> throws an out-of-memory exception and the DataNode reports an IOException. 
> After raising the heap size limit, a DFSOutputStream ResponseProcessor 
> exception is seen, followed by a broken pipe and pipeline recovery.
> The DN exception is given below:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) Datanodes should use a unique identifier when reading from external stores

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Summary: Datanodes should use a unique identifier when reading from 
external stores  (was: Datanodes use a unique identifier when reading from 
external stores)

> Datanodes should use a unique identifier when reading from external stores
> --
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>
> Ensure that datanodes read the correct (version of) file from an external 
> store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) Datanodes should use a unique identifier when reading from external stores

2017-04-11 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Description: Use a unique identifier when reading from external stores to 
ensure that datanodes read the correct (version of) file.  (was: Ensure that 
datanodes read the correct (version of) file from an external store.)

> Datanodes should use a unique identifier when reading from external stores
> --
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964682#comment-15964682
 ] 

Haohui Mai edited comment on HDFS-11641 at 4/11/17 5:30 PM:


Thanks for the patch! +1 pending Jenkins.


was (Author: wheat9):
+1 pending Jenkins.

> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.patch
>
>
> Audit logging operations create an HdfsFileStatus, but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  An HdfsFileStatus is more 
> expensive to create, e.g., multiple node-to-root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964682#comment-15964682
 ] 

Haohui Mai commented on HDFS-11641:
---

+1 pending Jenkins.

> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.patch
>
>
> Audit logging operations create an HdfsFileStatus, but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  An HdfsFileStatus is more 
> expensive to create, e.g., multiple node-to-root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-11641:
---
Status: Patch Available  (was: Open)

> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.patch
>
>
> Audit logging operations create an HdfsFileStatus, but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  An HdfsFileStatus is more 
> expensive to create, e.g., multiple node-to-root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-11641:
---
Attachment: HDFS-11641.patch

A very simple patch that swaps HdfsFileStatus for FileStatus. 
FSN#startFile is the only operation that still passes an HdfsFileStatus, since 
it has already computed one for the client.


> Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus
> --
>
> Key: HDFS-11641
> URL: https://issues.apache.org/jira/browse/HDFS-11641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-11641.patch
>
>
> Audit logging operations create an HdfsFileStatus, but audit logging promptly 
> converts it to a FileStatus to pass to the loggers.  An HdfsFileStatus is more 
> expensive to create, e.g., multiple node-to-root scans for feature info that 
> will only be discarded in the conversion to FileStatus.  Operations should 
> create a FileStatus to eliminate all the superfluous overhead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11641) Reduce cost of audit logging by using FileStatus instead of HdfsFileStatus

2017-04-11 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-11641:
--

 Summary: Reduce cost of audit logging by using FileStatus instead 
of HdfsFileStatus
 Key: HDFS-11641
 URL: https://issues.apache.org/jira/browse/HDFS-11641
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Audit logging operations create an HdfsFileStatus, but audit logging promptly 
converts it to a FileStatus to pass to the loggers.  An HdfsFileStatus is more 
expensive to create, e.g., multiple node-to-root scans for feature info that will 
only be discarded in the conversion to FileStatus.  Operations should create a 
FileStatus to eliminate all the superfluous overhead.
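
A hedged sketch of the idea, assuming the operation already has the raw attributes at hand: build the plain FileStatus the loggers consume directly instead of materializing an HdfsFileStatus first. The helper below is illustrative, not the attached patch:
{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Illustrative helper (not the attached patch): construct the cheap
// FileStatus for audit logging directly from attributes the operation
// already holds, skipping the HdfsFileStatus and its conversion.
class AuditStatusFactory {
  static FileStatus auditStatus(long length, boolean isDir, int replication,
      long blockSize, long mtime, long atime, FsPermission perm,
      String owner, String group, String src) {
    return new FileStatus(length, isDir, replication, blockSize,
        mtime, atime, perm, owner, group, new Path(src));
  }
}
{code}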






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-11 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964652#comment-15964652
 ] 

Chen Liang commented on HDFS-11530:
---


bq. I am thinking one reasonable way is that we can use DFSNetworkTopology only 
if the current block placement is the default way 
(BlockPlacementPolicyRackFaultTolerant). Otherwise, we use 
NetworkTopology#getInstance(conf).

Thanks [~linyiqun] for the analysis, nice catch. This makes sense to me. 
Overall the v007 patch LGTM, but it looks like the failed tests have something 
to do with the function I introduced in the v003 patch; I will look into it.
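
A hedged sketch of the proposed guard, assuming a DFSNetworkTopology factory mirroring NetworkTopology#getInstance(conf); the instanceof check is illustrative and the factory name is an assumption:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.net.DFSNetworkTopology;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
import org.apache.hadoop.net.NetworkTopology;

// Illustrative selection logic: use the HDFS-specific topology only for
// the default placement policy; fall back to the generic topology otherwise.
class TopologyChooser {
  static NetworkTopology choose(Configuration conf, BlockPlacementPolicy policy) {
    if (policy instanceof BlockPlacementPolicyDefault) {
      return DFSNetworkTopology.getInstance(conf); // assumed factory
    }
    return NetworkTopology.getInstance(conf);
  }
}
{code}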

> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
> URL: https://issues.apache.org/jira/browse/HDFS-11530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, 
> HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, 
> HDFS-11530.006.patch, HDFS-11530.007.patch
>
>
> The work for {{chooseRandomWithStorageType}} was merged in HDFS-11482, but 
> this method lives in the new HDFS-specific topology {{DFSNetworkTopology}}. 
> We should update {{BlockPlacementPolicyDefault}} to use the new method, 
> since the original way is inefficient.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964622#comment-15964622
 ] 

Hadoop QA commented on HDFS-11338:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 2s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11338 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862791/HDFS-11338-HDFS-10285-05.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e6d69f1469bb 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / ff9ccfe |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19042/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19042/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19042/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
>   

[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964553#comment-15964553
 ] 

Chris Nauroth commented on HDFS-11163:
--

[~surendrasingh], it looks like HDFS-11163-branch-2.002.patch still doesn't 
apply cleanly.

> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once 
> the policy is unset, blocks should move back to the default storage policy.
> Currently the mover does not move file blocks which have an unspecified 
> storage policy ID:
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964549#comment-15964549
 ] 

Hadoop QA commented on HDFS-11163:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-11163 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862850/HDFS-11163-branch-2.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19043/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once 
> the policy is unset, blocks should move back to the default storage policy.
> Currently the mover does not move file blocks which have an unspecified 
> storage policy ID:
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11503) Integrate Chocolate Cloud RS coder implementation

2017-04-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964470#comment-15964470
 ] 

Wei-Chiu Chuang commented on HDFS-11503:


I think you will also need to file an Apache Infra ticket to have 
ChocolateCloud-RS installed. An example is INFRA-10333, which requested that 
openssl-devel be installed on Jenkins. The ChocolateCloud-RS library is 
proprietary, so you may have to be specific about how to download it, obtain a 
license, and install it.

> Integrate Chocolate Cloud RS coder implementation
> -
>
> Key: HDFS-11503
> URL: https://issues.apache.org/jira/browse/HDFS-11503
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Marcell Feher
> Attachments: HDFS-11503.patch
>
>
> Quote from Marcell on HDFS-7285:
> First of all, let me introduce ourselves: we are Chocolate Cloud from 
> Denmark; we use erasure coding to improve storage solutions. We already have 
> Reed-Solomon and Random Linear Network Coding backends for Liberasurecode, 
> and we are now at the final stage of developing our RS plugin for HDFS-EC. The 
> performance of our plugin is similar to ISA-L's, in some configurations we 
> are better, in others we are worse (our initial speed comparison charts can 
> be found here: https://www.chocolate-cloud.cc/Plugins/HDFS-EC/hdfs.html).
> We would like our plugin to become officially supported in Hadoop 3.0. We can 
> already provide a preliminary version of our (native) library and a patch 
> with the necessary glue code for the next alpha release.
> I'd like to know your thoughts about whether it's possible and how it could 
> be achieved.
> P.S.: I'm happy to share more details if there's interest.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964338#comment-15964338
 ] 

Surendra Singh Lilhore commented on HDFS-11163:
---

Attached the branch-2 v2 patch.

> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once 
> the policy is unset, blocks should move back to the default storage policy.
> Currently the mover does not move file blocks which have an unspecified 
> storage policy ID:
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset

2017-04-11 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11163:
--
Attachment: HDFS-11163-branch-2.002.patch

> Mover should move the file blocks to default storage once policy is unset
> -
>
> Key: HDFS-11163
> URL: https://issues.apache.org/jira/browse/HDFS-11163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, 
> HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, 
> HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, 
> HDFS-11163-branch-2.002.patch, temp-YARN-6278.HDFS-11163.patch
>
>
> HDFS-9534 added a new API in FileSystem to unset the storage policy. Once 
> the policy is unset, blocks should move back to the default storage policy.
> Currently the mover does not move file blocks which have an unspecified 
> storage policy ID:
> {code}
>   // currently we ignore files with unspecified storage policy
>   if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
> return;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964328#comment-15964328
 ] 

Hadoop QA commented on HDFS-11530:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862831/HDFS-11530.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4a63c3ff3bf0 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / aabf08d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19041/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19041/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19041/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
>
