[jira] [Commented] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128369#comment-16128369
 ] 

Wei Zhou commented on HDFS-12303:
-

The reported JUnit test failures cannot be reproduced in my local environment. 
 Resubmitting the patch to trigger another round of the Hadoop QA check. Thanks!

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12303:

Status: Open  (was: Patch Available)

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12303:

Status: Patch Available  (was: Open)

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Updated] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-15 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12258:

Attachment: HDFS-12258.03.patch

Thanks [~rakeshr] for reviewing and for the comments! 
For the 4th issue, my solution is to ignore the user's input policy state and 
set it to {{DISABLED}}. What's your opinion here? Thanks!
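The approach described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and field names are simplified stand-ins, not the real HDFS types): whatever state the user supplies, the newly added policy is stored as {{DISABLED}}.

```java
public class AddPolicySketch {
    enum PolicyState { DISABLED, ENABLED }

    static class PolicyInfo {
        PolicyState state;
        PolicyInfo(PolicyState state) { this.state = state; }
    }

    // Whatever state the user supplied, a newly added policy is stored as
    // DISABLED until an administrator explicitly enables it.
    static PolicyInfo addErasureCodingPolicy(PolicyInfo userInput) {
        userInput.state = PolicyState.DISABLED;
        return userInput;
    }

    public static void main(String[] args) {
        PolicyInfo added =
            addErasureCodingPolicy(new PolicyInfo(PolicyState.ENABLED));
        System.out.println(added.state); // DISABLED
    }
}
```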

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled






[jira] [Commented] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128347#comment-16128347
 ] 

Hadoop QA commented on HDFS-12302:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12302 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882072/HDFS-12302.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e6daba63f5da 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75dd866 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20715/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20715/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20715/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: 

[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128340#comment-16128340
 ] 

Rakesh R commented on HDFS-12214:
-

Thanks [~surendrasingh] for the comments.
bq. In BlockManager#activate() we no need to start SPS thread, for HA and 
Non-HA both the case.
The {{FSNamesystem#startActiveServices()}} flow applies to HA and starts SPS 
only on the active NN; the non-HA case starts SPS via 
{{BlockManager#activate()}}, hence the explicit check. Does this make sense?
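The two start paths can be sketched as below. This is illustrative only, with names loosely modeled on the classes discussed above (not the real HDFS code): in HA mode, only the active-services path starts SPS; in non-HA mode, {{activate()}} starts it, guarded by an explicit HA check so it is not started twice.

```java
public class SpsStartSketch {
    static boolean spsRunning = false;

    static void startSps() { spsRunning = true; }

    // Non-HA path: stands in for BlockManager#activate().
    static void activate(boolean haEnabled) {
        if (!haEnabled) { // the explicit check discussed in the comment
            startSps();
        }
    }

    // HA path: stands in for FSNamesystem#startActiveServices() on the
    // active NN.
    static void startActiveServices() { startSps(); }

    public static void main(String[] args) {
        activate(true);                 // HA: activate() does not start SPS
        System.out.println(spsRunning); // false
        startActiveServices();          // the active NN starts it
        System.out.println(spsRunning); // true
    }
}
```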

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"






[jira] [Updated] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-15 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11082:
-
Attachment: HDFS-11082.004.patch

Hi Andrew, thanks for the quick review. 004.patch is uploaded to improve the 
"-setPolicy" help. 

HDFS-12308 has been created for tracking purposes. 

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch, 
> HDFS-11082.003.patch, HDFS-11082.004.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Created] (HDFS-12308) Erasure Coding: Provide DistributedFileSystem & DFSClient API to return the effective EC policy on a directory or file, including the replication policy

2017-08-15 Thread SammiChen (JIRA)
SammiChen created HDFS-12308:


 Summary: Erasure Coding: Provide DistributedFileSystem &  
DFSClient API to return the effective EC policy on a directory or file, 
including the replication policy
 Key: HDFS-12308
 URL: https://issues.apache.org/jira/browse/HDFS-12308
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
 Environment: Provide DistributedFileSystem &  DFSClient API to return 
the effective EC policy on a directory or file, including the replication 
policy. API names will be like {{getNominalErasureCodingPolicy(PATH)}} and 
{{getAllNominalErasureCodingPolicies}}. 
Reporter: SammiChen









[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128334#comment-16128334
 ] 

Wei-Chiu Chuang commented on HDFS-11738:


Thanks, the latest patch looks really good to me.

On a separate note,
I really don't like the lengthy {{DFSInputStream#hedgedFetchBlockByteRange()}} 
method, though: the if/else blocks contain code that looks similar. I feel it 
could use some refactoring, or the method could be split in two. But that's 
secondary to this patch.

Would anyone else like to chime in? Thanks

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch, 
> HDFS-11738-03.patch, HDFS-11738-04.patch
>
>
> Scenario: 
> Same as HDFS-11708.
> During hedged read:
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread keeps refetching 
> locations in a loop and gets stuck there until all 3 retries to fetch 
> locations are exhausted, which consumes ~20 seconds with exponential retry 
> backoff.






[jira] [Updated] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs

2017-08-15 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12269:

Attachment: HDFS-12269.002.patch

> Better to return a Map rather than HashMap in getErasureCodingCodecs
> 
>
> Key: HDFS-12269
> URL: https://issues.apache.org/jira/browse/HDFS-12269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12269.001.patch, HDFS-12269.002.patch
>
>
> Currently the getErasureCodingCodecs function defined in ClientProtocol 
> returns a HashMap:
> {code:java}
>   HashMap getErasureCodingCodecs() throws IOException;
> {code}
> It's better to return a Map.
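The API-design point above can be illustrated with a simplified stand-in (this is not the real ClientProtocol, and the codec/coder names are made up): declaring the return type as the {{Map}} interface hides the concrete container, so the implementation can later switch to another map type without touching callers.

```java
import java.util.HashMap;
import java.util.Map;

public class CodecRegistry {
    // Declaring the return type as Map (not HashMap) keeps the concrete
    // container an implementation detail. The entries below are
    // illustrative placeholders, not real HDFS codec names.
    public static Map<String, String> getErasureCodingCodecs() {
        Map<String, String> codecs = new HashMap<>();
        codecs.put("rs", "rs_native,rs_java");
        codecs.put("xor", "xor_native,xor_java");
        return codecs;
    }

    public static void main(String[] args) {
        Map<String, String> codecs = getErasureCodingCodecs();
        System.out.println(codecs.get("rs")); // rs_native,rs_java
    }
}
```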






[jira] [Commented] (HDFS-12307) Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails

2017-08-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128290#comment-16128290
 ] 

Weiwei Yang commented on HDFS-12307:


I took a quick look; the root cause appears to be that the container server 
port ({{XceiverServer}} in {{OzoneContainer}}) changes during the DN restart, 
because the mini Ozone cluster is configured to use random ports. Therefore, 
when the client tries to acquire an xceiver client in 
{{ChunkGroupInputStream#getFromKsmKeyInfo}}, it uses the old port that was 
read from the pipeline DatanodeID.
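The staleness mechanism can be demonstrated in isolation (this is plain JDK code, not Ozone code): binding to port 0 asks the OS for a fresh ephemeral port each time, so a port recorded before a restart usually no longer matches the restarted server.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Bind to port 0 so the OS picks a fresh ephemeral port, analogous to
    // the mini cluster's random-port configuration.
    static int bindOnce() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int before = bindOnce(); // the port "cached" in the pipeline info
        int after = bindOnce();  // the port after the simulated restart
        // The two ports usually differ, so a client reusing `before`
        // would fail to reach the restarted server.
        System.out.println("before=" + before + " after=" + after);
    }
}
```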

> Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails
> ---
>
> Key: HDFS-12307
> URL: https://issues.apache.org/jira/browse/HDFS-12307
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>
> It seems this UT constantly fails with the following error:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
>   at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:146)
>   at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
>   at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1579)
>   at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1200)
>   at 
> org.apache.hadoop.ozone.web.exceptions.OzoneException.parse(OzoneException.java:248)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.executeGetKey(OzoneBucket.java:395)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.getKey(OzoneBucket.java:321)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:288)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:265)
> {noformat}






[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128285#comment-16128285
 ] 

Vinayakumar B commented on HDFS-11738:
--

The test failure is unrelated. 
The checkstyle warning is about line length and can be corrected during commit.

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch, 
> HDFS-11738-03.patch, HDFS-11738-04.patch
>
>
> Scenario: 
> Same as HDFS-11708.
> During hedged read:
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread keeps refetching 
> locations in a loop and gets stuck there until all 3 retries to fetch 
> locations are exhausted, which consumes ~20 seconds with exponential retry 
> backoff.






[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128282#comment-16128282
 ] 

Akira Ajisaka commented on HDFS-11882:
--

Thanks [~andrew.wang] for the update! Mostly looks good to me.
Minor nit:
{code}
final long partialStripeBlockLength = (fullStripeLength / numDataBlocks) + 
partialCellSize;
{code}
I prefer {{numFullStripes * cellSize}} rather than {{fullStripeLength / 
numDataBlocks}}.
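The two forms in the nit are arithmetically equivalent, since fullStripeLength = numFullStripes * cellSize * numDataBlocks. A small sketch with hypothetical numbers (this is standalone arithmetic, not the actual patch code) shows the identity:

```java
public class StripeMath {
    // Both expressions compute the same length, because
    // fullStripeLength = numFullStripes * cellSize * numDataBlocks.
    static long partialStripeBlockLength(long numFullStripes, long cellSize,
                                         long numDataBlocks,
                                         long partialCellSize) {
        long fullStripeLength = numFullStripes * cellSize * numDataBlocks;
        long viaDivision = fullStripeLength / numDataBlocks + partialCellSize;
        long viaMultiplication = numFullStripes * cellSize + partialCellSize;
        if (viaDivision != viaMultiplication) {
            throw new IllegalStateException("forms disagree");
        }
        return viaMultiplication;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 3 full stripes of 1 MiB cells over 6 data
        // blocks, plus a 4 KiB partial cell.
        System.out.println(partialStripeBlockLength(3, 1 << 20, 6, 4096));
        // prints 3149824 (3 * 1048576 + 4096)
    }
}
```

The multiplicative form is arguably clearer because it avoids relying on the division being exact.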

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The test below 
> was removed by HDFS-11823; however, this type of error can happen in a real 
> cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Created] (HDFS-12307) Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails

2017-08-15 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12307:
--

 Summary: Ozone: TestKeys#testPutAndGetKeyWithDnRestart fails
 Key: HDFS-12307
 URL: https://issues.apache.org/jira/browse/HDFS-12307
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang


It seems this UT constantly fails with the following error:

{noformat}
org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
XceiverClient.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
at 
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:146)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1579)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1200)
at 
org.apache.hadoop.ozone.web.exceptions.OzoneException.parse(OzoneException.java:248)
at 
org.apache.hadoop.ozone.web.client.OzoneBucket.executeGetKey(OzoneBucket.java:395)
at 
org.apache.hadoop.ozone.web.client.OzoneBucket.getKey(OzoneBucket.java:321)
at 
org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:288)
at 
org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:265)
{noformat}






[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Attachment: HDFS-12302.002.patch

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When I read the code that instantiates the FsDatasetImpl object during the 
> DataNode start process, I found that the getVolumeMap function actually 
> cannot get the ReplicaMap info for each fsVolume, because the fsVolume's 
> bpSlices has not been initialized yet at that point. The detailed code is 
> as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> Then I added some info logging and started the DataNode; the log output 
> accords with the code description above. The detailed log is as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In the end, I think the getVolumeMap call for each fsVolume is not necessary 
> when instantiating the FsDatasetImpl object.
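The behavior described above reduces to a simple fact: iterating the values of a not-yet-populated map does nothing. A minimal sketch (simplified stand-in names, not the real FsVolumeImpl types):

```java
import java.util.HashMap;
import java.util.Map;

public class VolumeMapSketch {
    // Stands in for FsVolumeImpl#getVolumeMap: if bpSlices has not been
    // populated yet, the loop runs zero times and contributes nothing.
    public static int getVolumeMap(Map<String, String> bpSlices) {
        int visited = 0;
        for (String slice : bpSlices.values()) {
            visited++; // the real code would call s.getVolumeMap(...) here
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, String> bpSlices = new HashMap<>(); // not yet initialized
        System.out.println(getVolumeMap(bpSlices)); // 0: the call is a no-op
    }
}
```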






[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Status: Patch Available  (was: Open)

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When I read the code that instantiates the FsDatasetImpl object during the 
> DataNode start process, I found that the getVolumeMap function actually 
> cannot get the ReplicaMap info for each fsVolume, because the fsVolume's 
> bpSlices has not been initialized yet at that point. The detailed code is 
> as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> Then I added some info logging and started the DataNode; the log output 
> accords with the code description above. The detailed log is as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap call for each fsVolume is 
> unnecessary when instantiating the FsDatasetImpl object.
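The empty-iteration behavior described above can be sketched as follows. This is a minimal, simplified stand-in (class and field names here are illustrative, not the real FsVolumeImpl internals): calling getVolumeMap before any block-pool slice has been registered silently does nothing, because iterating an empty map never enters the loop body.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VolumeMapSketch {
    // stands in for FsVolumeImpl#bpSlices, empty at construction time
    final Map<String, String> bpSlices = new ConcurrentHashMap<>();
    int replicasLoaded = 0;

    void getVolumeMap() {
        // iterating an empty map is a silent no-op
        for (String slice : bpSlices.values()) {
            replicasLoaded++; // stands in for s.getVolumeMap(...)
        }
    }

    public static void main(String[] args) {
        VolumeMapSketch v = new VolumeMapSketch();
        v.getVolumeMap();                      // before bpSlices is populated
        System.out.println(v.replicasLoaded);  // prints 0: nothing was loaded
        v.bpSlices.put("BP-1", "slice");       // slice registered later
        v.getVolumeMap();
        System.out.println(v.replicasLoaded);  // prints 1
    }
}
```

This matches the `bpSlices:0` log lines quoted above: the call happens before the slices exist, so it cannot contribute anything to the ReplicaMap.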



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-12302:
--
Status: Open  (was: Patch Available)

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   While reading the code that instantiates the FsDatasetImpl object during 
> DataNode startup, I found that the getVolumeMap function cannot actually 
> populate the ReplicaMap for each fsVolume, because the fsVolume's bpSlices 
> has not been initialized at that point. The relevant code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> I then added some info-level logging and started the DataNode; the log 
> output matches the code analysis above. The relevant log lines are as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap call for each fsVolume is 
> unnecessary when instantiating the FsDatasetImpl object.






[jira] [Commented] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128220#comment-16128220
 ] 

Hadoop QA commented on HDFS-12306:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 33s{color} | {color:orange} root: The patch generated 26 new + 63 unchanged 
- 24 fixed = 89 total (was 87) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
| JDK v1.8.0_144 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| JDK v1.7.0_131 Failed junit tests | 

[jira] [Commented] (HDFS-12304) Remove unused parameter from FsDatasetImpl#addVolume

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128203#comment-16128203
 ] 

Hadoop QA commented on HDFS-12304:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12304 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882048/HDFS-12304.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 26b1476dbdf9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75dd866 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20714/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20714/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20714/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unused parameter from FsDatasetImpl#addVolume
> 
>
> Key: HDFS-12304
> URL: 

[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128174#comment-16128174
 ] 

Andrew Wang commented on HDFS-11082:


Thanks for revving Sammi, only a single nit, looks good otherwise:

* We should update the help to make it clear that -policy and -replicate are 
optional arguments, e.g.

{noformat}
 [-setPolicy -path <path> [-policy <policy>] [-replicate]]
{noformat}

Please file and link the follow-on JIRA here for tracking too. Great work!

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch, 
> HDFS-11082.003.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Comment Edited] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128174#comment-16128174
 ] 

Andrew Wang edited comment on HDFS-11082 at 8/16/17 1:08 AM:
-

Thanks for revving Sammi, only a single nit, looks good otherwise:

* We should update the help to make it clear that -policy and -replicate are 
optional arguments, e.g.

{noformat}
 [-setPolicy -path <path> [-policy <policy>] [-replicate]]
{noformat}

Please file and link the follow-on JIRA here for tracking too. Great work!


was (Author: andrew.wang):
Thanks for revving Sammi, only a single nit, looks good otherwise:

* We should update the help to make it clear that -policy and -replicate are 
optional arguments, e.g.

{noformat}
 [-setPolicy -path <path> [-policy <policy>] [-replicate]]
{noformat}

Please file and link the follow-on JIRA for here for tracking too. Great work!

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch, 
> HDFS-11082.003.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Commented] (HDFS-12302) FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl object

2017-08-15 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128170#comment-16128170
 ] 

liaoyuxiangqin commented on HDFS-12302:
---

[~vagarychen] Thanks for watching this issue. Could you help me confirm 
whether this problem really exists? Thanks!

> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl 
> object
> ---
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
>   While reading the code that instantiates the FsDatasetImpl object during 
> DataNode startup, I found that the getVolumeMap function cannot actually 
> populate the ReplicaMap for each fsVolume, because the fsVolume's bpSlices 
> has not been initialized at that point. The relevant code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
> final RamDiskReplicaTracker ramDiskReplicaMap)
>   throws IOException {
> LOG.info("Added volume -  getVolumeMap bpSlices:" + 
> bpSlices.values().size());
> for(BlockPoolSlice s : bpSlices.values()) {
>   s.getVolumeMap(volumeMap, ramDiskReplicaMap);
> }
>   }
> {code}
> I then added some info-level logging and started the DataNode; the log 
> output matches the code analysis above. The relevant log lines are as follows:
>  INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: 
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap 
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap call for each fsVolume is 
> unnecessary when instantiating the FsDatasetImpl object.






[jira] [Commented] (HDFS-12260) StreamCapabilities.StreamCapability should be public.

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128172#comment-16128172
 ] 

Andrew Wang commented on HDFS-12260:


Are we changing hasCapability as part of this? Part of why it's string-typed 
is so we can query for unknown capabilities, which lets other subclasses add 
new capabilities without modifying the core Hadoop code.
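The design point about string-typed capability queries can be sketched as follows. This is a simplified stand-in (the interface, class, and capability names here are illustrative, not the real Hadoop StreamCapabilities API): a subclass can advertise a capability the core code has never heard of, with no change to any shared enum.

```java
interface CapabilityStream {
    boolean hasCapability(String capability);
}

class CoreStream implements CapabilityStream {
    public boolean hasCapability(String capability) {
        // the only capability the core code knows about
        return "hflush".equals(capability);
    }
}

class VendorStream extends CoreStream {
    @Override
    public boolean hasCapability(String capability) {
        // a capability added by a third-party subclass, no core change needed
        return "vendor:fast-copy".equals(capability)
                || super.hasCapability(capability);
    }
}

public class CapabilityDemo {
    public static void main(String[] args) {
        CapabilityStream s = new VendorStream();
        System.out.println(s.hasCapability("hflush"));           // true
        System.out.println(s.hasCapability("vendor:fast-copy")); // true
        System.out.println(s.hasCapability("unknown"));          // false
    }
}
```

With an enum-typed parameter instead, the "vendor:fast-copy" query could not even be expressed without editing the shared enum first.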

> StreamCapabilities.StreamCapability should be public.
> -
>
> Key: HDFS-12260
> URL: https://issues.apache.org/jira/browse/HDFS-12260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>
> Clients should use the {{StreamCapability}} enum instead of a raw string to 
> query the capabilities of an OutputStream, for better type safety, IDE 
> support, etc.






[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128116#comment-16128116
 ] 

Hadoop QA commented on HDFS-10787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
45s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
11s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
41s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-10787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882011/HDFS-10787.HDFS-8707.002.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 95eacbe9cb9d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 5cee747 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20712/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20712/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James 

[jira] [Updated] (HDFS-12304) Remove unused parameter from FsDatasetImpl#addVolume

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12304:
--
Attachment: HDFS-12304.001.patch

> Remove unused parameter from FsDatasetImpl#addVolume
> 
>
> Key: HDFS-12304
> URL: https://issues.apache.org/jira/browse/HDFS-12304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-12304.001.patch
>
>
> FsDatasetImpl has this method
> {code}
>   private void addVolume(Collection<StorageLocation> dataLocations,
>   Storage.StorageDirectory sd) throws IOException
> {code}
> The {{dataLocations}} parameter was introduced in HDFS-6740 and was used to 
> obtain storage type info. However, HDFS-10637 changed how the storage type 
> is obtained in this method, so dataLocations is no longer used here at all. 
> We should probably remove it for a cleaner interface.
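The cleanup described above can be sketched as follows. This is a hypothetical, simplified stand-in (the types and method bodies here are illustrative, not the real FsDatasetImpl code): the unused parameter can be dropped without changing behavior, because nothing in the method reads it.

```java
import java.util.Collection;
import java.util.Collections;

public class AddVolumeSketch {
    static class StorageDirectory {
        final String root;
        StorageDirectory(String root) { this.root = root; }
    }

    // Before: dataLocations is accepted but never consulted.
    static String addVolumeOld(Collection<String> dataLocations,
                               StorageDirectory sd) {
        return "added:" + sd.root; // derived from sd alone
    }

    // After: same behavior, cleaner interface.
    static String addVolume(StorageDirectory sd) {
        return "added:" + sd.root;
    }

    public static void main(String[] args) {
        StorageDirectory sd = new StorageDirectory("/data1");
        // both variants produce the same result, confirming the parameter is dead
        System.out.println(addVolumeOld(Collections.emptyList(), sd)
                .equals(addVolume(sd))); // prints true
    }
}
```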






[jira] [Updated] (HDFS-12304) Remove unused parameter from FsDatasetImpl#addVolume

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12304:
--
Status: Patch Available  (was: Open)

> Remove unused parameter from FsDatasetImpl#addVolume
> 
>
> Key: HDFS-12304
> URL: https://issues.apache.org/jira/browse/HDFS-12304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-12304.001.patch
>
>
> FsDatasetImpl has this method
> {code}
>   private void addVolume(Collection<StorageLocation> dataLocations,
>   Storage.StorageDirectory sd) throws IOException
> {code}
> The {{dataLocations}} parameter was introduced in HDFS-6740 and was used to 
> obtain storage type info. However, HDFS-10637 changed how the storage type 
> is obtained in this method, so dataLocations is no longer used here at all. 
> We should probably remove it for a cleaner interface.






[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128091#comment-16128091
 ] 

Andrew Wang commented on HDFS-11882:


I can fix the checkstyle issues in the next patch; anyone up for a review?

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The failing 
> test was removed by HDFS-11823; however, this type of error can still 
> happen in a real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Updated] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12306:
--
Attachment: HDFS-12306-branch-2.001.patch

> [branch-2]Separate class InnerNode from class NetworkTopology and make it 
> extendable
> 
>
> Key: HDFS-12306
> URL: https://issues.apache.org/jira/browse/HDFS-12306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12306-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11430 to branch-2.






[jira] [Updated] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12306:
--
Status: Patch Available  (was: Open)

> [branch-2]Separate class InnerNode from class NetworkTopology and make it 
> extendable
> 
>
> Key: HDFS-12306
> URL: https://issues.apache.org/jira/browse/HDFS-12306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12306-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11430 to branch-2.






[jira] [Created] (HDFS-12306) [branch-2]Separate class InnerNode from class NetworkTopology and make it extendable

2017-08-15 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12306:
-

 Summary: [branch-2]Separate class InnerNode from class 
NetworkTopology and make it extendable
 Key: HDFS-12306
 URL: https://issues.apache.org/jira/browse/HDFS-12306
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA is to backport HDFS-11430 to branch-2.






[jira] [Commented] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128043#comment-16128043
 ] 

Hudson commented on HDFS-12301:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12192 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12192/])
HDFS-12301. NN File Browser UI: Navigate to a path when enter is pressed 
(raviprak: rev f34646d652310442cb5339aa269f10dfa838)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Fix For: 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Updated] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-12301:

   Resolution: Fixed
Fix Version/s: 2.8.2
   3.0.0-beta1
   Status: Resolved  (was: Patch Available)

> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Fix For: 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Closed] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash closed HDFS-12301.
---

> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Fix For: 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Commented] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128005#comment-16128005
 ] 

Ravi Prakash commented on HDFS-12301:
-

Thank you for the reviews Haohui and Allen!
I've committed this to trunk, branch-2, branch-2.8 and branch-2.8.2. 

> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Fix For: 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Commented] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127995#comment-16127995
 ] 

Hadoop QA commented on HDFS-12182:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.hdfs.server.namenode.TestMetaSave |
| JDK v1.8.0_144 Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
|   | hadoop.hdfs.server.namenode.TestMetaSave |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12182 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881999/HDFS-12182-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e41080dda58b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HDFS-12290) Block Storage: Change dfs.cblock.jscsi.server.address default bind address to 0.0.0.0

2017-08-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127991#comment-16127991
 ] 

Chen Liang commented on HDFS-12290:
---

Thanks [~msingh] for the catch! The failed tests are unrelated. Committed to 
the feature branch.

> Block Storage: Change dfs.cblock.jscsi.server.address default bind address to 
> 0.0.0.0
> -
>
> Key: HDFS-12290
> URL: https://issues.apache.org/jira/browse/HDFS-12290
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12290-HDFS-7240.001.patch
>
>
> This improvement was realized by deploying cblock cluster.
> dfs.cblock.jscsi.server.address currently default to 127.0.0.1 which only 
> allows localhost to mount cblock drives, for external connections this config 
> needs to be changed to 0.0.0.0 to allow external connections.
> This jira suggests that the default bind address should be changed to 0.0.0.0
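As a quick illustration of the distinction (a sketch, not cblock code; the class name and printed labels are invented for this example): in Java, 127.0.0.1 resolves to the loopback address, reachable only from the same host, while 0.0.0.0 is the wildcard address, so a socket bound to it accepts connections on every local interface.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BindAddressDemo {
    public static void main(String[] args) throws UnknownHostException {
        // 127.0.0.1 is the loopback address: only processes on the same
        // host can connect to a socket bound to it.
        InetAddress loopback = InetAddress.getByName("127.0.0.1");
        System.out.println("loopback=" + loopback.isLoopbackAddress());

        // 0.0.0.0 is the wildcard ("any local") address: a socket bound
        // to it accepts connections on every network interface of the
        // host, which is what external iSCSI initiators need here.
        InetAddress wildcard = InetAddress.getByName("0.0.0.0");
        System.out.println("wildcard=" + wildcard.isAnyLocalAddress());
    }
}
```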






[jira] [Updated] (HDFS-12290) Block Storage: Change dfs.cblock.jscsi.server.address default bind address to 0.0.0.0

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12290:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Block Storage: Change dfs.cblock.jscsi.server.address default bind address to 
> 0.0.0.0
> -
>
> Key: HDFS-12290
> URL: https://issues.apache.org/jira/browse/HDFS-12290
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12290-HDFS-7240.001.patch
>
>
> This improvement was realized by deploying cblock cluster.
> dfs.cblock.jscsi.server.address currently default to 127.0.0.1 which only 
> allows localhost to mount cblock drives, for external connections this config 
> needs to be changed to 0.0.0.0 to allow external connections.
> This jira suggests that the default bind address should be changed to 0.0.0.0






[jira] [Commented] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127917#comment-16127917
 ] 

Hadoop QA commented on HDFS-12305:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
5s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 39s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | 
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities |
|   | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12305 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881995/HDFS-12305-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9718f760eb66 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / bfc49a4 |
| Default Java | 1.8.0_144 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20709/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20709/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20709/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20709/console |

[jira] [Updated] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10787:
-
Attachment: HDFS-10787.HDFS-8707.002.patch

Fixed the loading and validation of configs and the retrieval of options.

> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James Clampffer
> Attachments: HDFS-10787.HDFS-8707.000.patch, 
> HDFS-10787.HDFS-8707.001.patch, HDFS-10787.HDFS-8707.002.patch
>
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want 
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a builder 
> that includes the above libraries internally.
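The builder idea can be sketched as follows (a hypothetical illustration in Java with invented names — libhdfs++ itself is C++): the public entry point loads and validates configuration internally, so callers never include the internal configuration classes themselves.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the builder hides config loading behind the
// public API, so callers only depend on the public entry point.
final class FileSystemOptions {
    final Map<String, String> conf;
    private FileSystemOptions(Map<String, String> conf) { this.conf = conf; }

    static class Builder {
        private final Map<String, String> conf = new HashMap<>();

        // Stands in for reading core-site.xml/hdfs-site.xml internally.
        Builder loadDefaults() {
            conf.put("fs.defaultFS", "hdfs://localhost:8020");
            return this;
        }

        Builder set(String key, String value) {
            conf.put(key, value);
            return this;
        }

        // Validation happens here, not in caller code.
        FileSystemOptions build() {
            if (!conf.containsKey("fs.defaultFS")) {
                throw new IllegalStateException("fs.defaultFS not configured");
            }
            return new FileSystemOptions(conf);
        }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        FileSystemOptions opts = new FileSystemOptions.Builder()
            .loadDefaults()
            .set("dfs.client.read.shortcircuit", "true")
            .build();
        System.out.println(opts.conf.get("fs.defaultFS"));
    }
}
```

The caller never sees the loading/validation machinery, which is exactly what hiding hdfs_configuration and configuration_loader behind hdfspp.h would buy.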






[jira] [Commented] (HDFS-12188) TestDecommissioningStatus#testDecommissionStatus fails intermittently

2017-08-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127858#comment-16127858
 ] 

Ajay Kumar commented on HDFS-12188:
---

Hi [~brahmareddy], I tested this in a loop, but it passed each time.

> TestDecommissioningStatus#testDecommissionStatus fails intermittently
> -
>
> Key: HDFS-12188
> URL: https://issues.apache.org/jira/browse/HDFS-12188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
> Attachments: TestFailure_Log.txt
>
>
> {noformat}
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:144)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:240)
> {noformat}
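Flaky count assertions like this are often deflaked by polling until the cluster reaches the expected state rather than asserting once (Hadoop tests typically use GenericTestUtils.waitFor for this); a minimal stand-alone sketch of the pattern, with invented names:

```java
import java.util.function.BooleanSupplier;

public class WaitForDemo {
    // Minimal stand-in for GenericTestUtils.waitFor: poll until the
    // condition holds or the deadline passes, instead of asserting once
    // against a count that may still be converging.
    static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                    "condition not met within " + timeoutMs + "ms");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated state (e.g. an under-replicated block count) that
        // only reaches the expected value after some time.
        long start = System.currentTimeMillis();
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 2000);
        System.out.println("converged");
    }
}
```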






[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127839#comment-16127839
 ] 

Hadoop QA commented on HDFS-12238:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 24s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl |
|   | org.apache.hadoop.hdfs.server.namenode.TestListOpenFiles |
|   | org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881987/HDFS-12238-HDFS-7240.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c715b4e79b23 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / bfc49a4 |

[jira] [Commented] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127826#comment-16127826
 ] 

Hadoop QA commented on HDFS-12301:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881837/HDFS-12301.01.patch |
| Optional Tests |  asflicense  |
| uname | Linux 696a4c07a898 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20711/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-15 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127820#comment-16127820
 ] 

Manoj Govindassamy commented on HDFS-11912:
---

[~ghuangups],
* The added test has failed again with a timeout. Can you please take a look? 
Assuming there are no deadlocks here, try adding an explicit timeout to the 
test or bringing down the overall running time. 
* There are quite a few 
[checkstyle|https://builds.apache.org/job/PreCommit-HDFS-Build/20706/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt]
 issues in the newly added test. Can you please fix them all?
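On the timeout suggestion: JUnit 4 supports `@Test(timeout = ...)` directly; the same fail-fast behavior can be sketched with plain concurrency utilities (illustrative only — this is not the test in question, and the sleep merely stands in for the test body):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> result = exec.submit(() -> {
            Thread.sleep(50);  // stands in for the test body
            return "done";
        });
        try {
            // Fail fast with a TimeoutException instead of letting the
            // whole surefire fork hang until the global timeout.
            System.out.println(result.get(2, TimeUnit.SECONDS));
        } finally {
            exec.shutdownNow();
        }
    }
}
```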

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Updated] (HDFS-12301) NN File Browser UI: Navigate to a path when enter is pressed

2017-08-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-12301:

Status: Patch Available  (was: Open)

> NN File Browser UI: Navigate to a path when enter is pressed
> 
>
> Key: HDFS-12301
> URL: https://issues.apache.org/jira/browse/HDFS-12301
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 2.7.4
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Trivial
> Attachments: HDFS-12301.01.patch
>
>
> Instead of forcing users to click on the "Go" button, we should accept the 
> Enter keypress as a directive to navigate to the directory too.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127789#comment-16127789
 ] 

Hadoop QA commented on HDFS-11912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 66 new + 0 unchanged - 0 fixed = 66 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestMissingBlocksAlert |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875998/HDFS-11912.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 688a6645560f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20706/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127779#comment-16127779
 ] 

Hadoop QA commented on HDFS-12117:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 38 new + 501 unchanged - 2 fixed = 539 total (was 503) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
47s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881991/HDFS-12117-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 00f94e0439fc 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b30522c |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| findbugs | 

[jira] [Updated] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-08-15 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12182:

Attachment: HDFS-12182-branch-2.001.patch

Applied changes on top of branch-2 and generated patch 
*HDFS-12182-branch-2.001.patch*.

> BlockManager.metaSave does not distinguish between "under replicated" and 
> "missing" blocks
> --
>
> Key: HDFS-12182
> URL: https://issues.apache.org/jira/browse/HDFS-12182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch, 
> HDFS-12182.003.patch, HDFS-12182.004.patch, HDFS-12182-branch-2.001.patch
>
>
> Currently, *BlockManager.metaSave* method (which is called by "-metasave" dfs 
> CLI command) reports both "under replicated" and "missing" blocks under same 
> metric *Metasave: Blocks waiting for reconstruction:* as shown on below code 
> snippet:
> {noformat}
>synchronized (neededReconstruction) {
>   out.println("Metasave: Blocks waiting for reconstruction: "
>   + neededReconstruction.size());
>   for (Block block : neededReconstruction) {
> dumpBlockMeta(block, out);
>   }
> }
> {noformat}
> *neededReconstruction* is an instance of *LowRedundancyBlocks*, which 
> actually wraps 5 priority queues currently. 4 of these queues store different 
> under replicated scenarios, but the 5th one is dedicated for corrupt/missing 
> blocks. 
> Thus, metasave report may suggest some corrupt blocks are just under 
> replicated. This can be misleading for admins and operators trying to track 
> block missing/corruption issues, and/or other issues related to 
> *BlockManager* metrics.
> I would like to propose a patch with trivial changes that would report 
> corrupt blocks separately.
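To make the proposed split concrete, here is a standalone sketch. It is illustrative only: the class, method, and heading names are stand-ins, not the actual BlockManager/LowRedundancyBlocks code or the wording the patch uses. It models the five priority queues and dumps the corrupt/missing queue under its own heading instead of folding it into the under-replicated count.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of LowRedundancyBlocks: queues 0..3 hold under-replicated
// blocks at different priorities; queue 4 holds corrupt/missing blocks.
public class MetaSaveSketch {
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

    final List<List<Long>> priorityQueues = new ArrayList<>();

    MetaSaveSketch() {
        for (int i = 0; i <= QUEUE_WITH_CORRUPT_BLOCKS; i++) {
            priorityQueues.add(new ArrayList<>());
        }
    }

    void add(long blockId, int priority) {
        priorityQueues.get(priority).add(blockId);
    }

    // Report under-replicated and corrupt/missing blocks separately,
    // rather than as one "waiting for reconstruction" total.
    String metaSave() {
        int underReplicated = 0;
        for (int i = 0; i < QUEUE_WITH_CORRUPT_BLOCKS; i++) {
            underReplicated += priorityQueues.get(i).size();
        }
        return "Metasave: Blocks waiting for reconstruction: "
             + underReplicated + "\n"
             + "Metasave: Blocks currently missing: "
             + priorityQueues.get(QUEUE_WITH_CORRUPT_BLOCKS).size() + "\n";
    }

    public static void main(String[] args) {
        MetaSaveSketch sketch = new MetaSaveSketch();
        sketch.add(1L, 0);                          // under-replicated
        sketch.add(2L, 2);                          // under-replicated
        sketch.add(3L, QUEUE_WITH_CORRUPT_BLOCKS);  // corrupt/missing
        System.out.print(sketch.metaSave());
    }
}
```

With this split, an operator reading the metasave output can immediately tell two under-replicated blocks from one genuinely missing block, which is the point of the proposed change.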






[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127755#comment-16127755
 ] 

Hadoop QA commented on HDFS-11738:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
56 unchanged - 4 fixed = 57 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881979/HDFS-11738-04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 721132749b0a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20704/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20704/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20704/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 

[jira] [Updated] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12305:
--
Status: Patch Available  (was: Open)

> Ozone: SCM: Add StateMachine for pipeline/container
> ---
>
> Key: HDFS-12305
> URL: https://issues.apache.org/jira/browse/HDFS-12305
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12305-HDFS-7240.001.patch
>
>
> Add a template class that can be shared by pipeline and container state 
> machine.






[jira] [Updated] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12305:
--
Attachment: HDFS-12305-HDFS-7240.001.patch

> Ozone: SCM: Add StateMachine for pipeline/container
> ---
>
> Key: HDFS-12305
> URL: https://issues.apache.org/jira/browse/HDFS-12305
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12305-HDFS-7240.001.patch
>
>
> Add a template class that can be shared by pipeline and container state 
> machine.






[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-15 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Attachment: HDFS-12117-branch-2.001.patch

Never mind, I just applied my commit onto the branch-2 head and resolved the 
conflicts.

Uploading *HDFS-12117-branch-2.001.patch* with these changes on top of branch-2.

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117-branch-2.001.patch, 
> HDFS-12117.patch.01, HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementation for SNAPSHOT related methods from 
> WebHDFS REST interface as defined by WebHDFS documentation [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations]
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods on current HttpFS 
> project, so I'll be proposing an initial patch soon for reviews.
>  






[jira] [Comment Edited] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127677#comment-16127677
 ] 

Ajay Kumar edited comment on HDFS-12238 at 8/15/17 6:29 PM:


[~anu], thanks for review. Uploaded new patch with suggested changes. 
{{Ozonebucket.java}} and {{TestKeys.java}} are not part of it.




was (Author: ajayydv):
[~anu], thanks for review. Uploaded new patch with suggested changes. 
{code:java}Ozonebucket.java{code} and {code:java}TestKeys.java {code} are not 
part of it.



> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
>    if(StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that Ozone clients always send a valid trace ID. However, when we 
> do that, a set of current tests that do not add a valid trace ID will fail, so 
> we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.
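For illustration, here is a standalone sketch of the proposed guard. It is simplified: the real method is {{XceiverClientHandler#sendCommandAsync}}, which takes a protobuf request object rather than a bare string, so the signature below is a stand-in.

```java
// Sketch of the trace-ID guard proposed above (simplified stand-in for
// XceiverClientHandler#sendCommandAsync; the real request is a protobuf).
public class TraceIdCheckSketch {
    static void sendCommandAsync(String traceId) {
        // Reject any request that arrives without a trace ID, so that
        // clients (and tests) are forced to set one.
        if (traceId == null || traceId.isEmpty()) {
            throw new IllegalArgumentException("Invalid trace ID");
        }
        // ... dispatch the command asynchronously ...
    }

    public static void main(String[] args) {
        boolean rejected = false;
        try {
            sendCommandAsync(""); // a caller that forgets its trace ID fails fast
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        sendCommandAsync("trace-42"); // a valid trace ID passes the guard
        System.out.println(rejected ? "empty trace ID rejected" : "guard missing");
    }
}
```

The listed tests fail under this guard precisely because they currently send empty trace IDs, which is why the JIRA pairs the check with fixing those tests.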






[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127677#comment-16127677
 ] 

Ajay Kumar commented on HDFS-12238:
---

[~anu], thanks for review. Uploaded new patch with suggested changes. 
{code:java}Ozonebucket.java{code} and {code:java}TestKeys.java {code} are not 
part of it.



> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
>    if(StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that Ozone clients always send a valid trace ID. However, when we 
> do that, a set of current tests that do not add a valid trace ID will fail, so 
> we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.






[jira] [Created] (HDFS-12305) Ozone: SCM: Add StateMachine for pipeline/container

2017-08-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12305:
-

 Summary: Ozone: SCM: Add StateMachine for pipeline/container
 Key: HDFS-12305
 URL: https://issues.apache.org/jira/browse/HDFS-12305
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Add a template class that can be shared by pipeline and container state machine.






[jira] [Updated] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12238:
--
Attachment: HDFS-12238-HDFS-7240.02.patch

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch, 
> HDFS-12238-HDFS-7240.02.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
>    if(StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that Ozone clients always send a valid trace ID. However, when we 
> do that, a set of current tests that do not add a valid trace ID will fail, so 
> we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.






[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-15 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127659#comment-16127659
 ] 

Surendra Singh Lilhore commented on HDFS-12214:
---

Thanks [~rakeshr] for the patch. 
Patch LGTM, one minor comment. 

{code}
+String nsId = DFSUtil.getNamenodeNameServiceId(conf);
+boolean isHaEnabled = HAUtil.isHAEnabled(conf, nsId);
+if (!isHaEnabled) {
+  startSPS();
 }
{code}
In {{BlockManager#activate()}} we do not need to start the SPS thread; in both 
the HA and non-HA cases, {{FSNamesystem#startActiveServices()}} will start the 
SPS services. 

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"






[jira] [Created] (HDFS-12304) Remove unused parameter from FsDatasetImpl#addVolume

2017-08-15 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12304:
-

 Summary: Remove unused parameter from FsDatasetImpl#addVolume
 Key: HDFS-12304
 URL: https://issues.apache.org/jira/browse/HDFS-12304
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Liang
Assignee: Chen Liang
Priority: Minor


FsDatasetImpl has this method
{code}
  private void addVolume(Collection<StorageLocation> dataLocations,
      Storage.StorageDirectory sd) throws IOException
{code}
Parameter {{dataLocations}} was introduced in HDFS-6740, this variable was used 
to get storage type info. But HDFS-10637 has changed the way of getting storage 
type in this method, making dataLocations no longer being used at all here. We 
should probably remove dataLocations for a cleaner interface.
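To illustrate the proposed cleanup, here is a hedged sketch using stand-in types rather than the real FsDatasetImpl classes (the names below are hypothetical, for illustration only): once the storage type is read from the storage directory itself, the collection parameter carries no information and can be dropped.

```java
import java.util.Collection;
import java.util.Collections;

/**
 * Hedged sketch of the HDFS-12304 idea with stand-in types: the
 * dataLocations-style parameter is accepted but never read, so removing
 * it changes nothing except the interface. Not the actual patch.
 */
public class AddVolumeSketch {
  /** Stand-in for Storage.StorageDirectory; hypothetical. */
  static class StorageDirectory {
    final String storageType;
    StorageDirectory(String storageType) { this.storageType = storageType; }
  }

  // Old shape: dataLocations is accepted but no longer consulted.
  static String addVolumeOld(Collection<String> dataLocations,
                             StorageDirectory sd) {
    return sd.storageType;            // dataLocations is never read
  }

  // Proposed shape: same behaviour, cleaner interface.
  static String addVolume(StorageDirectory sd) {
    return sd.storageType;
  }

  public static void main(String[] args) {
    StorageDirectory sd = new StorageDirectory("DISK");
    // Both shapes return the same result, showing the parameter is dead.
    System.out.println(addVolumeOld(Collections.emptyList(), sd));
    System.out.println(addVolume(sd));
  }
}
```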






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-15 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127658#comment-16127658
 ] 

Manoj Govindassamy commented on HDFS-11912:
---

[~ghuangups],
Thanks for working on the patch revision. Looks good overall. +1 pending the 
following items.
* In the pre-commit test for the v03 patch, the test added in the patch, 
TestRandomOpsWithSnapshots, timed out. Any reason why the test failed? I tried 
running the test manually and it ran for 5 minutes and passed. Maybe it takes 
longer to run on the slower machines. 
* I can see some checkstyle issues in the patch. Can you please take care of 
them? The previous checkstyle results from the precommit run expired, so I 
triggered a new precommit build: 
https://builds.apache.org/job/PreCommit-HDFS-Build/20706/. We will have the 
results soon.


> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Updated] (HDFS-12255) Block Storage: Cblock should generated unique trace ID for the ops

2017-08-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12255:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Failed tests are unrelated. Committed to the feature branch, thanks [~msingh] 
for the contribution!

> Block Storage: Cblock should generated unique trace ID for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch, 
> HDFS-12255-HDFS-7240.002.patch, HDFS-12255-HDFS-7240.003.patch
>
>
> CBlock tests fail because CBlock does not generate a unique trace ID for each 
> op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is caused by the following error.
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 times to the 
> container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
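The root cause above is a trace ID being reused across retries of the same op. A common way to guarantee uniqueness is to combine a stable prefix with a monotonically increasing counter. The sketch below is illustrative only; the names (`TraceIdGenerator`, `next`) are hypothetical and this is not the actual HDFS-12255 fix:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Illustrative sketch: derive a unique trace ID per operation so that
 * retries of the same block write never collide in the command pipeline.
 * Names here are hypothetical, not taken from the HDFS-12255 patch.
 */
public class TraceIdGenerator {
  private final String prefix;                 // e.g. a client or volume name
  private final AtomicLong counter = new AtomicLong();

  public TraceIdGenerator(String prefix) {
    this.prefix = prefix;
  }

  /** Returns a trace ID that is unique within this generator's lifetime. */
  public String next() {
    return prefix + "-" + counter.incrementAndGet();
  }

  public static void main(String[] args) {
    TraceIdGenerator gen = new TraceIdGenerator("block44");
    String a = gen.next();
    String b = gen.next();
    if (a.equals(b)) {
      throw new AssertionError("trace IDs must be unique");
    }
    System.out.println(a + " " + b); // block44-1 block44-2
  }
}
```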






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127645#comment-16127645
 ] 

Andrew Wang commented on HDFS-10285:


Hi Uma,

bq. Now quick question on your example above “a user calling the shell command 
and polling until it's done” , you mean command should blocked by polling 
internally? or user will call status check periodically? How much time server 
should hold the status?

This would be a user periodically asking for status. From what I know of async 
API design, callbacks are preferred over polling since they avoid the question 
of how long the server needs to hold the status.

I'd be open to any proposal here; I just think the current "isSpsRunning" API 
is insufficient. Did you end up filing a ticket to track this?

bq. 

If I were to paraphrase, the NN is the ultimate arbiter, and the operations 
being performed by C-DNs are idempotent, so duplicate work gets dropped safely. 
I think this still makes it harder to reason about from a debugging POV, 
particularly if we want to extend this to something like EC conversion that 
might not be idempotent.

bq. Mainly our motivation to offload work was mainly to avoid tracking at block 
level results. Now NN just tracks file level results. C-DN track all block 
level movements and send the result back. We also thinking to use this kind of 
model for converting regular files to EC and also HDFS-12090 is one of the use 
case they wanted to use it. I think keeping all such monitoring logic into NN 
will be definitely overhead on NN. My feeling is, we should think to offload 
work as much as possible from NN.

I like the idea of offloading work in the abstract, but I don't know how much 
work we really offload in this situation. The NN still needs to track 
everything at the file level, which is the same order of magnitude as the block 
level. The NN is still doing block management and processing IBRs for the block 
movements. Distributing tracking work to the C-DNs adds latency and makes the 
system more complicated.
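To make the polling-versus-callback trade-off above concrete, here is a hedged sketch of the two client-side patterns being compared. All names (`SatisfierStatus`, `pollUntilDone`, `runWithCallback`) are hypothetical and not part of any actual SPS or HDFS API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

/**
 * Hedged sketch contrasting the two async-status patterns discussed above.
 * All names here are hypothetical, for illustration only.
 */
public class StatusApiSketch {
  enum SatisfierStatus { IN_PROGRESS, DONE }

  /** Polling: the caller asks repeatedly, so the server must retain the
   *  status for an unbounded time for late pollers. */
  static SatisfierStatus pollUntilDone(Callable<SatisfierStatus> check,
                                       long intervalMillis) throws Exception {
    SatisfierStatus s;
    while ((s = check.call()) != SatisfierStatus.DONE) {
      Thread.sleep(intervalMillis);
    }
    return s;
  }

  /** Callback: the server pushes the terminal status exactly once, so
   *  nothing has to be held around after completion. */
  static CompletableFuture<Void> runWithCallback(Runnable work,
      Consumer<SatisfierStatus> callback) {
    return CompletableFuture.runAsync(work)
        .thenRun(() -> callback.accept(SatisfierStatus.DONE));
  }

  public static void main(String[] args) throws Exception {
    final int[] calls = {0};
    SatisfierStatus s = pollUntilDone(
        () -> ++calls[0] < 3 ? SatisfierStatus.IN_PROGRESS
                             : SatisfierStatus.DONE, 1);
    System.out.println(s + " after " + calls[0] + " polls");

    runWithCallback(() -> {}, st -> System.out.println("callback: " + st))
        .join();
  }
}
```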

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory/file to specify the user's 
> preference for where to store the physical blocks. When the user sets the 
> storage policy before writing data, the blocks can take advantage of the 
> policy preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> then the blocks would have been written with the default storage policy 
> (nothing but DISK). The user has to run the 'Mover tool' explicitly, 
> specifying all such file names as a list. In some distributed system 
> scenarios (ex: HBase) it would be difficult to collect all the files and run 
> the tool, as different nodes can write files separately and files can have 
> different paths.
> Another scenario is when the user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) into a 
> directory with another storage policy: the rename does not copy the inherited 
> storage policy from the source, so the file takes its effective policy from 
> the destination file/dir's parent. This rename operation is just a metadata 
> change in the Namenode; the physical blocks still remain with the source 
> storage policy.
> So, tracking all such business-logic-based file names from distributed nodes 
> (ex: region servers) and running the Mover tool could be difficult for 
> admins. The proposal here is to provide an API in the Namenode itself to 
> trigger storage policy satisfaction. A daemon thread inside the Namenode 
> should track such calls and send movement commands to the DNs. 
> Will post the detailed design thoughts document soon. 






[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127631#comment-16127631
 ] 

Vinayakumar B commented on HDFS-11738:
--

Thanks [~jojochuang] for reviewing the changes.
bq.  IIUC, the client would stuck in chooseDataNode() in such a scenario? 
Yes, the reader thread keeps retrying until max retries and then gets 
{{BlockMissingException}}. But since this is a hedged read, the read would 
already have completed against the actual host. So the read completes 
successfully, but the call returns to the user only after all retries are 
exhausted. In the non-hedged case, the read would fail. That was fixed in 
HDFS-11708.

bq. The method chooseDataNode should add a @Nullable to indicate a null return 
value is valid.
I tried to add @Nullable, but my IDE started showing some javadoc errors, so I 
added a full javadoc instead, mentioning the possible null return value. Hope 
that satisfies you.
 
bq. can be simplified as {{chosenNode = chooseDataNode(block, ignored, false);}}
That's a good catch. Changed.

{quote}The timeout of 30 seconds seems a little short. On my laptop this test 
takes approximately 20 seconds, so on a busy host the unit test might 
potentially run slightly over time. Or would it be reasonable to reduce some 
wait time?
E.g. reduce dfs.client.retry.window.base from 3000 to 1000?{quote}
Yeah, increased the timeout to 6 and reduced the window time to 1000 as 
well. Thank you for the hint.

Please check the updated patch.

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch, 
> HDFS-11738-03.patch, HDFS-11738-04.patch
>
>
> Scenario : 
> Same as HDFS-11708.
> During a hedged read: 
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseDataNode refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread goes on refetching 
> locations in a loop and is stuck there until all 3 retries to fetch locations 
> are exhausted, which consumes ~20 seconds with exponential retry backoff.
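For context, the hedged-read pattern the scenario describes can be sketched as: submit reads against several replicas and take whichever future completes first. The bug is that the coordinating thread keeps refetching locations instead of returning once any future succeeds. The sketch below is illustrative only, not DFSClient code; the real DFSInputStream also staggers the hedged requests rather than firing them all at once:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Simplified, hypothetical sketch of hedged reads: several replica reads
 * race and the first completed result is returned to the caller.
 */
public class HedgedReadSketch {
  static <T> T firstOf(ExecutorService pool, List<Callable<T>> replicas)
      throws InterruptedException, ExecutionException {
    CompletionService<T> cs = new ExecutorCompletionService<>(pool);
    List<Future<T>> futures = new ArrayList<>();
    for (Callable<T> replica : replicas) {
      futures.add(cs.submit(replica));
    }
    try {
      return cs.take().get();      // first completed future wins
    } finally {
      for (Future<T> f : futures) {
        f.cancel(true);            // stop the slower replicas
      }
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      String winner = firstOf(pool, Arrays.<Callable<String>>asList(
          () -> { Thread.sleep(500); return "slow replica"; },
          () -> "fast replica"));
      System.out.println(winner);  // fast replica
    } finally {
      pool.shutdownNow();
    }
  }
}
```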






[jira] [Updated] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11738:
-
Attachment: HDFS-11738-04.patch

Updated the patch addressing the above comments.

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch, 
> HDFS-11738-03.patch, HDFS-11738-04.patch
>
>
> Scenario : 
> Same as HDFS-11708.
> During a hedged read: 
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseDataNode refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread goes on refetching 
> locations in a loop and is stuck there until all 3 retries to fetch locations 
> are exhausted, which consumes ~20 seconds with exponential retry backoff.






[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127607#comment-16127607
 ] 

Hadoop QA commented on HDFS-10787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
26s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
46s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
33s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
48s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-10787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881944/HDFS-10787.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8473eedeb5b2 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 5cee747 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20702/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20702/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James 

[jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127590#comment-16127590
 ] 

Hadoop QA commented on HDFS-12225:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
31s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10285 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 9 
unchanged - 0 fixed = 10 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestBlockStorageMovementAttemptedItems |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12225 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881951/HDFS-12225-HDFS-10285-02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b1d4adcd1eb 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / d8122f2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20703/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20703/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20703/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20703/testReport/ |
| 

[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127582#comment-16127582
 ] 

Xiaoyu Yao commented on HDFS-12159:
---

Thanks [~anu] for the update. Patch looks good to me. There are a few 
checkstyle and javadoc warnings from Jenkins. Also, can you confirm whether 
TestContainerPlacement#testContainerPlacementCapacity is related, or a known 
issue tracked by a separate ticket. 

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, 
> HDFS-12159-HDFS-7240.004.patch, HDFS-12159-HDFS-7240.005.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.






[jira] [Comment Edited] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-15 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124983#comment-16124983
 ] 

Wellington Chevreuil edited comment on HDFS-12117 at 8/15/17 4:50 PM:
--

HDFS-12052 seems to depend on HDFS-10860, which in turn depends on 
HADOOP-13597, which depends on HADOOP-9902. None of these are in branch-2 yet.

||id||date||author||message||
|69b23632c48297ac844c58fd3e21aad10c093cc8|Mon Feb 6 13:33:32 2017|Xiao 
Chen|HDFS-10860. Switch HttpFS from Tomcat to Jetty. Contributed by John Zhuge.|
|5d182949badb2eb80393de7ba3838102d006488b|Thu Jan 5 17:21:57 2017|Xiao 
Chen|HADOOP-13597. Switch KMS from Tomcat to Jetty. Contributed by John Zhuge.|
|31467453ec453f5df0ae29b5296c96d2514f2751|Tue Aug 19 12:11:17 2014|Allen 
Wittenauer|HADOOP-9902. Shell script rewrite (aw)|


was (Author: wchevreuil):
HDFS-12052 seems to depend on HDFS-10860, which in turn depends on 
HADOOP-13597. None of these are in branch-2 yet.

||id||date||author||message||
|69b23632c48297ac844c58fd3e21aad10c093cc8|Mon Feb 6 13:33:32 2017|Xiao 
Chen|HDFS-10860. Switch HttpFS from Tomcat to Jetty. Contributed by John Zhuge.|
|5d182949badb2eb80393de7ba3838102d006488b|Thu Jan 5 17:21:57 2017|Xiao 
Chen|HADOOP-13597. Switch KMS from Tomcat to Jetty. Contributed by John Zhuge.|

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementation for SNAPSHOT related methods from 
> WebHDFS REST interface as defined by WebHDFS documentation [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations]
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods on current HttpFS 
> project, so I'll be proposing an initial patch soon for reviews.
>  






[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127465#comment-16127465
 ] 

ASF GitHub Bot commented on HDFS-10787:
---

Github user jamesclampffer commented on the issue:

https://github.com/apache/orc/pull/134
  
Those errors should go away after HDFS-10787 is finished (or applied to the 
PR temporarily) since it exposes an API to get at the configuration without 
using std::tr2::optional.  We had to suppress a bunch of errors coming from 
optional when it was added to libhdfs++; eventually I'd like to get rid of the 
remaining uses of it in the implementation as well.


> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James Clampffer
> Attachments: HDFS-10787.HDFS-8707.000.patch, 
> HDFS-10787.HDFS-8707.001.patch
>
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want  
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a builder, 
> and would include the above libraries.






[jira] [Updated] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements

2017-08-15 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12225:
--
Attachment: HDFS-12225-HDFS-10285-02.patch

Attached v2 patch.
Fixed the checkstyle issues and the test failure. The findbugs warnings are 
unrelated to this patch. 

> [SPS]: Optimize extended attributes for tracking SPS movements
> --
>
> Key: HDFS-12225
> URL: https://issues.apache.org/jira/browse/HDFS-12225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12225-HDFS-10285-01.patch, 
> HDFS-12225-HDFS-10285-02.patch
>
>
> We discussed optimizing the number of extended attributes and agreed to file 
> a separate JIRA while implementing [HDFS-11150 | 
> https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work. 
> For context, comments copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote : I've tried that before. There is an issue here if we only 
> mark the directory. When recovering from FsImage, the InodeMap isn't built 
> up, so we don't know the sub-inode of a given inode, in the end, We cannot 
> add these inodes to movement queue in FSDirectory#addToInodeMap, any 
> thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. Ok for simplicity we can 
> add for all Inodes now. For this to handle 100%, we may need intermittent 
> processing, like first we should add them to some intermittentList while 
> loading fsImage, once fully loaded and when starting active services, we 
> should process that list and do required stuff. But it would add some 
> additional complexity may be. Let's do with all file inodes now and we can 
> revisit later if it is really creating issues. How about you raise a JIRA for 
> it and think to optimize separately?
> {quote}
> {quote}
> [~andrew.wang] wrote in an HDFS-10285 merge-time review comment: HDFS-10899 
> also uses the cursor of the iterator in the EZ root xattr to track progress 
> and handle restarts. I wonder if we can do something similar here to avoid 
> having an xattr-per-file being moved.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127395#comment-16127395
 ] 

James Clampffer commented on HDFS-10787:


Thanks for the initial patch [~mtracy] and the options loading extension 
[~anatoli.shein].

Here's some feedback:

-This is a little problematic because it doesn't have a way of signalling an 
error. For example, given the way it's used in the tools, what should the 
resulting Options object look like if the default path contains invalid XML? 
Can you make this look like the other methods that take a reference to assign 
to and return a success flag? That way we have a way of indicating failure 
but don't have to use std::tr2::optional in the public API.
{code}
class ConfigParser {
  ...
  Options get_options() const;
{code}

-Can you add a load_defaults method to ConfigParser rather than implicitly 
parsing files in the default constructor? There are places where we assume 
the Options object will get initialized to its default values, and doing this 
implicitly introduces dependencies on the environment in a way that's not 
obvious. Generally you want the default constructor to be inexpensive because 
it's going to get called when std containers are reserving memory; 
"std::vector<ConfigParser> foo; foo.resize(10);" will go parse the configs 10 
times right now.

-Nit: Comments about what headers are being used for often don't age well 
unless people spend a lot of time checking whether the stuff is still there. 
There are tools to prune out unused headers that we should most likely get 
into the build.
{code}
#include <numeric>  // std::accumulate
{code}

-Might be worth adding a sanity check with an upper bound for the memory being 
reserved for the xml document here.
{code}
  std::vector<char> raw_bytes((int64_t)length + 1);
{code}



> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James Clampffer
> Attachments: HDFS-10787.HDFS-8707.000.patch, 
> HDFS-10787.HDFS-8707.001.patch
>
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want 
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a 
> builder that would include the above libraries.






[jira] [Commented] (HDFS-12066) When Namenode is in safemode,may not allowed to remove an user's erasure coding policy

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127365#comment-16127365
 ] 

Hudson commented on HDFS-12066:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12189 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12189/])
HDFS-12066. When Namenode is in safemode,may not allowed to remove an (weichiu: 
rev e3ae3e26446c2e98b7aebc4ea66256cfdb4a397f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


> When Namenode is in safemode,may not allowed to remove an user's erasure 
> coding policy
> --
>
> Key: HDFS-12066
> URL: https://issues.apache.org/jira/browse/HDFS-12066
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12066.001.patch, HDFS-12066.002.patch, 
> HDFS-12066.003.patch, HDFS-12066.004.patch
>
>
> FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode






[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127364#comment-16127364
 ] 

Hudson commented on HDFS-12054:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12189 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12189/])
HDFS-12054. FSNamesystem#addErasureCodingPolicies should call (weichiu: rev 
1040bae6fcbae7079d8126368cdeac60831a4d0c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12054:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thx! Patch was committed to trunk.

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12066) When Namenode is in safemode,may not allowed to remove an user's erasure coding policy

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12066:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thx! Patch was committed to trunk.

> When Namenode is in safemode,may not allowed to remove an user's erasure 
> coding policy
> --
>
> Key: HDFS-12066
> URL: https://issues.apache.org/jira/browse/HDFS-12066
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12066.001.patch, HDFS-12066.002.patch, 
> HDFS-12066.003.patch, HDFS-12066.004.patch
>
>
> FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode






[jira] [Updated] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-15 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10787:
-
Attachment: HDFS-10787.HDFS-8707.001.patch

I added a getter for hdfs::Options and a no-parameter constructor for 
hdfs::ConfigParser that just uses the default paths. I also updated the tools 
and C++ examples to use this interface.

> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James Clampffer
> Attachments: HDFS-10787.HDFS-8707.000.patch, 
> HDFS-10787.HDFS-8707.001.patch
>
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want 
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a 
> builder that would include the above libraries.






[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127273#comment-16127273
 ] 

Wei-Chiu Chuang commented on HDFS-11738:


Hello [~vinayrpet], thanks for the patch!
I reviewed the patch and I think I grasp its gist. IIUC, the client would get 
stuck in {{chooseDataNode()}} in such a scenario? 

The method {{chooseDataNode}} should be annotated with {{@Nullable}} to 
indicate that a null return value is valid.

It seems the following code
{code:title=DFSInputStream#hedgedFetchBlockByteRange}
  chosenNode = getBestNodeDNAddrPair(block, ignored);
  if (chosenNode == null) {
chosenNode = chooseDataNode(block, ignored, false);
  }
{code}
can be simplified as
{code}
chosenNode = chooseDataNode(block, ignored, false);
{code}
I ran the patch with the simplified code and it passed as well.

The timeout of 30 seconds seems a little short. On my laptop this test takes 
approximately 20 seconds, so on a busy host the unit test might run slightly 
over time. Or would it be reasonable to reduce some of the wait time, e.g. 
reduce dfs.client.retry.window.base from 3000 to 1000?
{code}
conf.setInt(HdfsClientConfigKeys.Retry.WINDOW_BASE_KEY, 1000);
{code}

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch, 
> HDFS-11738-03.patch
>
>
> Scenario : 
> Same as HDFS-11708.
> During Hedge read, 
> 1. First two locations fails to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. after adding future to DN3, main thread goes for refetching locations in 
> loop and stucks there till all 3  retries to fetch locations exhausted, which 
> consumes ~20 seconds with exponential retry time.






[jira] [Commented] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127186#comment-16127186
 ] 

Hadoop QA commented on HDFS-12214:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
16s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10285 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 865 unchanged - 1 fixed = 865 total (was 866) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
27s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12214 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881911/HDFS-12214-HDFS-10285-07.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  xml  |
| uname | Linux c150f761514f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / d8122f2 |
| Default Java | 1.8.0_144 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20701/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127168#comment-16127168
 ] 

Hadoop QA commented on HDFS-12303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | 

[jira] [Updated] (HDFS-12214) [SPS]: Fix review comments of StoragePolicySatisfier feature

2017-08-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-12214:

Attachment: HDFS-12214-HDFS-10285-07.patch

Fixed a checkstyle warning and attached a new patch.

cc: [~umamaheswararao], [~andrew.wang]

> [SPS]: Fix review comments of StoragePolicySatisfier feature
> 
>
> Key: HDFS-12214
> URL: https://issues.apache.org/jira/browse/HDFS-12214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12214-HDFS-10285-00.patch, 
> HDFS-12214-HDFS-10285-01.patch, HDFS-12214-HDFS-10285-02.patch, 
> HDFS-12214-HDFS-10285-03.patch, HDFS-12214-HDFS-10285-04.patch, 
> HDFS-12214-HDFS-10285-05.patch, HDFS-12214-HDFS-10285-06.patch, 
> HDFS-12214-HDFS-10285-07.patch
>
>
> This sub-task is to address [~andrew.wang]'s review comments. Please refer 
> the [review 
> comment|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16103734=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16103734]
>  in HDFS-10285 umbrella jira.
> # Rename configuration property 'dfs.storage.policy.satisfier.activate' to 
> 'dfs.storage.policy.satisfier.enabled'
> # Disable SPS feature by default.
> # Rather than using the acronym (which a user might not know), maybe rename 
> "-isSpsRunning" to "-isSatisfierRunning"






[jira] [Commented] (HDFS-12066) When Namenode is in safemode,may not allowed to remove an user's erasure coding policy

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127081#comment-16127081
 ] 

Wei-Chiu Chuang commented on HDFS-12066:


+1

> When Namenode is in safemode,may not allowed to remove an user's erasure 
> coding policy
> --
>
> Key: HDFS-12066
> URL: https://issues.apache.org/jira/browse/HDFS-12066
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12066.001.patch, HDFS-12066.002.patch, 
> HDFS-12066.003.patch, HDFS-12066.004.patch
>
>
> FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode






[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127079#comment-16127079
 ] 

Wei-Chiu Chuang commented on HDFS-12054:


+1

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12303:

Attachment: HDFS-12303.00.patch

Initial patch.

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12303:

Status: Patch Available  (was: Open)

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch
>
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Commented] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127002#comment-16127002
 ] 

Rakesh R commented on HDFS-12258:
-

Thanks [~zhouwei] for the patch. Following are a few comments; please take care of them.
# Typo: {{Diabled}} -> {{Disabled}}
{code}
/**
* EC policy state.
*/
enum ErasureCodingPolicyState {
  Diabled = 1;
{code}
# Please use capital letters.
{{ErasureCodingPolicyState#Disabled, #Enabled, #Removed}} ==> 
{{ErasureCodingPolicyState#DISABLED, #ENABLED, #REMOVED}}
# Please add annotation to enum state
{code}
@InterfaceAudience.Public
@InterfaceStability.Evolving
public enum ErasureCodingPolicyState
{code}
# {{ErasureCodingPolicyManager#addPolicy(ErasureCodingPolicy policy)}} does 
not consider the state. What if a user sets {{ErasureCodingPolicy#state}} and 
invokes the {{dfs#addErasureCodingPolicies()}} API?

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled






[jira] [Updated] (HDFS-12298) Ozone: Block deletion service floods the log when deleting large number of block files

2017-08-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12298:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

I have committed this to the feature branch, thanks [~linyiqun] for the 
contribution!

> Ozone: Block deletion service floods the log when deleting large number of 
> block files
> --
>
> Key: HDFS-12298
> URL: https://issues.apache.org/jira/browse/HDFS-12298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12298-HDFS-7240.001.patch, 
> HDFS-12298-HDFS-7240.002.patch
>
>
> The block deletion service floods the log when deleting a large number of 
> blocks.
> We can write a simple performance test (create a bunch of block file 
> metadata but don't create real files) to reproduce this.
> The following are the logs from my local run.
> {noformat}
> 2017-08-14 16:20:59,992 [BlockDeletingService#3] INFO  
> utils.BackgroundService (BackgroundService.java:run(103)) - Number of 
> background tasks to execute : 10
> 2017-08-14 16:20:59,993 [BlockDeletingService#8] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : c555fc61-d4fa-4db8-afed-3837510d301d, To-Delete blocks : 2
> 2017-08-14 16:20:59,993 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : db7d6a21-e0da-4409-9b72-2ade7f1fad53, To-Delete blocks : 2
> 2017-08-14 16:20:59,993 [BlockDeletingService#8] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#c555fc61-d4fa-4db8-afed-3837510d301db89
> 2017-08-14 16:20:59,993 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#db7d6a21-e0da-4409-9b72-2ade7f1fad53b89
> 2017-08-14 16:20:59,997 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 005b32b3-c383-451b-a780-5a1a284a35a3, To-Delete blocks : 2
> 2017-08-14 16:20:59,997 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#005b32b3-c383-451b-a780-5a1a284a35a3b89
> 2017-08-14 16:20:59,998 [BlockDeletingService#6] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 9a7a9005-26a6-445e-a887-bee693240cb2, To-Delete blocks : 2
> 2017-08-14 16:21:00,005 [BlockDeletingService#6] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#9a7a9005-26a6-445e-a887-bee693240cb2b89
> 2017-08-14 16:21:00,005 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#db7d6a21-e0da-4409-9b72-2ade7f1fad53b9
> 2017-08-14 16:21:00,004 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#005b32b3-c383-451b-a780-5a1a284a35a3b9
> 2017-08-14 16:21:00,003 [BlockDeletingService#9] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 20f37c36-9ea3-46d7-9eaf-c7d3db478870, To-Delete blocks : 2
> 2017-08-14 16:21:00,007 [BlockDeletingService#9] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#20f37c36-9ea3-46d7-9eaf-c7d3db478870b89
> 2017-08-14 16:21:00,002 [BlockDeletingService#5] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 2b3f389c-d7fd-4990-83b5-02d8d40569e9, To-Delete blocks : 2
> 2017-08-14 16:21:00,001 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 6eabd191-54b6-4acd-bdc7-1bb05e81bd68, To-Delete blocks : 2
> 2017-08-14 16:21:00,000 [BlockDeletingService#4] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 2e16e39d-c000-41ae-afb3-1a1b3a43e9bc, To-Delete blocks : 2
> 2017-08-14 16:21:00,009 [BlockDeletingService#4] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#2e16e39d-c000-41ae-afb3-1a1b3a43e9bcb89
> 2017-08-14 16:21:00,009 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block 

[jira] [Commented] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126982#comment-16126982
 ] 

Hudson commented on HDFS-11696:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12188 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12188/])
HDFS-11696. Fix warnings from Spotbugs in hadoop-hdfs. Contributed by (yqlin: 
rev 2e43c28e01fe006210e71aab179527669f6412ed)
* (edit) hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/SlowDiskReports.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java


> Fix warnings from Spotbugs in hadoop-hdfs
> -
>
> Key: HDFS-11696
> URL: https://issues.apache.org/jira/browse/HDFS-11696
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-beta1
>
> Attachments: findbugsHtml.html, HADOOP-14337.001.patch, 
> HADOOP-14337.002.patch, HADOOP-14337.003.patch, HDFS-11696.004.patch, 
> HDFS-11696.005.patch, HDFS-11696.006.patch, HDFS-11696.007.patch, 
> HDFS-11696.008.patch, HDFS-11696.009.patch, HDFS-11696.010.patch
>
>
> There are 12 findbugs issues in total generated after switching from Findbugs 
> to Spotbugs across the project in HADOOP-14316. This JIRA focuses on cleaning 
> up the warnings within the scope of HDFS (mainly in hadoop-hdfs and 
> hadoop-hdfs-client).






[jira] [Commented] (HDFS-12298) Ozone: Block deletion service floods the log when deleting large number of block files

2017-08-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126972#comment-16126972
 ] 

Weiwei Yang commented on HDFS-12298:


The v2 patch looks good to me; I will commit this shortly. Thanks [~linyiqun]

> Ozone: Block deletion service floods the log when deleting large number of 
> block files
> --
>
> Key: HDFS-12298
> URL: https://issues.apache.org/jira/browse/HDFS-12298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12298-HDFS-7240.001.patch, 
> HDFS-12298-HDFS-7240.002.patch
>
>
> The block deletion service floods the log when deleting a large number of 
> blocks.
> We can write a simple performance test (create a bunch of block file 
> metadata without creating real files) to reproduce this.
> The following are the logs in my local.
> {noformat}
> 2017-08-14 16:20:59,992 [BlockDeletingService#3] INFO  
> utils.BackgroundService (BackgroundService.java:run(103)) - Number of 
> background tasks to execute : 10
> 2017-08-14 16:20:59,993 [BlockDeletingService#8] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : c555fc61-d4fa-4db8-afed-3837510d301d, To-Delete blocks : 2
> 2017-08-14 16:20:59,993 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : db7d6a21-e0da-4409-9b72-2ade7f1fad53, To-Delete blocks : 2
> 2017-08-14 16:20:59,993 [BlockDeletingService#8] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#c555fc61-d4fa-4db8-afed-3837510d301db89
> 2017-08-14 16:20:59,993 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#db7d6a21-e0da-4409-9b72-2ade7f1fad53b89
> 2017-08-14 16:20:59,997 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 005b32b3-c383-451b-a780-5a1a284a35a3, To-Delete blocks : 2
> 2017-08-14 16:20:59,997 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#005b32b3-c383-451b-a780-5a1a284a35a3b89
> 2017-08-14 16:20:59,998 [BlockDeletingService#6] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 9a7a9005-26a6-445e-a887-bee693240cb2, To-Delete blocks : 2
> 2017-08-14 16:21:00,005 [BlockDeletingService#6] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#9a7a9005-26a6-445e-a887-bee693240cb2b89
> 2017-08-14 16:21:00,005 [BlockDeletingService#1] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#db7d6a21-e0da-4409-9b72-2ade7f1fad53b9
> 2017-08-14 16:21:00,004 [BlockDeletingService#2] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#005b32b3-c383-451b-a780-5a1a284a35a3b9
> 2017-08-14 16:21:00,003 [BlockDeletingService#9] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 20f37c36-9ea3-46d7-9eaf-c7d3db478870, To-Delete blocks : 2
> 2017-08-14 16:21:00,007 [BlockDeletingService#9] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#20f37c36-9ea3-46d7-9eaf-c7d3db478870b89
> 2017-08-14 16:21:00,002 [BlockDeletingService#5] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 2b3f389c-d7fd-4990-83b5-02d8d40569e9, To-Delete blocks : 2
> 2017-08-14 16:21:00,001 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 6eabd191-54b6-4acd-bdc7-1bb05e81bd68, To-Delete blocks : 2
> 2017-08-14 16:21:00,000 [BlockDeletingService#4] INFO  
> background.BlockDeletingService (BlockDeletingService.java:call(173))  - 
> Container : 2e16e39d-c000-41ae-afb3-1a1b3a43e9bc, To-Delete blocks : 2
> 2017-08-14 16:21:00,009 [BlockDeletingService#4] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#2e16e39d-c000-41ae-afb3-1a1b3a43e9bcb89
> 2017-08-14 16:21:00,009 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:lambda$0(177)) 
>  - Deleting block #deleting#6eabd191-54b6-4acd-bdc7-1bb05e81bd68b89
> 2017-08-14 16:20:59,999 [BlockDeletingService#7] INFO  
> background.BlockDeletingService 

[jira] [Updated] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs

2017-08-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11696:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~andrew.wang] for the review!

> Fix warnings from Spotbugs in hadoop-hdfs
> -
>
> Key: HDFS-11696
> URL: https://issues.apache.org/jira/browse/HDFS-11696
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-beta1
>
> Attachments: findbugsHtml.html, HADOOP-14337.001.patch, 
> HADOOP-14337.002.patch, HADOOP-14337.003.patch, HDFS-11696.004.patch, 
> HDFS-11696.005.patch, HDFS-11696.006.patch, HDFS-11696.007.patch, 
> HDFS-11696.008.patch, HDFS-11696.009.patch, HDFS-11696.010.patch
>
>
> There are 12 findbugs issues in total generated after switching from Findbugs 
> to Spotbugs across the project in HADOOP-14316. This JIRA focuses on cleaning 
> up the warnings within the scope of HDFS (mainly in hadoop-hdfs and 
> hadoop-hdfs-client).






[jira] [Commented] (HDFS-12298) Ozone: Block deletion service floods the log when deleting large number of block files

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126880#comment-16126880
 ] 

Hadoop QA commented on HDFS-12298:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12298 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881869/HDFS-12298-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc58e0d70ec5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 57dfe49 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20699/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20699/testReport/ |
| asflicense | 

[jira] [Updated] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12230:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the contribution; I have committed this to the feature 
branch.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12230-HDFS-7240.001.patch, 
> HDFS-12230-HDFS-7240.002.patch, HDFS-12230-HDFS-7240.003.patch
>
>
> It would be nice to add a creation time field to bucket info, which will be 
> returned by the {{infoBucket}} and {{listBucket}} commands.






[jira] [Resolved] (HDFS-11814) Benchmark and tune for preferred default cell size

2017-08-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-11814.

Resolution: Done

Thanks Wei! Let's resolve this one and continue with changing the defaults in 
HDFS-12303.

> Benchmark and tune for preferred default cell size
> -
>
> Key: HDFS-11814
> URL: https://issues.apache.org/jira/browse/HDFS-11814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: RS-6-3-Concurrent.png, RS-Read.png, RS-Write.png
>
>
> Doing some benchmarking to see which cell size is more desirable than the 
> current 64KB default






[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12303:
---
  Labels: hdfs-ec-3.0-must-do  (was: )
Priority: Blocker  (was: Major)
Hadoop Flags: Incompatible change

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> As discussed in HDFS-11814, 1MB cell size shows better performance than 
> others during the tests.






[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126845#comment-16126845
 ] 

Xiao Chen commented on HDFS-10899:
--

Looked at Daryn's comment about edit log buffering and experimented.

For a {{SetXAttrOp}}, since re-encryption always updates the single encryption 
xattr, the size of one {{logEdit}} is between 100 and 200 bytes (including the 
constant 17-byte overhead). The default buffer size is 512KB, non-configurable, 
for both the 
[qjm|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java#L99]
 and 
[fjm|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java#L68],
 so the default batch size of 1000 is safe, as long as we {{logSync}} after 
each batch, which is the current behavior.

As a safety check, I'm adding a prominent warning if the batch size is larger than 2000.

Also running benchmarks to verify that NN throughput is not hurt. Will post when 
all results are ready.
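A back-of-the-envelope sketch of the buffering argument above: the 512KB default buffer and the ~200-byte worst-case per-edit size are taken from the comment, and the rest is simple arithmetic showing that both the default batch of 1000 and the 2000 warning threshold fit in one buffer.

```java
public class EditBufferCheck {
    static final int BUFFER_BYTES = 512 * 1024;   // default edit log buffer (qjm/fjm)
    static final int MAX_EDIT_BYTES = 200;        // worst-case SetXAttrOp logEdit size

    // Does a batch of this many edits fit in one buffer before logSync()?
    static boolean batchFits(int batchSize) {
        return (long) batchSize * MAX_EDIT_BYTES <= BUFFER_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(batchFits(1000));  // default batch: 200,000 B < 524,288 B
        System.out.println(batchFits(2000));  // warning threshold: 400,000 B, still fits
        System.out.println(batchFits(3000));  // 600,000 B would exceed the buffer
    }
}
```

Printed in order, the three checks yield true, true, false, which is why 2000 is a reasonable point for a warning rather than a hard limit.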

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.13.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, 
> Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126842#comment-16126842
 ] 

Weiwei Yang commented on HDFS-12230:


+1 on the v3 patch; I will commit this shortly.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch, 
> HDFS-12230-HDFS-7240.002.patch, HDFS-12230-HDFS-7240.003.patch
>
>
> It would be nice to add a creation time field to bucket info, which will be 
> returned by the {{infoBucket}} and {{listBucket}} commands.






[jira] [Commented] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126840#comment-16126840
 ] 

Hadoop QA commented on HDFS-12230:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 40s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12230 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881866/HDFS-12230-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cd8b04c957a6 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 57dfe49 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|