[jira] [Commented] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089336#comment-16089336
 ] 

Hadoop QA commented on HDFS-12098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 16 new + 154 unchanged - 0 fixed = 170 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.datanode.DataNode.datanodeStateMachine; locked 
42% of time. Unsynchronized access at DataNode.java:[line 3228] |
| Failed junit tests | 
hadoop.ozone.container.replication.TestContainerReplicationManager |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.TestStorageContainerManager |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877520/HDFS-12098-HDFS-7240.testcase.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a4fe1c2f42ae 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1bec6a1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20299/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
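
The FindBugs warning above fires when a field is written while holding a lock 
in some code paths but accessed without it in others. A minimal hypothetical 
illustration of the pattern (not the actual DataNode code):

{code}
// Hypothetical sketch of the "inconsistent synchronization" pattern;
// illustrative only, not the DataNode source.
public class Example {
  static class StateMachine {
    void close() { /* release resources */ }
  }

  private StateMachine stateMachine;

  public synchronized void start() {
    // write guarded by the intrinsic lock on "this"
    stateMachine = new StateMachine();
  }

  public void shutdown() {
    // unguarded read: FindBugs counts accesses like this one against the
    // "locked 42% of time" ratio and flags the field
    if (stateMachine != null) {
      stateMachine.close();
    }
  }
}
{code}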

[jira] [Commented] (HDFS-12149) Ozone: RocksDB implementation of ozone metadata store

2017-07-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089327#comment-16089327
 ] 

Allen Wittenauer commented on HDFS-12149:
-

FWIW, I'd really like for us to rip LevelDB completely out of Hadoop. It 
absolutely destroys us on portability.  See HADOOP-11790 for more.

> Ozone: RocksDB implementation of ozone metadata store
> -
>
> Key: HDFS-12149
> URL: https://issues.apache.org/jira/browse/HDFS-12149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-12069 added a general interface for the ozone metadata store; we 
> already have a leveldb implementation. This JIRA is to track the work of a 
> rocksdb implementation.
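
As a rough sketch of what such a pluggable store can look like, with 
hypothetical names, since the actual interface from HDFS-12069 may differ:

{code}
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of a pluggable key-value metadata store; one
// implementation per backing engine (e.g. LevelDBStore, RocksDBStore),
// selected by configuration.
public interface MetadataStore extends Closeable {
  void put(byte[] key, byte[] value) throws IOException;
  byte[] get(byte[] key) throws IOException;
  void delete(byte[] key) throws IOException;
}
{code}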



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089283#comment-16089283
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/17/17 4:01 AM:
-

Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

If you apply this patch, the test fails. Some of the log output from step 4 is 
interesting:

{noformat}
2017-07-17 11:46:02,451 [Datanode State Machine Thread - 0] INFO  ipc.Client 
(Client.java:handleConnectionFailure(933)) - Retrying connect to server: 
localhost/127.0.0.1:51183. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-17 11:46:02,467 [Datanode State Machine Thread - 0] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state REGISTER
2017-07-17 11:46:02,468 [Datanode State Machine Thread - 1] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state HEARTBEAT
2017-07-17 11:46:02,469 [Datanode State Machine Thread - 2] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
2017-07-17 11:46:02,471 [Datanode State Machine Thread - 3] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
{noformat}

Instead of transitioning to the {{HEARTBEAT}} state, it transitioned to {{SHUTDOWN}}.

You might have noticed that the patch changes more code than just adding a 
test; that is because of the reason I mentioned earlier. I have also added a 
method to check whether a datanode is registered with scm, so that we can 
check the datanode state even when scm is not started.
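
For illustration, a hypothetical sketch of how such a check could be used 
inside the test; the names here are made up, the real helper is in the 
attached patch:

{code}
// Hypothetical sketch (illustrative names, not from the attached patch):
// poll until the datanode reports itself registered with scm, instead of
// asserting immediately after scm starts.
GenericTestUtils.waitFor(
    () -> cluster.isDatanodeRegisteredToScm(),  // hypothetical helper
    100,      // re-check every 100 ms
    30000);   // give up after 30 seconds
{code}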

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks


was (Author: cheersyang):
Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

Step 4 prints the following log:

{noformat}
2017-07-17 11:46:02,451 [Datanode State Machine Thread - 0] INFO  ipc.Client 
(Client.java:handleConnectionFailure(933)) - Retrying connect to server: 
localhost/127.0.0.1:51183. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-17 11:46:02,467 [Datanode State Machine Thread - 0] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state REGISTER
2017-07-17 11:46:02,468 [Datanode State Machine Thread - 1] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state HEARTBEAT
2017-07-17 11:46:02,469 [Datanode State Machine Thread - 2] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
2017-07-17 11:46:02,471 [Datanode State Machine Thread - 3] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
2017-07-17 11:46:03,457 [Datanode State Machine Thread - 0] INFO  
statemachine.DatanodeStateMachine 
(DatanodeStateMachine.java:lambda$startDaemon$0(272))  - Ozone container 
server started.
{noformat}

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>

[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089283#comment-16089283
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/17/17 3:59 AM:
-

Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

Step 4 prints the following log:

{noformat}
2017-07-17 11:46:02,451 [Datanode State Machine Thread - 0] INFO  ipc.Client 
(Client.java:handleConnectionFailure(933)) - Retrying connect to server: 
localhost/127.0.0.1:51183. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-17 11:46:02,467 [Datanode State Machine Thread - 0] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state REGISTER
2017-07-17 11:46:02,468 [Datanode State Machine Thread - 1] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state HEARTBEAT
2017-07-17 11:46:02,469 [Datanode State Machine Thread - 2] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
2017-07-17 11:46:02,471 [Datanode State Machine Thread - 3] INFO  
endpoint.VersionEndpointTask (VersionEndpointTask.java:call(61))  - Version 
endpoint task (localhost/127.0.0.1:51183) transited to state SHUTDOWN
2017-07-17 11:46:03,457 [Datanode State Machine Thread - 0] INFO  
statemachine.DatanodeStateMachine 
(DatanodeStateMachine.java:lambda$startDaemon$0(272))  - Ozone container 
server started.
{noformat}

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks


was (Author: cheersyang):
Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase.patch, Screen 
> Shot 2017-07-11 at 4.58.08 PM.png, thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> 

[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089283#comment-16089283
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/17/17 3:58 AM:
-

Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks


was (Author: cheersyang):
Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase.patch, Screen 
> Shot 2017-07-11 at 4.58.08 PM.png, thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> {noformat}
> This is expected because scm is not started yet.
> 3. Start scm
> {{./bin/hdfs scm}}
> Expecting the datanode to register with this scm, and expecting the following 
> log in scm:
> {noformat}
> 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: 
> af22862d-aafa-4941-9073-53224ae43e2c Registered.
> {noformat}
> but did *NOT* see this log. (_I debugged into the code and found the datanode 
> state was unexpectedly transited to SHUTDOWN because of thread leaks; each of 
> those leaked threads counted toward setting the next state, and they all set 
> it to the SHUTDOWN state._)
> 4. Create a container from scm CLI
> {{./bin/hdfs scm -container -create -c 20170714c0}}
> This fails with the following exception:
> {noformat}
> Creating container : 20170714c0.
> Error executing 
> command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException):
>  Unable to create container while in chill mode
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73)
> {noformat}
> 

[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Status: Patch Available  (was: In Progress)

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase.patch, Screen 
> Shot 2017-07-11 at 4.58.08 PM.png, thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> {noformat}
> This is expected because scm is not started yet.
> 3. Start scm
> {{./bin/hdfs scm}}
> Expecting the datanode to register with this scm, and expecting the following 
> log in scm:
> {noformat}
> 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: 
> af22862d-aafa-4941-9073-53224ae43e2c Registered.
> {noformat}
> but did *NOT* see this log. (_I debugged into the code and found the datanode 
> state was unexpectedly transited to SHUTDOWN because of thread leaks; each of 
> those leaked threads counted toward setting the next state, and they all set 
> it to the SHUTDOWN state._)
> 4. Create a container from scm CLI
> {{./bin/hdfs scm -container -create -c 20170714c0}}
> This fails with the following exception:
> {noformat}
> Creating container : 20170714c0.
> Error executing 
> command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException):
>  Unable to create container while in chill mode
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73)
> {noformat}
> The datanode was not registered with scm, thus it is still in chill mode.
> *Note*: if we start scm first, there is no such issue; I can create a 
> container from the CLI without any problem.
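
To illustrate the thread-leak race described in step 3 above, a rough sketch 
with hypothetical names, not the actual state machine code: when several 
leaked worker threads each compute and store the "next" state, the last writer 
wins, and a stale thread can force the machine into SHUTDOWN.

{code}
// Rough hypothetical illustration of the described race; each leaked
// thread independently computes a next state, and the last unsynchronized
// write wins, so a stale thread can push the machine to SHUTDOWN.
enum State { INIT, REGISTER, HEARTBEAT, SHUTDOWN }

class LeakyStateMachine {
  private volatile State state = State.INIT;

  void runTask(State computedNext) {
    // no coordination between threads: whichever leaked thread writes
    // last determines the final state, even if its snapshot is stale
    state = computedNext;
  }
}
{code}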



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089283#comment-16089283
 ] 

Weiwei Yang commented on HDFS-12098:


Attached a test case patch to reproduce this issue. Please take a look at 
[^HDFS-12098-HDFS-7240.testcase.patch]. This patch simulates the following 
scenario:

# Start a mini ozone cluster without starting scm
# The datanode is unable to register with scm
# Start scm and wait for the datanode to register
# After waiting a while, the datanode is still unable to register with scm

If you apply this patch, the test fails. You might have noticed that the patch 
changes more code than just adding a test; that is because of the reason I 
mentioned earlier. I have also added a method to check whether a datanode is 
registered with scm, so that we can check the datanode state even when scm is 
not started.

I also have a patch that fixes this issue; with that patch applied, this test 
passes. I am ready to share it as well.

Thanks

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase.patch, Screen 
> Shot 2017-07-11 at 4.58.08 PM.png, thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> {noformat}
> This is expected because scm is not started yet.
> 3. Start scm
> {{./bin/hdfs scm}}
> Expecting the datanode to register with this scm, and expecting the following 
> log in scm:
> {noformat}
> 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: 
> af22862d-aafa-4941-9073-53224ae43e2c Registered.
> {noformat}
> but did *NOT* see this log. (_I debugged into the code and found the datanode 
> state was unexpectedly transited to SHUTDOWN because of thread leaks; each of 
> those leaked threads counted toward setting the next state, and they all set 
> it to the SHUTDOWN state._)
> 4. Create a container from scm CLI
> {{./bin/hdfs scm -container -create -c 20170714c0}}
> This fails with the following exception:
> {noformat}
> Creating container : 20170714c0.
> Error executing 
> command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException):
>  Unable to create container while in chill mode
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73)
> {noformat}
> The datanode was not registered with scm, thus it is still in chill mode.
> *Note*: if we start scm first, there is no such issue; I can create a 
> container from the CLI without any problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Attachment: HDFS-12098-HDFS-7240.testcase.patch

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, HDFS-12098-HDFS-7240.testcase.patch, Screen 
> Shot 2017-07-11 at 4.58.08 PM.png, thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> {noformat}
> This is expected because scm is not started yet.
> 3. Start scm
> {{./bin/hdfs scm}}
> Expecting the datanode to register with this scm, and expecting the following 
> log in scm:
> {noformat}
> 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: 
> af22862d-aafa-4941-9073-53224ae43e2c Registered.
> {noformat}
> but did *NOT* see this log. (_I debugged into the code and found the datanode 
> state was unexpectedly transited to SHUTDOWN because of thread leaks; each of 
> those leaked threads counted toward setting the next state, and they all set 
> it to the SHUTDOWN state._)
> 4. Create a container from scm CLI
> {{./bin/hdfs scm -container -create -c 20170714c0}}
> This fails with the following exception:
> {noformat}
> Creating container : 20170714c0.
> Error executing 
> command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException):
>  Unable to create container while in chill mode
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73)
> {noformat}
> The datanode was not registered with scm, thus it is still in chill mode.
> *Note*: if we start scm first, there is no such issue; I can create a 
> container from the CLI without any problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Attachment: (was: HDFS-12098-HDFS-7240.testcase.patch)

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: disabled-scm-test.patch, HDFS-12098-HDFS-7240.001.patch, 
> HDFS-12098-HDFS-7240.002.patch, Screen Shot 2017-07-11 at 4.58.08 PM.png, 
> thread_dump.log
>
>
> Reproducing steps
> 1. Start namenode
> {{./bin/hdfs --daemon start namenode}}
> 2. Start datanode
> {{./bin/hdfs datanode}}
> You will see the following connection issues:
> {noformat}
> 17/07/13 21:16:48 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:49 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 1 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:50 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 2 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> 17/07/13 21:16:51 INFO ipc.Client: Retrying connect to server: 
> ozone1.fyre.ibm.com/172.16.165.133:9861. Already tried 3 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
> SECONDS)
> {noformat}
> This is expected because scm is not started yet.
> 3. Start scm
> {{./bin/hdfs scm}}
> Expecting the datanode to register with this scm, and expecting the following 
> log in scm:
> {noformat}
> 17/07/13 21:22:30 INFO node.SCMNodeManager: Data node with ID: 
> af22862d-aafa-4941-9073-53224ae43e2c Registered.
> {noformat}
> but did *NOT* see this log. (_I debugged into the code and found the datanode 
> state was unexpectedly transited to SHUTDOWN because of thread leaks; each of 
> those leaked threads counted toward setting the next state, and they all set 
> it to the SHUTDOWN state._)
> 4. Create a container from scm CLI
> {{./bin/hdfs scm -container -create -c 20170714c0}}
> This fails with the following exception:
> {noformat}
> Creating container : 20170714c0.
> Error executing 
> command:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.scm.exceptions.SCMException):
>  Unable to create container while in chill mode
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:241)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateContainer(StorageContainerManager.java:392)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.allocateContainer(StorageContainerLocationProtocolServerSideTranslatorPB.java:73)
> {noformat}
> The datanode was not registered with scm, thus it is still in chill mode.
> *Note*: if we start scm first, there is no such issue; I can create a 
> container from the CLI without any problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12146) [SPS] : Fix TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks

2017-07-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089263#comment-16089263
 ] 

Rakesh R commented on HDFS-12146:
-

Thank you [~surendrasingh]. Do you mind correcting the 
{{TestStoragePolicySatisfier#testSPSWhenFileHasLowRedundancyBlocks}} test as 
well?
{code}
  cluster.restartNameNodes();
  cluster.restartDataNode(list.get(0), true);
  cluster.restartDataNode(list.get(1), true);
  cluster.waitActive();
  fs.satisfyStoragePolicy(filePath);
  Thread.sleep(3000 * 6);
  cluster.restartDataNode(list.get(2), true);
{code}

Also, I noticed {{Thread.sleep(3000 * 6);}} in these two {{LowRedundancy}} test 
cases. Do you think we could replace this constant sleep time with the 
following approach, or with better logic than this?
{code}
  fs.satisfyStoragePolicy(filePath);
  DFSTestUtil.waitExpectedStorageType(filePath.toString(),
  StorageType.ARCHIVE, 2, 3, cluster.getFileSystem());
  cluster.restartDataNode(list.get(2), false);
  DFSTestUtil.waitExpectedStorageType(filePath.toString(),
  StorageType.ARCHIVE, 3, 3, cluster.getFileSystem());
{code}
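
Polling for the expected storage type, as above, makes the test finish as soon 
as the condition holds and fail fast with a clear timeout, whereas a fixed 
{{Thread.sleep(3000 * 6)}} both slows every run and can still race on loaded 
build machines.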

> [SPS] : Fix 
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
> ---
>
> Key: HDFS-12146
> URL: https://issues.apache.org/jira/browse/HDFS-12146
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12146-HDFS-10285.001.patch
>
>
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
>  failed in many builds with a port bind exception. I feel we do not need to 
> restart datanodes on the same port; we are just checking the block redundancy 
> scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12149) Ozone: RocksDB implementation of ozone metadata store

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089218#comment-16089218
 ] 

Weiwei Yang commented on HDFS-12149:


Thanks [~anu] for the message; I will work on this.

bq. I am aware that what we have is a generic plugin layer which can use most 
key value stores, and RocksDB is just a specific instance of it and it is 
trivial for us to revert it, even if it is committed.

That's correct. We will follow the Legal team's decision as you mentioned. It 
is trivial to revert this with a simple switch. Thank you.
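
For example, a configuration switch along these lines could select the 
implementation; the property name here is hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical example of such a switch; the property name is made up and
// may differ in the final patch.
Configuration conf = new Configuration();
conf.set("ozone.metastore.impl", "RocksDB");  // or "LevelDB"
{code}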

> Ozone: RocksDB implementation of ozone metadata store
> -
>
> Key: HDFS-12149
> URL: https://issues.apache.org/jira/browse/HDFS-12149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-12069 added a general interface for the ozone metadata store; we 
> already have a leveldb implementation. This JIRA is to track the work of a 
> rocksdb implementation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12149) Ozone: RocksDB implementation of ozone metadata store

2017-07-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088815#comment-16088815
 ] 

Anu Engineer edited comment on HDFS-12149 at 7/17/17 1:38 AM:
--

[~jnp], [~szetszwo], [~cheersyang], [~xyao], [~msingh], [~nandakumar131], 
[~linyiqun], [~vagarychen], [~yuanbo], [~arpitagarwal]

There is an interesting thread in the Legal section which we should be aware 
of before we post this patch, since it brings in a RocksDB dependency.

TL;DR: Due to the RocksDB license change, we might be able to use RocksDB. 
Approximately 3 hours ago RocksDB switched over to the Apache 2 license; 
before that, RocksDB was declared *persona non grata* in the Apache world.

Details: In the Apache JIRA LEGAL-303, it was clarified that Facebook uses its 
own license -- the "Facebook BSD+Patents license". This was deemed not 
suitable for use in Apache.

Facebook also clarified that the intent of this license was to be different 
from the Apache license:
https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16046579=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16046579


Around 12 hours ago (Sat, Jul 15th, 2017), Chris Mattmann sent out a circular 
to all Apache PMCs stating that RocksDB falls under a Category X license, i.e. 
a license that cannot be included with Apache products. This naturally means 
that we cannot commit a change that has a RocksDB dependency.

Fortunately, around 3 hours ago (Sat, Jul 15th, 2017 7:00 PM), Facebook 
committed a new license to RocksDB. That is a classic Apache 2 license. So I 
think we are good to use RocksDB since it is relicensed as Apache 2 (IANAL).

Please be aware that we might have to revert this change if Apache finally 
decides that we cannot use RocksDB.




was (Author: anu):
[~jnp], [~szetszwo], [~cheersyang], [~xyao], [~msingh], [~nandakumar131], 
[~linyiqun], [~vagarychen], [~yuanbo], [~arpitagarwal]

There is an interesting thread in the Legal section which we should be aware 
of before we post this patch, since it brings in a RocksDB dependency.

TL;DR: Due to the RocksDB license change, we might be able to use RocksDB. 
Approximately 3 hours ago RocksDB switched over to the Apache 2 license; 
before that, RocksDB was declared *persona non grata* in the Apache world.

Details: In the Apache JIRA LEGAL-303, it was clarified that Facebook uses its 
own license -- the "Facebook BSD+Patents license". This was deemed not 
suitable for use in Apache.

Facebook also clarified that the intent of this license was to be different 
from the Apache license:
https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16046579=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16046579


Around 12 hours ago (Sat, Jul 15th, 2017), Chris Mattmann sent out a circular 
to all Apache PMCs stating that RocksDB falls under a Category X license, i.e. 
a license that cannot be included with Apache products. This naturally means 
that we cannot commit a change that has a RocksDB dependency.

Fortunately, around 3 hours ago (Sat, Jul 15th, 2017 7:00 PM), Facebook 
committed a new license to RocksDB. That is a classic Apache 2 license. So I 
think we are good to use RocksDB since it is relicensed in Apache (IANAL).

Please be aware that we might have to revert this change if Apache finally 
decides that we cannot use RocksDB.



> Ozone: RocksDB implementation of ozone metadata store
> -
>
> Key: HDFS-12149
> URL: https://issues.apache.org/jira/browse/HDFS-12149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-12069 added a general interface for the ozone metadata store; we 
> already have a leveldb implementation. This JIRA is to track the work of a 
> rocksdb implementation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12149) Ozone: RocksDB implementation of ozone metadata store

2017-07-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089216#comment-16089216
 ] 

Anu Engineer commented on HDFS-12149:
-

[~cheersyang] Based on comments in LEGAL-303, I think we are clear to use 
RocksDB. Please go ahead with your patch.

> Ozone: RocksDB implementation of ozone metadata store
> -
>
> Key: HDFS-12149
> URL: https://issues.apache.org/jira/browse/HDFS-12149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-12069 added a general interface for the ozone metadata store; we 
> already have a leveldb implementation. This JIRA is to track the work of a 
> rocksdb implementation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12067) Add command help description about 'hdfs dfsadmin -help getVolumeReport' command.

2017-07-16 Thread steven-wugang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089207#comment-16089207
 ] 

steven-wugang commented on HDFS-12067:
--

[~brahma] Thanks for your review, I'd be happy to do it.

> Add command help description about 'hdfs dfsadmin -help getVolumeReport' 
> command.
> -
>
> Key: HDFS-12067
> URL: https://issues.apache.org/jira/browse/HDFS-12067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: steven-wugang
>Assignee: steven-wugang
> Attachments: HDFS_12067.001.patch, HDFS_12067.002.patch, 
> HDFS_12067.003.patch, HDFS_12067.004.patch, HDFS-12067.patch
>
>
> When I use the command, I see the command help description, but the help 
> description doesn't make it clear, especially the argument 'port'. It is 
> easy to mistake it for the port (default 9866) in 'dfs.datanode.address'. 
> Therefore, in order to use this command better, I added some descriptions 
> about the arguments.
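
For context, a hypothetical invocation; the report is requested over the 
datanode's IPC address, not the {{dfs.datanode.address}} data-transfer port:

{noformat}
hdfs dfsadmin -getVolumeReport datanode1.example.com:9867
{noformat}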



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12067) Add command help description about 'hdfs dfsadmin -help getVolumeReport' command.

2017-07-16 Thread steven-wugang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089206#comment-16089206
 ] 

steven-wugang commented on HDFS-12067:
--

[~brahma] ok

> Add command help description about 'hdfs dfsadmin -help getVolumeReport' 
> command.
> -
>
> Key: HDFS-12067
> URL: https://issues.apache.org/jira/browse/HDFS-12067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: steven-wugang
>Assignee: steven-wugang
> Attachments: HDFS_12067.001.patch, HDFS_12067.002.patch, 
> HDFS_12067.003.patch, HDFS_12067.004.patch, HDFS-12067.patch
>
>
> When I use the command, I see the command help description, but the help 
> description doesn't make it clear, especially the argument 'port'. It is 
> easy to mistake it for the port (default 9866) in 'dfs.datanode.address'. 
> Therefore, in order to use this command better, I added some descriptions 
> about the arguments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089095#comment-16089095
 ] 

Hadoop QA commented on HDFS-12147:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.container.replication.TestContainerReplicationManager |
|   | hadoop.ozone.web.client.TestBucketsRatis |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877484/HDFS-12147-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 107a3a47f41a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1bec6a1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-12115) Ozone: SCM: Add queryNode RPC Call

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089091#comment-16089091
 ] 

Hadoop QA commented on HDFS-12115:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
153 unchanged - 0 fixed = 160 total (was 153) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | 
hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.ozone.web.client.TestBucketsRatis |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.ozone.container.placement.TestContainerPlacement |
|   | hadoop.ozone.container.replication.TestContainerReplicationManager |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12115 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-11786) Add support to make copyFromLocal multi threaded

2017-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089062#comment-16089062
 ] 

Hudson commented on HDFS-11786:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12017 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12017/])
HDFS-11786. Add support to make copyFromLocal multi threaded. (aengineer: rev 
02b141ac6059323ec43e472ca36dc570fdca386f)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCopyPreserveFlag.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCopyFromLocal.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java


> Add support to make copyFromLocal multi threaded
> 
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11786.001.patch, HDFS-11786.002.patch, 
> HDFS-11786.003.patch, HDFS-11786.004.patch, HDFS-11786.005.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In cases where multiple files need to be uploaded to HDFS, a single thread 
> reads each file and then copies the data to the cluster.
> This copy to HDFS can be made faster by uploading multiple files in parallel.
> I am attaching an initial patch so that I can get some early feedback.
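As a rough illustration of the idea, uploads can be overlapped with a fixed
thread pool on top of the public FileSystem API. This is a minimal sketch, not
the committed implementation; the pool size, paths, and error handling are
placeholders:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelCopyFromLocal {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Placeholder local files and target directory.
    List<Path> localFiles = Arrays.asList(
        new Path("file:///tmp/a.txt"), new Path("file:///tmp/b.txt"));
    Path target = new Path("/user/demo/uploads");

    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (Path src : localFiles) {
      // Each file gets its own task, so uploads proceed in parallel.
      pool.execute(() -> {
        try {
          fs.copyFromLocalFile(src, target);
        } catch (Exception e) {
          e.printStackTrace();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }
}
{code}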



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11786) Add support to make copyFromLocal multi threaded

2017-07-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11786:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.0-beta1
Target Version/s: 3.0.0-beta1
  Status: Resolved  (was: Patch Available)

[~msingh] Thank you for the contribution. I have committed this to the trunk.

> Add support to make copyFromLocal multi threaded
> 
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11786.001.patch, HDFS-11786.002.patch, 
> HDFS-11786.003.patch, HDFS-11786.004.patch, HDFS-11786.005.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In cases where multiple files need to be uploaded to HDFS, a single thread 
> reads each file and then copies the data to the cluster.
> This copy to HDFS can be made faster by uploading multiple files in parallel.
> I am attaching an initial patch so that I can get some early feedback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089044#comment-16089044
 ] 

Nandakumar commented on HDFS-12147:
---

Thanks [~anu] for the update; patch v1 is rebased on top of the latest commit.

> Ozone: KSM: Add checkBucketAccess
> -
>
> Key: HDFS-12147
> URL: https://issues.apache.org/jira/browse/HDFS-12147
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12147-HDFS-7240.000.patch, 
> HDFS-12147-HDFS-7240.001.patch
>
>
> Checks if the caller has access to a given bucket.
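For readers outside the Ozone branch, here is a purely hypothetical sketch of
the shape such a check can take; the class, ACL model, and method signature
below are illustrative stand-ins, not the API added by this patch:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch only; not the HDFS-12147 implementation. */
public class BucketAccessCheck {

  /** Minimal ACL entry: a user and the rights granted to that user. */
  static class Acl {
    final String user;
    final List<String> rights;
    Acl(String user, List<String> rights) {
      this.user = user;
      this.rights = rights;
    }
  }

  private final Map<String, List<Acl>> bucketAcls = new HashMap<>();

  /** Returns true if any ACL entry grants 'right' to 'user' on 'bucket'. */
  boolean checkBucketAccess(String bucket, String user, String right) {
    for (Acl acl : bucketAcls.getOrDefault(bucket, new ArrayList<Acl>())) {
      if (acl.user.equals(user) && acl.rights.contains(right)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    BucketAccessCheck check = new BucketAccessCheck();
    check.bucketAcls.put("vol1/bucket1",
        Arrays.asList(new Acl("alice", Arrays.asList("READ", "WRITE"))));
    System.out.println(check.checkBucketAccess("vol1/bucket1", "alice", "READ"));
    System.out.println(check.checkBucketAccess("vol1/bucket1", "bob", "READ"));
  }
}
{code}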



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089041#comment-16089041
 ] 

Hadoop QA commented on HDFS-12117:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12117 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877483/HDFS-12117.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20297/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS lacks implementations of the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the design approach 
> already used by the other WebHDFS methods in the current HttpFS project, and 
> I'll be proposing an initial patch soon for review.
>  
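For reference, the snapshot operations in question are already reachable
through the generic FileSystem API. A minimal sketch of driving them over HTTP
follows; the host, port, and paths are placeholders, and the HttpFS endpoint
can be substituted once the methods are implemented there:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SnapshotOverHttp {
  public static void main(String[] args) throws Exception {
    // webhdfs:// against the NameNode today; the same calls should work
    // against HttpFS once the SNAPSHOT operations are implemented there.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode.example.com:9870"), new Configuration());
    Path dir = new Path("/user/demo/snapdir"); // must be snapshottable
    Path snap = fs.createSnapshot(dir, "s1");  // PUT ...?op=CREATESNAPSHOT
    System.out.println("Created " + snap);
    fs.renameSnapshot(dir, "s1", "s2");        // PUT ...?op=RENAMESNAPSHOT
    fs.deleteSnapshot(dir, "s2");              // DELETE ...?op=DELETESNAPSHOT
  }
}
{code}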



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12147:
--
Attachment: HDFS-12147-HDFS-7240.001.patch

> Ozone: KSM: Add checkBucketAccess
> -
>
> Key: HDFS-12147
> URL: https://issues.apache.org/jira/browse/HDFS-12147
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12147-HDFS-7240.000.patch, 
> HDFS-12147-HDFS-7240.001.patch
>
>
> Checks if the caller has access to a given bucket.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-16 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Attachment: HDFS-12117.003.patch

Renamed the patch.

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS lacks implementations of the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the design approach 
> already used by the other WebHDFS methods in the current HttpFS project, and 
> I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-16 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Status: Patch Available  (was: In Progress)

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS lacks implementations of the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the design approach 
> already used by the other WebHDFS methods in the current HttpFS project, and 
> I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-16 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Status: In Progress  (was: Patch Available)

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS lacks implementations of the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the design approach 
> already used by the other WebHDFS methods in the current HttpFS project, and 
> I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12148) Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has some missing properties

2017-07-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089035#comment-16089035
 ] 

Anu Engineer commented on HDFS-12148:
-

Thank you for taking care of this. I have updated HDFS-12115 and removed the 
changes to ozone-default.xml.


> Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has 
> some missing properties
> 
>
> Key: HDFS-12148
> URL: https://issues.apache.org/jira/browse/HDFS-12148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12148-HDFS-7240.001.patch
>
>
> The following properties, added by HDFS-11493, are missing from ozone-default.xml:
> {noformat}
> ozone.scm.max.container.report.threads
> ozone.scm.container.report.processing.interval.seconds
> ozone.scm.container.reports.wait.timeout.seconds
> {noformat}
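TestOzoneConfigurationFields compares the keys referenced in code against the
entries in ozone-default.xml, so the fix is one entry per missing key. A sketch
of the expected shape, with a placeholder value and description rather than the
values in the committed patch:

{code}
<property>
  <name>ozone.scm.max.container.report.threads</name>
  <!-- Placeholder value and description for illustration only. -->
  <value>100</value>
  <description>
    Maximum number of threads the SCM uses to process container reports.
  </description>
</property>
{code}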



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12115) Ozone: SCM: Add queryNode RPC Call

2017-07-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12115:

Attachment: HDFS-12115-HDFS-7240.005.patch

Updating the patch, since HDFS-12148 fixed the same issue this patch was 
fixing. Removed the ozone-default.xml changes from this new patch.

> Ozone: SCM: Add queryNode RPC Call
> --
>
> Key: HDFS-12115
> URL: https://issues.apache.org/jira/browse/HDFS-12115
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12115-HDFS-7240.001.patch, 
> HDFS-12115-HDFS-7240.002.patch, HDFS-12115-HDFS-7240.003.patch, 
> HDFS-12115-HDFS-7240.004.patch, HDFS-12115-HDFS-7240.005.patch
>
>
> Add a queryNode RPC to the storage container location protocol. This allows 
> applications like the SCM CLI to get the list of nodes in various states, 
> such as Healthy, Live, or Dead.
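A purely hypothetical sketch of the client-side shape such an RPC can take;
the interface name, enum values, and return type are illustrative assumptions,
not the protocol in the attached patches:

{code}
import java.io.IOException;
import java.util.EnumSet;
import java.util.List;

/** Hypothetical sketch only; not the HDFS-12115 protocol. */
interface NodeQuery {
  /** Illustrative node states an SCM CLI might filter on. */
  enum NodeState { HEALTHY, STALE, DEAD, DECOMMISSIONING }

  /**
   * Returns identifiers of nodes currently in any of the given states,
   * e.g. queryNode(EnumSet.of(NodeState.HEALTHY, NodeState.DEAD)).
   */
  List<String> queryNode(EnumSet<NodeState> states) throws IOException;
}
{code}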



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12146) [SPS] : Fix TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks

2017-07-16 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089031#comment-16089031
 ] 

Surendra Singh Lilhore commented on HDFS-12146:
---

{{hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier}} failed 
because of an {{out of space}} issue:

{noformat}
2017-07-15 10:55:00,808 [DataXceiver for client /127.0.0.1:47222 [Replacing 
block BP-1486486435-172.17.0.2-1500116097405:blk_1073741827_1003 from 
c7dfc551-78d1-4f31-8eb2-db9c223be27d]] ERROR datanode.DataNode 
(DataXceiver.java:run(323)) - 127.0.0.1:57555:DataXceiver error processing 
REPLACE_BLOCK operation  src: /127.0.0.1:47222 dst: /127.0.0.1:57555
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Out of space: The 
volume with the most available space (=0 B) is less than the block size (=1024 
B).
{noformat}

> [SPS] : Fix 
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
> ---
>
> Key: HDFS-12146
> URL: https://issues.apache.org/jira/browse/HDFS-12146
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12146-HDFS-10285.001.patch
>
>
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
>  has failed in many builds with a port bind exception. I feel we don't need 
> to restart datanodes on the same port; we are just checking the block 
> redundancy scenario.
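A sketch of the suggested direction, assuming the MiniDFSCluster restart
helpers: passing keepPort=false lets the restarted datanode bind fresh ports
instead of racing to reclaim the old ones.

{code}
import java.io.IOException;

import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSCluster.DataNodeProperties;

class RestartWithoutPinnedPort {
  /** Restart datanode 0 without insisting on its previous ports. */
  static void restartFirstDataNode(MiniDFSCluster cluster) throws IOException {
    DataNodeProperties dnProp = cluster.stopDataNode(0);
    // keepPort=false: bind to fresh ports, avoiding the bind exception
    // that makes the test flaky when the old port is still in use.
    cluster.restartDataNode(dnProp, false);
  }
}
{code}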



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12146) [SPS] : Fix TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks

2017-07-16 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12146:
--
Summary: [SPS] : Fix 
TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks 
 (was: Fix 
TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks)

> [SPS] : Fix 
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
> ---
>
> Key: HDFS-12146
> URL: https://issues.apache.org/jira/browse/HDFS-12146
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12146-HDFS-10285.001.patch
>
>
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
>  has failed in many builds with a port bind exception. I feel we don't need 
> to restart datanodes on the same port; we are just checking the block 
> redundancy scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089016#comment-16089016
 ] 

Anu Engineer commented on HDFS-12147:
-

[~nandakumar131] I think this issue is most likely because the patch needs a 
rebase.

> Ozone: KSM: Add checkBucketAccess
> -
>
> Key: HDFS-12147
> URL: https://issues.apache.org/jira/browse/HDFS-12147
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12147-HDFS-7240.000.patch
>
>
> Checks if the caller has access to a given bucket.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-11578) AccessControlExceptions not logged in two files

2017-07-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-11578:
--
Comment: was deleted

(was: HI [~MehranHassani],
Which code files you are talking about? As in the description file names are 
missing.)

> AccessControlExceptions not logged in two files
> ---
>
> Key: HDFS-11578
> URL: https://issues.apache.org/jira/browse/HDFS-11578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Minor
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. 
> AccessControlException occurred 114 times in the Hadoop 2.7 source code, and 
> 97% of the time a log statement accompanied it. However, in later releases, 
> the following new files throw AccessControlException without any log 
> statements:
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
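The pattern being asked for is simply to log the denial before throwing. A
minimal sketch, not the exact code in the files named above:

{code}
import org.apache.hadoop.security.AccessControlException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AccessCheckExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(AccessCheckExample.class);

  void checkAccess(String user, String resource)
      throws AccessControlException {
    boolean permitted = false; // placeholder for the real permission check
    if (!permitted) {
      // Log the denial so it is traceable, then throw as before.
      LOG.warn("Access denied for user {} on {}", user, resource);
      throw new AccessControlException(
          "Permission denied: " + user + " -> " + resource);
    }
  }
}
{code}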



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11578) AccessControlExceptions not logged in two files

2017-07-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089011#comment-16089011
 ] 

Bharat Viswanadham edited comment on HDFS-11578 at 7/16/17 4:39 PM:


Hi [~MehranHassani],
Which code files are you talking about? The file names are missing in the 
description.


was (Author: bharatviswa):
HI Mehran,
Which code files you are talking about? As in the description file names are 
missing.

> AccessControlExceptions not logged in two files
> ---
>
> Key: HDFS-11578
> URL: https://issues.apache.org/jira/browse/HDFS-11578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Minor
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. 
> AccessControlException occurred 114 times in the Hadoop 2.7 source code, and 
> 97% of the time a log statement accompanied it. However, in later releases, 
> the following new files throw AccessControlException without any log 
> statements:
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11578) AccessControlExceptions not logged in two files

2017-07-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089011#comment-16089011
 ] 

Bharat Viswanadham commented on HDFS-11578:
---

Hi Jyothi,
Which code files are you talking about? The file names are missing in the 
description.

> AccessControlExceptions not logged in two files
> ---
>
> Key: HDFS-11578
> URL: https://issues.apache.org/jira/browse/HDFS-11578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Minor
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. 
> AccessControlException occurred 114 times in the Hadoop 2.7 source code, and 
> 97% of the time a log statement accompanied it. However, in later releases, 
> the following new files throw AccessControlException without any log 
> statements:
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11578) AccessControlExceptions not logged in two files

2017-07-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089011#comment-16089011
 ] 

Bharat Viswanadham edited comment on HDFS-11578 at 7/16/17 4:38 PM:


Hi Mehran,
Which code files are you talking about? The file names are missing in the 
description.


was (Author: bharatviswa):
HI Jyothi,
Which code files you are talking about? As in the description file names are 
missing.

> AccessControlExceptions not logged in two files
> ---
>
> Key: HDFS-11578
> URL: https://issues.apache.org/jira/browse/HDFS-11578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Minor
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. 
> AccessControlException occurred 114 times in the Hadoop 2.7 source code, and 
> 97% of the time a log statement accompanied it. However, in later releases, 
> the following new files throw AccessControlException without any log 
> statements:
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12148) Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has some missing properties

2017-07-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088946#comment-16088946
 ] 

Weiwei Yang commented on HDFS-12148:


Thanks [~anu], the Jenkins result shows the UT failure is now fixed; I am 
committing this now. Thanks for the quick response!

> Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has 
> some missing properties
> 
>
> Key: HDFS-12148
> URL: https://issues.apache.org/jira/browse/HDFS-12148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12148-HDFS-7240.001.patch
>
>
> The following properties, added by HDFS-11493, are missing from ozone-default.xml:
> {noformat}
> ozone.scm.max.container.report.threads
> ozone.scm.container.report.processing.interval.seconds
> ozone.scm.container.reports.wait.timeout.seconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12148) Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has some missing properties

2017-07-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12148:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has 
> some missing properties
> 
>
> Key: HDFS-12148
> URL: https://issues.apache.org/jira/browse/HDFS-12148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12148-HDFS-7240.001.patch
>
>
> The following properties, added by HDFS-11493, are missing from ozone-default.xml:
> {noformat}
> ozone.scm.max.container.report.threads
> ozone.scm.container.report.processing.interval.seconds
> ozone.scm.container.reports.wait.timeout.seconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12148) Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has some missing properties

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088892#comment-16088892
 ] 

Hadoop QA commented on HDFS-12148:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.replication.TestContainerReplicationManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877461/HDFS-12148-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux b49a22fab385 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8f122a7 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20293/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20293/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20293/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: TestOzoneConfigurationFields is failing because ozone-default.xml has 
> some missing properties
> 
>
> Key: HDFS-12148
> URL: https://issues.apache.org/jira/browse/HDFS-12148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>

[jira] [Commented] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1604#comment-1604
 ] 

Hadoop QA commented on HDFS-12147:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 55s{color} | 
{color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 55s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877467/HDFS-12147-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux df4273e1a5f2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8f122a7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20294/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20294/artifact/patchprocess/patch-compile-hadoop-hdfs-project.txt
 |
| cc | 

[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088873#comment-16088873
 ] 

Hadoop QA commented on HDFS-12117:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12117 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877460/HDFS-12117.patch.02 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20295/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.patch.01, HDFS-12117.patch.02
>
>
> Currently, HttpFS lacks implementations of the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the design approach 
> already used by the other WebHDFS methods in the current HttpFS project, and 
> I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12133) Correct ContentSummaryComputationContext Logger class name.

2017-07-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088863#comment-16088863
 ] 

Brahma Reddy Battula commented on HDFS-12133:
-

[~surendrasingh] thanks for reporting this. Straightforward change. +1, will 
commit later this week.

> Correct ContentSummaryComputationContext Logger class name.
> ---
>
> Key: HDFS-12133
> URL: https://issues.apache.org/jira/browse/HDFS-12133
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha4
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-12133.001.patch
>
>
> Now it is {code}public static final Log LOG = 
> LogFactory.getLog(INode.class){code}
> It should be  {{ContentSummaryComputationContext.class}}
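In code, the fix amounts to one line, sketched here:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class ContentSummaryComputationContext {
  // Before (wrong): log events were attributed to INode.
  //   public static final Log LOG = LogFactory.getLog(INode.class);
  // After: attribute them to the owning class.
  public static final Log LOG =
      LogFactory.getLog(ContentSummaryComputationContext.class);
}
{code}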



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12067) Add command help description about 'hdfs dfsadmin -help getVolumeReport' command.

2017-07-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088862#comment-16088862
 ] 

Brahma Reddy Battula edited comment on HDFS-12067 at 7/16/17 9:24 AM:
--

[~steven-wugang] thanks for reporting and working on this.

How about also fixing {{refreshNamenodes}}, {{refresh}} and 
{{deleteBlockPool}}? These commands also don't mention the IPC port.


was (Author: brahmareddy):
[~steven-wugang] thanks for reporting and working on this.

how about fixing for {{refreshNamenodes}},{{refresh}}and {{deleteBlockPool}} 
also ..? these also commands didn't mentioned about ipc port.

> Add command help description about 'hdfs dfsadmin -help getVolumeReport' 
> command.
> -
>
> Key: HDFS-12067
> URL: https://issues.apache.org/jira/browse/HDFS-12067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: steven-wugang
>Assignee: steven-wugang
> Attachments: HDFS_12067.001.patch, HDFS_12067.002.patch, 
> HDFS_12067.003.patch, HDFS_12067.004.patch, HDFS-12067.patch
>
>
> When I use the command, I see the command help description, but it isn't 
> clear, especially for the argument 'port'; it's easy to mistake it for the 
> port (default 9866) in 'dfs.datanode.address'. Therefore, to make this 
> command easier to use, I added some descriptions of the arguments.
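For context, the argument expects the datanode's IPC address rather than its
data-transfer address. A usage sketch with a placeholder host; in recent
releases dfs.datanode.ipc.address defaults to port 9867, while 9866 is the
data-transfer port from dfs.datanode.address:

{noformat}
# Correct: target the datanode IPC port (dfs.datanode.ipc.address, default 9867)
hdfs dfsadmin -getVolumeReport datanode1.example.com:9867

# Easy mistake: 9866 is the data transfer port (dfs.datanode.address)
{noformat}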



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12067) Add command help description about 'hdfs dfsadmin -help getVolumeReport' command.

2017-07-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088862#comment-16088862
 ] 

Brahma Reddy Battula edited comment on HDFS-12067 at 7/16/17 9:24 AM:
--

[~steven-wugang] thanks for reporting and working on this.

How about also fixing {{refreshNamenodes}}, {{refresh}} and 
{{deleteBlockPool}}? These commands also don't mention the IPC port.


was (Author: brahmareddy):
[~steven-wugang] thanks for reporting and working on this.

how about fixing for {{refreshNamenodes}},{{refresh}} and {{deleteBlockPool}} 
also ..? these also commands didn't mentioned about ipc port.

> Add command help description about 'hdfs dfsadmin -help getVolumeReport' 
> command.
> -
>
> Key: HDFS-12067
> URL: https://issues.apache.org/jira/browse/HDFS-12067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: steven-wugang
>Assignee: steven-wugang
> Attachments: HDFS_12067.001.patch, HDFS_12067.002.patch, 
> HDFS_12067.003.patch, HDFS_12067.004.patch, HDFS-12067.patch
>
>
> When I use the command, I see the command help description, but it isn't 
> clear, especially for the argument 'port'; it's easy to mistake it for the 
> port (default 9866) in 'dfs.datanode.address'. Therefore, to make this 
> command easier to use, I added some descriptions of the arguments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12067) Add command help description about 'hdfs dfsadmin -help getVolumeReport' command.

2017-07-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088862#comment-16088862
 ] 

Brahma Reddy Battula commented on HDFS-12067:
-

[~steven-wugang] thanks for reporting and working on this.

How about also fixing {{refreshNamenodes}}, {{refresh}} and 
{{deleteBlockPool}}? These commands also don't mention the IPC port.

> Add command help description about 'hdfs dfsadmin -help getVolumeReport' 
> command.
> -
>
> Key: HDFS-12067
> URL: https://issues.apache.org/jira/browse/HDFS-12067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: steven-wugang
>Assignee: steven-wugang
> Attachments: HDFS_12067.001.patch, HDFS_12067.002.patch, 
> HDFS_12067.003.patch, HDFS_12067.004.patch, HDFS-12067.patch
>
>
> When I use the command, I see the command help description, but it isn't 
> clear, especially for the argument 'port'; it's easy to mistake it for the 
> port (default 9866) in 'dfs.datanode.address'. Therefore, to make this 
> command easier to use, I added some descriptions of the arguments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12147:
--
Status: Patch Available  (was: Open)

> Ozone: KSM: Add checkBucketAccess
> -
>
> Key: HDFS-12147
> URL: https://issues.apache.org/jira/browse/HDFS-12147
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12147-HDFS-7240.000.patch
>
>
> Checks if the caller has access to a given bucket.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12147) Ozone: KSM: Add checkBucketAccess

2017-07-16 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12147:
--
Attachment: HDFS-12147-HDFS-7240.000.patch

> Ozone: KSM: Add checkBucketAccess
> -
>
> Key: HDFS-12147
> URL: https://issues.apache.org/jira/browse/HDFS-12147
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12147-HDFS-7240.000.patch
>
>
> Checks if the caller has access to a given bucket.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088834#comment-16088834
 ] 

Hudson commented on HDFS-12112:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12016 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12016/])
HDFS-12112. TestBlockManager#testBlockManagerMachinesArray sometimes (brahma: 
rev b778887af59d96f1fac30cae14be1cabbdb74c8b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
> ---
>
> Key: HDFS-12112
> URL: https://issues.apache.org/jira/browse/HDFS-12112
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
> Environment: CDH5.12.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-12112.001.patch
>
>
> Found the following error:
> {quote}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
> {quote}
> The NPE suggests corruptStorageDataNode in the following code snippet could 
> be null.
> {code}
> for(int i=0; i
> {code}
> Looking at the code, the test does not wait for file replication to happen, 
> which is why corruptStorageDataNode (the DN of the second replica) is null.
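A sketch of the kind of guard that removes the race: block until the expected
replication is reached before inspecting the replica storages. The path and
replication factor here are placeholders:

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;

class WaitForReplicationExample {
  /** Block until 'file' reaches the expected replication factor. */
  static void ensureReplicated(FileSystem fs, Path file) throws Exception {
    // Prevents reading replica state before the second replica exists.
    DFSTestUtil.waitReplication(fs, file, (short) 2);
  }
}
{code}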



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org