[jira] [Commented] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905475#comment-16905475
 ] 

Hadoop QA commented on HDFS-14723:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  8m  
3s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.8.0_212. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_212 with JDK v1.8.0_212 
generated 1 new + 370 unchanged - 2 fixed = 371 total (was 372) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HDFS-14723 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977381/HDFS-14723.branch-2.001.patch
 |
| Optional Tests |  dupname

[jira] [Updated] (HDDS-1908) TestMultiBlockWritesWithDnFailures is failing

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1908:

Status: Patch Available  (was: In Progress)

> TestMultiBlockWritesWithDnFailures is failing
> -
>
> Key: HDDS-1908
> URL: https://issues.apache.org/jira/browse/HDDS-1908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestMultiBlockWritesWithDnFailures is failing with the following exception:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 30.992 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures
> [ERROR] 
> testMultiBlockWritesWithDnFailures(org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures)
>   Time elapsed: 30.941 s  <<< ERROR!
> INTERNAL_ERROR org.apache.hadoop.ozone.om.exceptions.OMException: Allocated 0 
> blocks. Requested 1 blocks
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:720)
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:752)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateNewBlock(BlockOutputStreamEntryPool.java:248)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateBlockIfNeeded(BlockOutputStreamEntryPool.java:296)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:201)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleRetry(KeyOutputStream.java:376)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleException(KeyOutputStream.java:325)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:231)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
>   at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>   at java.io.OutputStream.write(OutputStream.java:75)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures.testMultiBlockWritesWithDnFailures(TestMultiBlockWritesWithDnFailures.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

[jira] [Work started] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1954 started by Doroszlai, Attila.
---
> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=293313&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293313
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 12/Aug/19 18:54
Start Date: 12/Aug/19 18:54
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1259: 
HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from 
Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r313076918
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
 ##
 @@ -101,7 +106,7 @@ private void processEvent(int cfIndex, byte[] keyBytes, 
byte[]
   builder.setAction(action);
   OMDBUpdateEvent event = builder.build();
   LOG.info("Generated OM update Event for table : " + event.getTable()
 
 Review comment:
   Can we change this to debug?
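 
   A hedged sketch of the suggested change (class and method names here are
   illustrative, not the actual OMDBUpdatesHandler code): demote the per-event
   message to debug and use parameterized logging so the string is only built
   when debug is enabled.
 
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   class EventLogging {
     private static final Logger LOG =
         LoggerFactory.getLogger(EventLogging.class);
   
     // Debug-level and parameterized: no string concatenation unless enabled.
     void logEvent(String table) {
       LOG.debug("Generated OM update Event for table : {}", table);
     }
   }
   ```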
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293313)
Time Spent: 1.5h  (was: 1h 20m)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.
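> A minimal sketch (not Recon's actual code) of the delta read described above, 
> using the RocksJava {{getUpdatesSince}} API:
> {code:java}
> import org.rocksdb.RocksDB;
> import org.rocksdb.RocksDBException;
> import org.rocksdb.TransactionLogIterator;
>
> public class OmDeltaReader {
>   // Replay all WAL batches newer than the caller's last known sequence number.
>   public static void readUpdatesSince(RocksDB omDb, long lastKnownSequence)
>       throws RocksDBException {
>     try (TransactionLogIterator it = omDb.getUpdatesSince(lastKnownSequence)) {
>       while (it.isValid()) {
>         TransactionLogIterator.BatchResult batch = it.getBatch();
>         // Apply batch.writeBatch() to the local copy of the DB and remember
>         // batch.sequenceNumber() for the next poll.
>         System.out.println("Replaying batch at sequence "
>             + batch.sequenceNumber());
>         it.next();
>       }
>     }
>     // If the WAL no longer covers lastKnownSequence, getUpdatesSince throws,
>     // and the caller falls back to the full checkpoint snapshot (HDDS-1085).
>   }
> }
> {code}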



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1917:
--
Summary: TestOzoneRpcClientAbstract is failing  (was: Ignore failing 
test-cases in TestSecureOzoneRpcClient)

> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Ignore failing test-cases in TestSecureOzoneRpcClient. This will be fixed 
> when HA support is added to acl operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1917:
--
Description: 
{noformat}
[ERROR] 
testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
  Time elapsed: 0.113 s  <<< FAILURE!
java.lang.AssertionError: READ_ACL should exist in current 
acls:group:jenkins:a[ACCESS]
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
{noformat}

  was:Ignore failing test-cases in TestSecureOzoneRpcClient. This will be fixed 
when HA support is added to acl operations.


> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
>   Time elapsed: 0.113 s  <<< FAILURE!
> java.lang.AssertionError: READ_ACL should exist in current 
> acls:group:jenkins:a[ACCESS]
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1917:
--
Description: 
TestOzoneRpcClientAbstract is failing with the error below:
{noformat}
[ERROR] 
testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
  Time elapsed: 0.113 s  <<< FAILURE!
java.lang.AssertionError: READ_ACL should exist in current 
acls:group:jenkins:a[ACCESS]
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
{noformat}

  was:
{noformat}
[ERROR] 
testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
  Time elapsed: 0.113 s  <<< FAILURE!
java.lang.AssertionError: READ_ACL should exist in current 
acls:group:jenkins:a[ACCESS]
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
{noformat}


> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> TestOzoneRpcClientAbstract is failing with the error below:
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
>   Time elapsed: 0.113 s  <<< FAILURE!
> java.lang.AssertionError: READ_ACL should exist in current 
> acls:group:jenkins:a[ACCESS]
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14423:
--

Assignee: Masatake Iwasaki  (was: Wei-Chiu Chuang)

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.
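> A minimal demonstration of the plain JDK {{java.net.URLDecoder}} behavior that 
> matches both symptoms (a standalone experiment, not the WebHDFS code path 
> itself):
> {code:java}
> import java.io.UnsupportedEncodingException;
> import java.net.URLDecoder;
>
> public class UrlDecoderDemo {
>   public static void main(String[] args) throws UnsupportedEncodingException {
>     // '+' decodes to a space, which explains /a+b becoming "/a b".
>     System.out.println(URLDecoder.decode("a+b", "UTF-8")); // prints "a b"
>
>     // A bare '%' is an incomplete escape sequence and throws
>     // java.lang.IllegalArgumentException:
>     //   URLDecoder: Incomplete trailing escape (%) pattern
>     System.out.println(URLDecoder.decode("%", "UTF-8"));
>   }
> }
> {code}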



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905501#comment-16905501
 ] 

Wei-Chiu Chuang commented on HDFS-14423:


Thanks for helping out, [~iwasakims]. I assigned this jira to you.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1955?focusedWorklogId=293321&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293321
 ]

ASF GitHub Bot logged work on HDDS-1955:


Author: ASF GitHub Bot
Created on: 12/Aug/19 19:11
Start Date: 12/Aug/19 19:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281#issuecomment-520556687
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 113 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 654 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1002 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 196 | trunk passed |
   | 0 | spotbugs | 494 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 743 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 675 | the patch passed |
   | +1 | compile | 448 | the patch passed |
   | +1 | javac | 448 | the patch passed |
   | +1 | checkstyle | 89 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 835 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 207 | the patch passed |
   | +1 | findbugs | 810 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 342 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2090 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8930 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3b2521ac563d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e4b538b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/1/testReport/ |
   | Max. process+thread count | 5392 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293321)
Time Spent: 20m  (was: 10m)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955

[jira] [Commented] (HDFS-14724) Fix JDK7 compatibility in branch-2

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905505#comment-16905505
 ] 

Wei-Chiu Chuang commented on HDFS-14724:


+1 pending Jenkins. My local build passed with this patch.

> Fix JDK7 compatibility in branch-2
> --
>
> Key: HDFS-14724
> URL: https://issues.apache.org/jira/browse/HDFS-14724
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14724-branch-2.001.patch
>
>
> With the feature Consistent Read from Standby now in branch-2, I found it 
> breaks the build when building with JDK7, because it uses 
> java.util.concurrent.atomic.LongAccumulator, which is only available in JDK8 
> and above.
>  
> We should figure out if we want to fix it, or give up JDK7 compatibility.
> [~xkrogen] [~shv] [~vagarychen]
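> A hedged sketch of one JDK7-compatible replacement, assuming the accumulator 
> is used to track a running maximum (the actual usage in the feature code may 
> differ):
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
>
> class MaxAccumulator {
>   private final AtomicLong value = new AtomicLong(Long.MIN_VALUE);
>
>   // Classic CAS loop: retry until our candidate maximum is published or beaten.
>   void accumulate(long x) {
>     long cur;
>     while (x > (cur = value.get())) {
>       if (value.compareAndSet(cur, x)) {
>         break;
>       }
>     }
>   }
>
>   long get() {
>     return value.get();
>   }
> }
> {code}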



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1780) TestFailureHandlingByClient tests are flaky

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1780:

Fix Version/s: 0.4.1

> TestFailureHandlingByClient tests are flaky
> ---
>
> Key: HDDS-1780
> URL: https://issues.apache.org/jira/browse/HDDS-1780
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The tests seem to fail because, when the datanode goes down with the stale 
> node interval set to a low value, containers may get closed early and client 
> writes might fail with a closed container exception rather than the pipeline 
> failure/timeout exceptions expected by the tests. The fix made here is to 
> tune the stale node interval.
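> A hedged sketch of the kind of tuning described above, assuming the test builds 
> its cluster from an {{OzoneConfiguration}} (the exact interval chosen in the 
> fix may differ):
> {code:java}
> import java.util.concurrent.TimeUnit;
>
> import org.apache.hadoop.hdds.conf.OzoneConfiguration;
> import org.apache.hadoop.hdds.scm.ScmConfigKeys;
>
> public class StaleNodeTuning {
>   // Raise the stale-node interval so a downed datanode is not declared stale
>   // (and its containers closed) before the write path under test has observed
>   // the pipeline failure it asserts on.
>   public static OzoneConfiguration newTestConf() {
>     OzoneConfiguration conf = new OzoneConfiguration();
>     conf.setTimeDuration(ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL,
>         90, TimeUnit.SECONDS);
>     return conf;
>   }
> }
> {code}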



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1935) Improve the visibility with Ozone Insight tool

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1935?focusedWorklogId=293340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293340
 ]

ASF GitHub Bot logged work on HDDS-1935:


Author: ASF GitHub Bot
Created on: 12/Aug/19 19:54
Start Date: 12/Aug/19 19:54
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1255: HDDS-1935. 
Improve the visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#issuecomment-520570841
 
 
   Let us sync up sometime. If I get an overview of the code layout, it will 
be easier for me to review this. I really appreciate you doing this. Thank you 
... I will sync with you when you are back.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293340)
Time Spent: 0.5h  (was: 20m)

> Improve the visibility with Ozone Insight tool
> --
>
> Key: HDDS-1935
> URL: https://issues.apache.org/jira/browse/HDDS-1935
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Visibility is a key aspect of operating any Ozone cluster. We need better 
> visibility to improve correctness and performance. While distributed tracing 
> is a good tool for improving the visibility of performance, we have no 
> powerful tool to check the internal state of the Ozone cluster and debug 
> certain correctness issues.
> To improve the visibility of the internal components, I propose to introduce 
> a new command line application `ozone insight`.
> The new tool will show the selected metrics / logs / configuration for any of 
> the internal components (like replication-manager, pipeline, etc.).
> For each insight point we can define the required logs and log levels, 
> metrics, and configuration, and the tool can display only the 
> component-specific information during debugging.
> h2. Usage
> First we can check the available insight point:
> {code}
> bash-4.2$ ozone insight list
> Available insight points:
>   scm.node-manager SCM Datanode management related 
> information.
>   scm.replica-manager  SCM closed container replication 
> manager
>   scm.event-queue  Information about the internal async 
> event delivery
>   scm.protocol.block-location  SCM Block location protocol endpoint
>   scm.protocol.container-location  Planned insight point which is not yet 
> implemented.
>   scm.protocol.datanodePlanned insight point which is not yet 
> implemented.
>   scm.protocol.securityPlanned insight point which is not yet 
> implemented.
>   scm.http Planned insight point which is not yet 
> implemented.
>   om.key-manager   OM Key Manager
>   om.protocol.client   Ozone Manager RPC endpoint
>   om.http  Planned insight point which is not yet 
> implemented.
>   datanode.pipeline[id]More information about one ratis 
> datanode ring.
>   datanode.rocksdb More information about one ratis 
> datanode ring.
>   s3g.http Planned insight point which is not yet 
> implemented.
> {code}
> Insight points can define configuration, metrics and/or logs. Configuration 
> can be displayed based on the configuration objects:
> {code}
> ozone insight config scm.protocol.block-location
> Configuration for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> >>> ozone.scm.block.client.bind.host
>default: 0.0.0.0
>current: 0.0.0.0
> The hostname or IP address used by the SCM block client  endpoint to bind
> >>> ozone.scm.block.client.port
>default: 9863
>current: 9863
> The port number of the Ozone SCM block client service.
> >>> ozone.scm.block.client.address
>default: ${ozone.scm.client.address}
>current: scm
> The address of the Ozone SCM block client service. If not defined value of 
> ozone.scm.client.address is used
> {code}
> Metrics can be retrieved from the Prometheus endpoint:
> {code}
> ozone insight metrics scm.protocol.block-location
> Metrics for `scm.protocol.block-location` (SCM Block location protocol 
> endpoint)
> RPC connections
>   Open connections: 0
>  

[jira] [Work logged] (HDDS-1908) TestMultiBlockWritesWithDnFailures is failing

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1908?focusedWorklogId=293348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293348
 ]

ASF GitHub Bot logged work on HDDS-1908:


Author: ASF GitHub Bot
Created on: 12/Aug/19 20:01
Start Date: 12/Aug/19 20:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1282: HDDS-1908. 
TestMultiBlockWritesWithDnFailures is failing
URL: https://github.com/apache/hadoop/pull/1282#issuecomment-520573285
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 604 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 963 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 459 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 679 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 556 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | the patch passed |
   | +1 | findbugs | 714 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 357 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2135 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8307 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1282/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1282 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 912b3b8abb04 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e4b538b |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1282/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1282/1/testReport/ |
   | Max. process+thread count | 5297 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1282/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293348)
Time Spent: 50m  (was: 40m)

> TestMultiBlockWritesWithDnFailures is failing
> -
>
> Key: HDDS-1908
> URL: https://issues.apache.org/jira/browse/HDDS-1908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestMultiBlockWritesWithDnFailures is failing with the following exception:
> {noformat}

[jira] [Updated] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1954:
-
Labels: pull-request-available  (was: )

> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: pull-request-available
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?focusedWorklogId=293349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293349
 ]

ASF GitHub Bot logged work on HDDS-1954:


Author: ASF GitHub Bot
Created on: 12/Aug/19 20:01
Start Date: 12/Aug/19 20:01
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1283: HDDS-1954. 
StackOverflowError in OzoneClientInvocationHandler
URL: https://github.com/apache/hadoop/pull/1283
 
 
   ## What changes were proposed in this pull request?
   
   Including `proxy` in the trace message causes a stack overflow, since it 
results in a call to `proxy.toString()`, which also wants to log, and so on.
   
   I would also argue that `target` is more interesting for logging than 
`proxy`: e.g. `org.apache.hadoop.ozone.client.rpc.RpcClient@5c3d4f05` vs. 
`com.sun.proxy.$Proxy87`.
   
   https://issues.apache.org/jira/browse/HDDS-1954
   
   ## How was this patch tested?
   
   Set root log level to TRACE and ran some integration tests via Maven (e.g. 
`TestOzoneRpcClientWithRatis`).  Verified that `surefire-reports` has no 
`StackOverflowError`, but has messages like:
   
   ```
   TRACE client.OzoneClient (OzoneClientInvocationHandler.java:invoke(51)) - 
Invoking method public abstract org.apache.hadoop.ozone.client.OzoneVolume 
org.apache.hadoop.ozone.client.protocol.ClientProtocol.getVolumeDetails(java.lang.String)
 throws java.io.IOException on target 
org.apache.hadoop.ozone.client.rpc.RpcClient@5c3d4f05
   ```
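   
   A minimal sketch of the failure mode and of the fix described above (the
   class is trimmed down and renamed; the real handler is
   `org.apache.hadoop.ozone.client.OzoneClientInvocationHandler`):
   
   ```java
   import java.lang.reflect.InvocationHandler;
   import java.lang.reflect.Method;
   
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   class LoggingHandler implements InvocationHandler {
     private static final Logger LOG =
         LoggerFactory.getLogger(LoggingHandler.class);
     private final Object target;
   
     LoggingHandler(Object target) {
       this.target = target;
     }
   
     @Override
     public Object invoke(Object proxy, Method method, Object[] args)
         throws Throwable {
       // Buggy: formatting `proxy` calls proxy.toString(), which re-enters
       // invoke(), which logs again, until StackOverflowError at TRACE level.
       // LOG.trace("Invoking method {} on proxy {}", method, proxy);
   
       // Fixed (as in this pull request): log the underlying target instead.
       LOG.trace("Invoking method {} on target {}", method, target);
       return method.invoke(target, args);
     }
   }
   ```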
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293349)
Time Spent: 10m
Remaining Estimate: 0h

> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1954:

Status: Patch Available  (was: In Progress)

> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905546#comment-16905546
 ] 

Hadoop QA commented on HDFS-14725:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 191 unchanged - 0 fixed = 193 total (was 191) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14725 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977393/HDFS-14725.branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux da38648a39ed 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:5

[jira] [Created] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Chen Liang (JIRA)
Chen Liang created HDFS-14726:
-

 Summary: Fix JN incompatibility issue in branch-2 due to backport 
of HDFS-10519
 Key: HDFS-14726
 URL: https://issues.apache.org/jira/browse/HDFS-14726
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node
Affects Versions: 2.10.0
Reporter: Chen Liang
Assignee: Chen Liang


HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
incompatibility between the NN and JN due to the new protobuf field 
{{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
required field, so if the JN and NN are not on the same version, they will run 
into a missing-field exception. Although we can currently work around this by 
making sure the JN is always upgraded before the NN, we can fix the 
incompatibility by changing the field to optional. 
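
For illustration, a minimal sketch of the shape such a fix could take (the 
variable and sentinel below are assumptions, not taken from any actual patch): 
once the field is optional, the proto2-generated Java code exposes a presence 
check, and the reader can fall back to a default when an older peer omits it.
{code:java}
// Hedged sketch, not the actual patch. With the field declared optional,
// proto2-generated Java code exposes a hasCommittedTxnId() presence check,
// so the reader can tolerate an older peer that never sends the field.
// "resp" stands for whichever decoded response message carries the field.
long committedTxnId = resp.hasCommittedTxnId()
    ? resp.getCommittedTxnId()
    : -1; // assumed sentinel meaning "not sent by an older peer"
{code}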



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14726:
--
Priority: Blocker  (was: Major)

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905568#comment-16905568
 ] 

Chen Liang commented on HDFS-14726:
---

Marked as blocker, as we should solve this before the 2.10 release so that the 
2.10 NN and JN stay compatible with older versions.

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14726:
--
Status: Patch Available  (was: Open)

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch
>
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14726:
--
Attachment: HDFS-14726-branch-2.001.patch

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch
>
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905573#comment-16905573
 ] 

Eric Yang commented on HDFS-14375:
--

{quote}I think the main issue is that the DataNode only authorizes its own 
realm, even if the realms have cross-realm trust configured.
To solve this issue, clientPrincipal should be checked against multiple realms 
in the authorize method.
{quote}
The authorize method looks into [krbInfo to find the hostname from the service 
principal and find a 
match|https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java#L109].
 If the client accesses the datanode and passes authentication negotiation, the 
client's ticket cache will contain the datanode hostname. The Hadoop code does 
not inspect the realm part of the principal name in the authorize method; it 
merely validates that the client's ticket cache contains the hostname of the 
datanode. One way to validate cross-realm authentication is to look at the 
klist output and make sure that:
{code:java}
klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-d...@example.com

Valid starting   Expires  Service principal
08/12/2019 19:28:17  08/13/2019 19:28:17  krbtgt/example@example.com
renew until 08/19/2019 19:28:17
08/12/2019 20:37:49  08/13/2019 19:28:17  
HTTP/datanode.example2@example2.com
renew until 08/19/2019 19:28:17
{code}
In this example, the ticket cache contains the user's own krbtgt and also a 
granted service principal for a host in a different realm.
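
For reference, a minimal sketch of the realm-insensitive comparison that the 
attached patch is described as doing (this is an illustration written from 
that description, not the attached authorize.patch itself): compare only the 
{{name/host}} part of the principal and ignore everything after the {{@}}.
{code:java}
// Illustrative sketch only -- not the attached authorize.patch.
// Kerberos principals look like "dn/host.example.com@REALM"; this
// compares the user and host components and ignores the realm.
static boolean matchesIgnoringRealm(String expected, String actual) {
  String e = expected.split("@", 2)[0];
  String a = actual.split("@", 2)[0];
  return e.equals(a);
}
{code}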

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for a description.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>| |
> NameNode1 NameNode2
>| |
>-- DataNodes (federated) --
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch checks only the {{username}} and {{hostname}}, ignoring the {{realm}}.
> I think there is no problem, because if the realms are different and there is 
> no cross-realm setting, they cannot communicate with each other. If you are 
> worried about this, please let me know.
> In the long run, it would be better to be able to set multiple trusted realms 
> for authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14727) Typo in analysesErrorMsg

2019-08-12 Thread David Mollitor (JIRA)
David Mollitor created HDFS-14727:
-

 Summary: Typo in analysesErrorMsg
 Key: HDFS-14727
 URL: https://issues.apache.org/jira/browse/HDFS-14727
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.2.0
Reporter: David Mollitor


{code:java}
  analysis.append("Please check whether your etc/hadoop/mapred-site.xml "
  + "contains the below configuration:\n");
{code}

I think it should be {{/etc/hadoop/mapred-site.xml}}

https://github.com/apache/hadoop/blob/2064ca015d1584263aac0cc20c60b925a3aff612/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L788-L789
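
The proposed fix would presumably just add the leading slash (a sketch of the 
corrected line, not an attached patch):
{code:java}
// Sketch of the proposed fix: absolute path instead of "etc/hadoop/...".
analysis.append("Please check whether your /etc/hadoop/mapred-site.xml "
    + "contains the below configuration:\n");
{code}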



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1952:
-
Labels: pull-request-available  (was: )

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1952
> URL: https://issues.apache.org/jira/browse/HDDS-1952
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>
> {{TestMiniChaosOzoneCluster}} runs a load generator on a cluster for 
> supposedly 1 minute, but it may run indefinitely until the JVM crashes with 
> an OutOfMemoryError.
> In the 0.4.1 nightly build it crashed 29 out of 30 times (and no tests were 
> executed in the remaining run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1952?focusedWorklogId=293399&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293399
 ]

ASF GitHub Bot logged work on HDDS-1952:


Author: ASF GitHub Bot
Created on: 12/Aug/19 21:23
Start Date: 12/Aug/19 21:23
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1284: HDDS-1952. 
Disable TestMiniChaosOzoneCluster in integration.sh
URL: https://github.com/apache/hadoop/pull/1284
 
 
   ## What changes were proposed in this pull request?
   
   Disable `TestMiniChaosOzoneCluster` in integration test (`integration.sh`) 
run by CI and nightly builds, since it always crashes after running for a long 
time.
   
   It can still be run manually.
   
   https://issues.apache.org/jira/browse/HDDS-1952
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293399)
Time Spent: 10m
Remaining Estimate: 0h

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1952
> URL: https://issues.apache.org/jira/browse/HDDS-1952
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestMiniChaosOzoneCluster}} runs a load generator on a cluster for 
> supposedly 1 minute, but it may run indefinitely until the JVM crashes with 
> an OutOfMemoryError.
> In the 0.4.1 nightly build it crashed 29 out of 30 times (and no tests were 
> executed in the remaining run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?focusedWorklogId=293401&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293401
 ]

ASF GitHub Bot logged work on HDDS-1917:


Author: ASF GitHub Bot
Created on: 12/Aug/19 21:24
Start Date: 12/Aug/19 21:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1234: HDDS-1917. 
TestOzoneRpcClientAbstract is failing.
URL: https://github.com/apache/hadoop/pull/1234#issuecomment-520601462
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 96 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 665 | trunk passed |
   | +1 | compile | 403 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1031 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 204 | trunk passed |
   | 0 | spotbugs | 491 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 731 | trunk passed |
   | -0 | patch | 539 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 578 | the patch passed |
   | +1 | compile | 373 | the patch passed |
   | +1 | javac | 373 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 731 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 657 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 364 | hadoop-hdds in the patch passed. |
   | -1 | unit |  | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8594 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1234/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7812e22191bb 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e4b538b |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1234/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1234/5/testReport/ |
   | Max. process+thread count | 5344 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1234/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293401)
Time Spent: 1.5h  (was: 1h 20m)

> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: H

[jira] [Work logged] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1952?focusedWorklogId=293400&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293400
 ]

ASF GitHub Bot logged work on HDDS-1952:


Author: ASF GitHub Bot
Created on: 12/Aug/19 21:24
Start Date: 12/Aug/19 21:24
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1284: HDDS-1952. Disable 
TestMiniChaosOzoneCluster in integration.sh
URL: https://github.com/apache/hadoop/pull/1284#issuecomment-520601381
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293400)
Time Spent: 20m  (was: 10m)

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1952
> URL: https://issues.apache.org/jira/browse/HDDS-1952
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestMiniChaosOzoneCluster}} runs a load generator on a cluster for 
> supposedly 1 minute, but it may run indefinitely until the JVM crashes with 
> an OutOfMemoryError.
> In the 0.4.1 nightly build it crashed 29 out of 30 times (and no tests were 
> executed in the remaining run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-12 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDFS-14717:

Attachment: HDFS-14717.002.patch

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14726:
---
Target Version/s: 2.10.0

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch
>
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905633#comment-16905633
 ] 

Erik Krogen commented on HDFS-14726:


Thanks for filing this [~vagarychen]. The idea seems good to me. Maybe we can 
create a constant representing the {{-1}} (inside {{RemoteEditLogManifest}}?) 
instead of having the same magic number appear in both places.
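
Something like the following, as a sketch of that suggestion (the constant 
name and javadoc are assumptions, not from any attached patch):
{code:java}
// Hedged sketch of the suggestion -- the constant name is illustrative.
public class RemoteEditLogManifest {
  /** Sentinel meaning "committedTxnId was not sent by the peer". */
  public static final long UNSET_COMMITTED_TXN_ID = -1;
}
{code}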

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch
>
>
> HDFS-10519 has been backported to branch-2. However, HDFS-10519 introduced an 
> incompatibility between the NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. The field was introduced as a 
> required field, so if the JN and NN are not on the same version, they will run 
> into a missing-field exception. Although we can currently work around this by 
> making sure the JN is always upgraded before the NN, we can fix the 
> incompatibility by changing the field to optional.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?focusedWorklogId=293441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293441
 ]

ASF GitHub Bot logged work on HDDS-1954:


Author: ASF GitHub Bot
Created on: 12/Aug/19 22:23
Start Date: 12/Aug/19 22:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1283: HDDS-1954. 
StackOverflowError in OzoneClientInvocationHandler
URL: https://github.com/apache/hadoop/pull/1283#issuecomment-520617316
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 552 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 612 | trunk passed |
   | +1 | compile | 371 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 880 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 422 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 640 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2144 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 8415 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1283/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1283 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 38eccfbf3c20 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e4b538b |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1283/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1283/1/testReport/ |
   | Max. process+thread count | 4917 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/client U: hadoop-ozone/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1283/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293441)
Time Spent: 20m  (was: 10m)

> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>

[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-12 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905637#comment-16905637
 ] 

kevin su commented on HDFS-14717:
-

[~xkrogen] Thanks for your reply.

{quote}It looks like the test failure is due to the recent cleanup of versions 
available on the Apache mirrors. The real long-term solution is to fix 
HDFS-14412, but in the meantime, I think we need to bump the default version 
used by the test to 3.1.2 from 3.1.1.{quote}

Makes sense: if we use a local build of Hadoop by default, we won't get 
failures from downloading Hadoop from the Apache mirrors.

But it seems we need to build the Hadoop distribution first before running 
*_TestDynamometerInfra_*; if we run *_TestDynamometerInfra_* directly, it may 
fail. Is there any way to build Hadoop inside the unit test?

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-12 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDFS-2470:
--
Attachment: HDFS-2470.06.patch

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-12 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905641#comment-16905641
 ] 

Siddharth Wagle commented on HDFS-2470:
---

Addressed the comments from [~eyang] in 06, except for setting permissions on 
the root directory. I need some insight on what the right fix should be!
Setting the permissions on just */tmp/namenode/current* and not on 
*/tmp/namenode* does not make sense to me; but conversely, if someone sets 
_dfs.namenode.name.dir_ to "/tmp", we would end up doing the wrong thing.
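
For illustration, a minimal sketch of applying a configured permission to a 
storage directory with plain JDK APIs (the path and the "rwx------" permission 
string are assumptions for the example, and this is not the 06 patch):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StorageDirPermissions {
  public static void main(String[] args) throws IOException {
    // Illustrative sketch: chmod the storage dir itself, not only current/.
    Path storageDir = Paths.get("/tmp/namenode");
    Set<PosixFilePermission> perms =
        PosixFilePermissions.fromString("rwx------");
    Files.setPosixFilePermissions(storageDir, perms);
  }
}
{code}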

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1952?focusedWorklogId=293449&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293449
 ]

ASF GitHub Bot logged work on HDDS-1952:


Author: ASF GitHub Bot
Created on: 12/Aug/19 22:36
Start Date: 12/Aug/19 22:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1284: HDDS-1952. 
Disable TestMiniChaosOzoneCluster in integration.sh
URL: https://github.com/apache/hadoop/pull/1284#issuecomment-520620317
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 110 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 757 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1081 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 688 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 852 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 129 | hadoop-hdds in the patch passed. |
   | +1 | unit | 336 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 4260 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1284/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1284 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 020854b254e5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e4b538b |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1284/1/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1284/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293449)
Time Spent: 0.5h  (was: 20m)

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1952
> URL: https://issues.apache.org/jira/browse/HDDS-1952
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 
> 1 minute, but it may run indefinitely until JVM crashes due to 
> OutOfMemoryError.
> In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
> the remaining one run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905655#comment-16905655
 ] 

Hadoop QA commented on HDFS-14717:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
45s{color} | {color:green} hadoop-dynamometer-infra in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14717 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977412/HDFS-14717.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56457a7f76f3 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27484/artifact/out/diff-checkstyle-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27484/testReport/ |
| Max. process+thread count | 964 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra U: 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra |
| Console output | 
ht

[jira] [Updated] (HDFS-14724) Fix JDK7 compatibility in branch-2

2019-08-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14724:
--
Status: Patch Available  (was: Open)

> Fix JDK7 compatibility in branch-2
> --
>
> Key: HDFS-14724
> URL: https://issues.apache.org/jira/browse/HDFS-14724
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14724-branch-2.001.patch
>
>
> With the feature Consistent Read from Standby now in branch-2, I found that 
> it breaks the build when building with JDK7, because it uses 
> java.util.concurrent.atomic.LongAccumulator, which is only available in JDK8 
> and above.
>  
> We should figure out whether we want to fix it or give up JDK7 compatibility.
> [~xkrogen] [~shv] [~vagarychen]
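
For reference, a JDK7-compatible stand-in for a max-accumulating 
{{LongAccumulator}} can be written with an {{AtomicLong}} CAS loop. This is a 
generic sketch of that technique, not the attached 
HDFS-14724-branch-2.001.patch:
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Generic JDK7-compatible sketch: emulate what
// new LongAccumulator(Math::max, Long.MIN_VALUE) does on JDK8.
public class MaxAccumulator {
  private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

  public void accumulate(long value) {
    long cur;
    // Retry until our value is no longer greater, or the CAS succeeds.
    while (value > (cur = max.get()) && !max.compareAndSet(cur, value)) { }
  }

  public long get() {
    return max.get();
  }
}
{code}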



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905659#comment-16905659
 ] 

Hadoop QA commented on HDFS-14726:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m  2s{color} | 
{color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HDFS-14726 |
| JIRA Patch URL | 
https://issues.apache.org/jira/

[jira] [Commented] (HDFS-916) Rewrite DFSOutputStream to use a single thread with NIO

2019-08-12 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905664#comment-16905664
 ] 

Todd Lipcon commented on HDFS-916:
--

I doubt anyone is working on this at this point (I certainly am not, almost 10 
years later!)

> Rewrite DFSOutputStream to use a single thread with NIO
> ---
>
> Key: HDFS-916
> URL: https://issues.apache.org/jira/browse/HDFS-916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Priority: Major
>
> The DFS write pipeline code has some really hairy multi-threaded 
> synchronization. There have been a lot of bugs produced by this (HDFS-101, 
> HDFS-793, HDFS-915, tens of others) since it's very hard to understand the 
> message passing, lock sharing, and interruption properties. The reason for 
> the multiple threads is to be able to simultaneously send and receive. If 
> instead of using multiple threads, it used nonblocking IO, I think the whole 
> thing would be a lot less error prone.
> I think we could do this in two halves: one half is the DFSOutputStream. The 
> other half is BlockReceiver. I opened this JIRA first as I think it's simpler 
> (only one TCP connection to deal with, rather than an up and downstream)
> Opinions? Am I crazy? I would like to see some agreement on the idea before I 
> spend time writing code.
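
As a rough illustration of the single-threaded shape being proposed (purely a 
sketch; the hostname and port are placeholders, and none of this is existing 
DFSOutputStream code), one selector can multiplex send and receive on the same 
socket so no second thread is needed:
{code:java}
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Purely illustrative sketch of a single-threaded NIO write pipeline:
// one selector watches the same socket both for readiness to send the
// next packet and for readiness to read acks.
public class SingleThreadPipeline {
  public static void main(String[] args) throws Exception {
    Selector selector = Selector.open();
    SocketChannel ch = SocketChannel.open();
    ch.configureBlocking(false);
    ch.connect(new InetSocketAddress("datanode.example.com", 50010));
    ch.register(selector, SelectionKey.OP_CONNECT
        | SelectionKey.OP_READ | SelectionKey.OP_WRITE);
    while (selector.select() > 0) {
      for (SelectionKey key : selector.selectedKeys()) {
        if (key.isConnectable()) { ch.finishConnect(); }
        if (key.isWritable())    { /* send next queued packet */ }
        if (key.isReadable())    { /* process acks from downstream */ }
      }
      selector.selectedKeys().clear();
    }
  }
}
{code}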



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-14423:

Attachment: HDFS-14423.003.patch

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905670#comment-16905670
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

Attached 003 with a fix for the compilation issue.
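
Both reported symptoms can be reproduced at the JDK level, which suggests the 
path is being run through {{URLDecoder}} somewhere it should not be. A minimal 
illustration (not the patch):
{code:java}
import java.net.URLDecoder;

public class DecodeDemo {
  public static void main(String[] args) throws Exception {
    // '+' is treated as an encoded space by URLDecoder:
    System.out.println(URLDecoder.decode("a+b", "UTF-8"));  // prints "a b"
    // A lone '%' is an incomplete escape sequence and throws
    // IllegalArgumentException: Incomplete trailing escape (%) pattern
    URLDecoder.decode("%", "UTF-8");
  }
}
{code}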

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905689#comment-16905689
 ] 

Wei-Chiu Chuang commented on HDFS-14708:


Great catch! Thanks for working on this. LGTM +1

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> org.apache.hadoop.hdfs.se
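
For context, the failure above is protobuf's default 64 MB message size guard tripping on a large block report. A minimal sketch of the remedy the exception message itself suggests, assuming the protobuf 2.5-era CodedInputStream API (the helper name is illustrative, not Hadoop's actual fix):

{code:java}
import com.google.protobuf.CodedInputStream;
import java.io.InputStream;

public final class LargeMessageDecoding {
  // Sketch only: raise the per-message size limit before parsing a very
  // large payload, instead of relying on the 64 MB default.
  static CodedInputStream newLargeLimitStream(InputStream in) {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    cis.setSizeLimit(Integer.MAX_VALUE);  // a more conservative bound also works
    return cis;
  }
}
{code}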

[jira] [Resolved] (HDFS-14148) HDFS OIV ReverseXML SnapshotSection parser throws exception when there are more than one snapshottable directory

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14148.

   Resolution: Fixed
Fix Version/s: 3.3.0

PR was merged. Thanks [~smeng] for reporting the issue and fixing the bug!

> HDFS OIV ReverseXML SnapshotSection parser throws exception when there are 
> more than one snapshottable directory
> 
>
> Key: HDFS-14148
> URL: https://issues.apache.org/jira/browse/HDFS-14148
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14148.test.001.patch, fsimage_024, 
> fsimage_441
>
>
> The current HDFS OIV tool doesn't seem to support snapshots well when 
> reversing XML back to binary.
> {code:bash}
> $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML
> OfflineImageReconstructor failed: Found unknown XML keys in : dir
> java.io.IOException: Found unknown XML keys in : dir
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136)
> $ grep -n "" fsimage_0026542.xml
> 228:222049220495
> {code}
> This is also reproduced on latest trunk 
> fba222a85603d6321419aa37bcc48d276dd6c4a6
> This problem can be easily reproduced when there are at least TWO 
> snapshot-enabled directories. Apply HDFS-14148.test.001.patch and run 
> *TestOfflineImageViewer#testReverseXmlRoundTrip()* to see my point.
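
A minimal sketch of the reproduction setup (the class, method, and path names are illustrative; the gist is simply having two snapshottable directories in the fsimage before the round trip):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class ReverseXmlReproSketch {
  // Sketch only: create the two snapshottable directories that trigger
  // the ReverseXML failure described above.
  static void makeTwoSnapshottableDirs(DistributedFileSystem dfs)
      throws Exception {
    dfs.mkdirs(new Path("/dirA"));
    dfs.mkdirs(new Path("/dirB"));
    dfs.allowSnapshot(new Path("/dirA"));
    dfs.allowSnapshot(new Path("/dirB"));
    // Then save the namespace, dump the fsimage with "-p XML", and
    // reconstruct it with "-p ReverseXML" as in the commands above.
  }
}
{code}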



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14148) HDFS OIV ReverseXML SnapshotSection parser throws exception when there are more than one snapshottable directory

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905692#comment-16905692
 ] 

Hudson commented on HDFS-14148:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17097 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17097/])
HDFS-14148. HDFS OIV ReverseXML SnapshotSection parser throws exception 
(weichiu: rev c92b49876a078ce7fb4e2a852e315de5b6410082)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java


> HDFS OIV ReverseXML SnapshotSection parser throws exception when there are 
> more than one snapshottable directory
> 
>
> Key: HDFS-14148
> URL: https://issues.apache.org/jira/browse/HDFS-14148
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14148.test.001.patch, fsimage_024, 
> fsimage_441
>
>
> The current HDFS OIV tool doesn't seem to support snapshots well when 
> reversing XML back to binary.
> {code:bash}
> $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML
> OfflineImageReconstructor failed: Found unknown XML keys in : dir
> java.io.IOException: Found unknown XML keys in : dir
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136)
> $ grep -n "" fsimage_0026542.xml
> 228:222049220495
> {code}
> This is also reproduced on latest trunk 
> fba222a85603d6321419aa37bcc48d276dd6c4a6
> This problem can be easily reproduced when there are at least TWO 
> snapshot-enabled directories. Apply HDFS-14148.test.001.patch and run 
> *TestOfflineImageViewer#testReverseXmlRoundTrip()* to see my point.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14148) HDFS OIV ReverseXML SnapshotSection parser throws exception when there are more than one snapshottable directory

2019-08-12 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905696#comment-16905696
 ] 

Siyao Meng commented on HDFS-14148:
---

[~jojochuang] Thanks for committing!

> HDFS OIV ReverseXML SnapshotSection parser throws exception when there are 
> more than one snapshottable directory
> 
>
> Key: HDFS-14148
> URL: https://issues.apache.org/jira/browse/HDFS-14148
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14148.test.001.patch, fsimage_024, 
> fsimage_441
>
>
> The current HDFS OIV tool doesn't seem to support snapshots well when 
> reversing XML back to binary.
> {code:bash}
> $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML
> OfflineImageReconstructor failed: Found unknown XML keys in : dir
> java.io.IOException: Found unknown XML keys in : dir
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136)
> $ grep -n "" fsimage_0026542.xml
> 228:222049220495
> {code}
> This is also reproduced on latest trunk 
> fba222a85603d6321419aa37bcc48d276dd6c4a6
> This problem can be easily reproduced when there are at least TWO 
> snapshot-enabled directories. Apply HDFS-14148.test.001.patch and run 
> *TestOfflineImageViewer#testReverseXmlRoundTrip()* to see my point.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14724) Fix JDK7 compatibility in branch-2

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905698#comment-16905698
 ] 

Hadoop QA commented on HDFS-14724:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
52s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95 with JDK v1.7.0_95 
generated 1 new + 372 unchanged - 2 fixed = 373 total (was 374) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 17 unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HDFS-14724 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977384/HDFS-14724-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a2519b2e10c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patc

[jira] [Updated] (HDFS-14148) HDFS OIV ReverseXML SnapshotSection parser throws exception when there are more than one snapshottable directory

2019-08-12 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14148:
---
Fix Version/s: 3.1.3
   3.2.1

> HDFS OIV ReverseXML SnapshotSection parser throws exception when there are 
> more than one snapshottable directory
> 
>
> Key: HDFS-14148
> URL: https://issues.apache.org/jira/browse/HDFS-14148
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14148.test.001.patch, fsimage_024, 
> fsimage_441
>
>
> The current HDFS OIV tool doesn't seem to support snapshots well when 
> reversing XML back to binary.
> {code:bash}
> $ hdfs oiv -i fsimage_0026542.xml -o reverse.bin -p ReverseXML
> OfflineImageReconstructor failed: Found unknown XML keys in : dir
> java.io.IOException: Found unknown XML keys in : dir
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$Node.verifyNoRemainingKeys(OfflineImageReconstructor.java:324)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$SnapshotSectionProcessor.process(OfflineImageReconstructor.java:1357)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1785)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1840)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:193)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:136)
> $ grep -n "" fsimage_0026542.xml
> 228:222049220495
> {code}
> This is also reproduced on latest trunk 
> fba222a85603d6321419aa37bcc48d276dd6c4a6
> This problem can be easily reproduced when there are at least TWO 
> snapshot-enabled directories. Apply HDFS-14148.test.001.patch and run 
> *TestOfflineImageViewer#testReverseXmlRoundTrip()* to see my point.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14724) Fix JDK7 compatibility in branch-2

2019-08-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14724:
--
   Resolution: Fixed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

> Fix JDK7 compatibility in branch-2
> --
>
> Key: HDFS-14724
> URL: https://issues.apache.org/jira/browse/HDFS-14724
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chen Liang
>Priority: Blocker
> Fix For: 2.10.0
>
> Attachments: HDFS-14724-branch-2.001.patch
>
>
> With the feature Consistent Read from Standby now in branch-2, I found it 
> breaks the build when building with JDK7, because it uses 
> java.util.concurrent.atomic.LongAccumulator, which is only available in JDK8 and above.
>  
> We should figure out if we want to fix it, or give up JDK7 compatibility.
> [~xkrogen] [~shv] [~vagarychen]
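
If the decision is to keep JDK7 support, one common substitute is an AtomicLong CAS loop. A minimal sketch, assuming the accumulator is used to track a running maximum (the class and method names are illustrative, not the committed patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: a JDK7-compatible stand-in for a max-accumulating
// java.util.concurrent.atomic.LongAccumulator.
class MaxAccumulator {
  private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

  void accumulate(long value) {
    long cur = max.get();
    // Retry until the stored value is at least ours, or our CAS wins.
    while (value > cur && !max.compareAndSet(cur, value)) {
      cur = max.get();
    }
  }

  long get() {
    return max.get();
  }
}
{code}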



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14724) Fix JDK7 compatibility in branch-2

2019-08-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905701#comment-16905701
 ] 

Chen Liang commented on HDFS-14724:
---

Thanks [~jojochuang] for the review. The failed test is unrelated and passed in 
my local run. I've committed the v001 patch to branch-2.

> Fix JDK7 compatibility in branch-2
> --
>
> Key: HDFS-14724
> URL: https://issues.apache.org/jira/browse/HDFS-14724
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14724-branch-2.001.patch
>
>
> With the feature Consistent Read from Standby now in branch-2, I found it 
> breaks the build when building with JDK7, because it uses 
> java.util.concurrent.atomic.LongAccumulator, which is only available in JDK8 and above.
>  
> We should figure out if we want to fix it, or give up JDK7 compatibility.
> [~xkrogen] [~shv] [~vagarychen]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905703#comment-16905703
 ] 

Hadoop QA commented on HDFS-2470:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 778 unchanged - 5 fixed = 778 total (was 783) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-2470 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977413/HDFS-2470.06.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6eed1da317bc 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 201dc66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27485/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-H

[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-12 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905724#comment-16905724
 ] 

Chen Zhang commented on HDFS-14609:
---

[~crh] Sure, I'll work on trunk to fix these tests.

I was wondering why Eric and Takanobu both saw these tests fail after reverting 
or switching to branch HDFS-13891, so I did some digging.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router-based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests 
> to fail.
> Corresponding changes are needed in RBF, mainly fixing the broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-12 Thread Li Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905725#comment-16905725
 ] 

Li Cheng commented on HDDS-1894:


[~xyao] [~Sammi] [~junjie]

Please see PR *[https://github.com/apache/hadoop/pull/1286]*

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>
> Today scmcli has a subcommand that allows listing all pipelines. This ticket is 
> opened to filter the results by switches, e.g., filter by Factor: THREE and 
> State: OPEN. This will be useful for troubleshooting in large clusters.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}
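
A minimal sketch of the proposed filtering (the Pipeline accessors and enum names here are illustrative, not necessarily the actual SCM client API):

{code:java}
import java.util.ArrayList;
import java.util.List;

class PipelineFilterSketch {
  // Sketch only: keep pipelines matching the requested switches,
  // e.g. factor THREE and state OPEN.
  static List<Pipeline> filter(List<Pipeline> all) {
    List<Pipeline> result = new ArrayList<>();
    for (Pipeline p : all) {
      if (p.getFactor() == ReplicationFactor.THREE
          && p.getPipelineState() == PipelineState.OPEN) {
        result.add(p);
      }
    }
    return result;
  }
}
{code}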



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-12 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14713:
---
Attachment: HDFS-14713-002.patch

> RBF: routeradmin support refreshRouterArgs command but it not on display
> 
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, after.png, before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because one value is missing from the String[] commands array:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}
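
A minimal sketch of the likely fix, assuming the missing entry is the -refreshRouterArgs switch named in the summary (the exact flag spelling in the patch may differ):

{code:java}
// Sketch only: add the missing switch so the usage loop above prints it
// together with the others.
String[] commands =
    {"-add", "-update", "-rm", "-ls", "-getDestination",
     "-setQuota", "-clrQuota",
     "-safemode", "-nameservice", "-getDisabledNameservices",
     "-refresh", "-refreshRouterArgs"};
{code}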



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-12 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905742#comment-16905742
 ] 

xuzq commented on HDFS-14721:
-

Thanks [~elgoiri]. I will modify the unit test code.
{quote}Shouldn't we unwrap the exception in the unknown exception case too?
{quote} * If the exception is a RemoteException, it will be unwrapped in 
invokeMethod too.
 * If the exception is any other Exception, it was not unwrapped in the past.

In my experience, we don't need to unwrap the unknown exception.

 

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, and 
> invokeMethod then records proxyOpComplete(false).
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}
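
A minimal sketch of one way to avoid the miscount (a hypothetical restructuring, not the attached patch): treat a RemoteException as a completed proxy operation, since the remote NameNode did answer, and only count a genuine communication failure as incomplete.

{code:java}
// Sketch only, reusing the field names from the snippet above:
if (this.rpcMonitor != null) {
  if (ioe instanceof RemoteException) {
    // The server responded, so the proxy operation itself completed.
    this.rpcMonitor.proxyOpComplete(true);
  } else {
    this.rpcMonitor.proxyOpFailureCommunicate();
    this.rpcMonitor.proxyOpComplete(false);
  }
}
throw ioe;
{code}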



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-12 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14721:

Attachment: HDFS-14721-trunk-002.patch

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, and 
> invokeMethod then records proxyOpComplete(false).
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-12 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905742#comment-16905742
 ] 

xuzq edited comment on HDFS-14721 at 8/13/19 2:19 AM:
--

Thanks [~elgoiri]. I will modify the unit test code.
{quote}Shouldn't we unwrap the exception in the unknown exception case too?
{quote}

 * If the exception is a RemoteException, it will be unwrapped in invokeMethod 
too.
 * If the exception is any other Exception, it was not unwrapped in the past.

In my experience, we don't need to unwrap the unknown exception.

 


was (Author: xuzq_zander):
Thanks [~elgoiri]. I will modify the unit test code.
{quote}Shouldn't we unwrap the exception in the unknown exception case too?
{quote} * If the exception is a RemoteException, it will be unwrapped in 
invokeMethod too.
 * If the exception is any other Exception, it was not unwrapped in the past.

In my experience, we don't need to unwrap the unknown exception.

 

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, and 
> invokeMethod then records proxyOpComplete(false).
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905756#comment-16905756
 ] 

Hadoop QA commented on HDFS-14423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14423 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977414/HDFS-14423.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 40e5a437c89c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905767#comment-16905767
 ] 

Ayush Saxena commented on HDFS-14708:
-

Committed to trunk.
Thanx [~leosun08] for the contribution and [~jojochuang] for the review!!!

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)

[jira] [Updated] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-12 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14708:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.j

[jira] [Commented] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905771#comment-16905771
 ] 

Hadoop QA commented on HDFS-14713:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 50s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14713 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977428/HDFS-14713-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 79d8280416ab 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c92b498 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27488/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27488/testReport/ |
| Max. process+thread count | 1584 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://build

[jira] [Commented] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905773#comment-16905773
 ] 

Ayush Saxena commented on HDFS-14713:
-

Thanx [~wangzhaohui] for the patch.
Add this part to the test too:

{code:java}

argv = new String[] {"-Random"};
assertEquals(-1, ToolRunner.run(admin, argv));
String expected = "Usage: hdfs dfsrouteradmin :\n"
+ "\t[-add <source> <nameservice1, nameservice2, ...> <destination> "
+ "[-readonly] [-faulttolerant] "
+ "[-order HASH|LOCAL|RANDOM|HASH_ALL|SPACE] "
+ "-owner <owner> -group <group> -mode <mode>]\n"
+ "\t[-update <source> [<nameservice1, nameservice2, ...> <destination>] "
+ "[-readonly true|false]"
+ " [-faulttolerant true|false] "
+ "[-order HASH|LOCAL|RANDOM|HASH_ALL|SPACE] "
+ "-owner <owner> -group <group> -mode <mode>]\n" + "\t[-rm <source>]\n"
+ "\t[-ls <path>]\n"
+ "\t[-getDestination <path>]\n"
+ "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
+ "<quota in bytes or quota size string>]\n" + "\t[-clrQuota <path>]\n"
+ "\t[-safemode enter | leave | get]\n"
+ "\t[-nameservice enable | disable <nameservice>]\n"
+ "\t[-getDisabledNameservices]";
{code}


> RBF: routeradmin support refreshRouterArgs command but it not on display
> 
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, after.png, before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed,
> because one value is missing from the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905777#comment-16905777
 ] 

Hudson commented on HDFS-14708:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17098 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17098/])
HDFS-14708. (ayushsaxena: rev 454420e4f25c7e6c29bbeff3ca055dda59dd5b7b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
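
For readers hitting the same error elsewhere: the trace quoted below names 
{{CodedInputStream.setSizeLimit()}} as the escape hatch for protobuf's size 
guard (64 MB by default in the protobuf 2.5 line used here). A minimal, 
hypothetical sketch of raising the limit on the reading side follows; it is an 
illustration only, not the actual HDFS-14708 change, which edits 
TestLargeBlockReport itself:

{code:java}
import com.google.protobuf.CodedInputStream;
import java.io.InputStream;

// Hypothetical helper: build a reader whose size limit is raised above the
// 64 MB default that triggers InvalidProtocolBufferException.
static CodedInputStream largeMessageReader(InputStream in) {
  CodedInputStream cis = CodedInputStream.newInstance(in);
  cis.setSizeLimit(Integer.MAX_VALUE); // allow messages larger than 64 MB
  return cis; // hand this to YourProto.parseFrom(cis) as usual
}
{code}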


> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invo

[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14595:
--
Status: In Progress  (was: Patch Available)

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.2.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, hadoop_ 
> 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-12 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905787#comment-16905787
 ] 

Siyao Meng commented on HDFS-14595:
---

[~ayushtkn] Oddly, TestStripedFileAppend was the only place using DFS 
listOpenFiles(), hence my quick mod to that test.
Anyway, posted new patch rev 006; the test is now in 
TestDistributedFileSystem#testDFSClose().
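
For context, the fix suggested in the description (re-add the old API and mark 
it deprecated) usually takes the shape of a delegating overload. A minimal 
sketch, assuming the EnumSet-based method that HDFS-11848 introduced; this is 
an illustration, not the actual patch:

{code:java}
// Sketch: restore the pre-HDFS-11848 zero-argument signature for
// source/binary compatibility and delegate to the new overload.
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  return listOpenFiles(EnumSet.of(OpenFilesType.ALL_OPEN_FILES));
}
{code}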

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, hadoop_ 
> 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14595:
--
Attachment: HDFS-14595.006.patch
Status: Patch Available  (was: In Progress)

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.2.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, 
> HDFS-14595.006.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2019-08-12 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905797#comment-16905797
 ] 

hemanthboyina commented on HDFS-13123:
--

Thanks for the comment, [~crh].

_How is atomicity in distcp taken into account here? If distcp fails, 
destination cluster may have unused files lying around unaudited. Maybe user 
can specify atomicity flag through admin._

If the distcp fails, we will delete the files copied to the destination 
cluster. +We can use the atomicity flag to handle this better.+

_How are multiple rebalancings going to work if executed? Should admin 
maintain a state of what all rebalancing is in progress and what all 
completed. Some basic auditing at least._

Yes, the admin should maintain the state of all rebalancing operations in 
progress, and for a given mount point we only allow one concurrent rebalancing 
operation.

_Rebalancing across secured clusters?_

As we are using distcp, security in a secured cluster should be taken care of 
by distcp itself.

> RBF: Add a balancer tool to move data across subcluster 
> 
>
> Key: HDFS-13123
> URL: https://issues.apache.org/jira/browse/HDFS-13123
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS Router-Based Federation Rebalancer.pdf, 
> HDFS-13123.patch
>
>
> Follow the discussion in HDFS-12615. This Jira is to track effort for 
> building a rebalancer tool, used by router-based federation to move data 
> among subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904728#comment-16904728
 ] 

Masatake Iwasaki edited comment on HDFS-14423 at 8/13/19 4:10 AM:
--

java.net.URLDecoder converts "+" into " " (since java.net.URLEncoder converts " 
" into "+"). The code path of {{create}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.


was (Author: iwasakims):
java.net.URLEncoder converts "\+" into " " (since java.net.URLEncoder converts 
" " into "\+"). The code path of {{create}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.
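
The JDK behavior described above is easy to verify with a small, runnable 
snippet (plain java.net, no Hadoop classes needed):

{code:java}
import java.net.URLDecoder;
import java.net.URLEncoder;

public class PlusDemo {
  public static void main(String[] args) throws Exception {
    // URLEncoder maps space to '+', so URLDecoder maps '+' back to space.
    System.out.println(URLEncoder.encode("a b", "UTF-8"));   // a+b
    System.out.println(URLDecoder.decode("a+b", "UTF-8"));   // a b
    // A literal '+' survives decoding only if it was sent as %2B.
    System.out.println(URLDecoder.decode("a%2Bb", "UTF-8")); // a+b
  }
}
{code}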

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904728#comment-16904728
 ] 

Masatake Iwasaki edited comment on HDFS-14423 at 8/13/19 4:11 AM:
--

java.net.URLDecoder converts "\+" into " " (since java.net.URLEncoder converts 
" " into "\+"). The code path of {{create}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.


was (Author: iwasakims):
java.net.URLDecoder converts "+" into " " (since java.net.URLEncoder converts " 
" into "+"). The code path of {{create}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-12 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14725:
---
Attachment: HDFS-14725.branch-2.003.patch

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13505) Turn on HDFS ACLs by default.

2019-08-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13505:
--
Status: In Progress  (was: Patch Available)

> Turn on HDFS ACLs by default.
> -
>
> Key: HDFS-13505
> URL: https://issues.apache.org/jira/browse/HDFS-13505
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13505.00.patch, HDFS-13505.001.patch
>
>
> Turn on HDFS ACLs by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13505) Turn on HDFS ACLs by default.

2019-08-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13505:
--
Attachment: HDFS-13505.002.patch
Status: Patch Available  (was: In Progress)

[~ayushtkn] oops I did missed that. Thanks for the reminder.
Posted rev 002 to update HdfsPermissionsGuide.md

> Turn on HDFS ACLs by default.
> -
>
> Key: HDFS-13505
> URL: https://issues.apache.org/jira/browse/HDFS-13505
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13505.00.patch, HDFS-13505.001.patch, 
> HDFS-13505.002.patch
>
>
> Turn on HDFS ACLs by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-12 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905801#comment-16905801
 ] 

He Xiaoqiao commented on HDFS-14725:


Checked the failed unit tests: {{TestDecommissioningStatus}} and 
{{TestJournalNodeRespectsBindHostKeys}} both pass locally, while 
{{TestWebHdfsTimeouts}} and {{TestDirectoryScanner}} fail as the Jenkins 
report shows. I think they are unrelated to the patch.
[~jojochuang] would you mind taking a review and double-checking? Thanks.

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-12 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905801#comment-16905801
 ] 

He Xiaoqiao edited comment on HDFS-14725 at 8/13/19 4:20 AM:
-

Checked the failed unit tests: {{TestDecommissioningStatus}} and 
{{TestJournalNodeRespectsBindHostKeys}} both pass locally, while 
{{TestWebHdfsTimeouts}} and {{TestDirectoryScanner}} fail as the Jenkins 
report shows. I think they are unrelated to the patch.
[^HDFS-14725.branch-2.003.patch] fixes checkstyle.
[~jojochuang], would you mind taking a review and double-checking? Thanks.


was (Author: hexiaoqiao):
Checked the failed unit tests: {{TestDecommissioningStatus}} and 
{{TestJournalNodeRespectsBindHostKeys}} both pass locally, while 
{{TestWebHdfsTimeouts}} and {{TestDirectoryScanner}} fail as the Jenkins 
report shows. I think they are unrelated to the patch.
[~jojochuang] would you mind taking a review and double-checking? Thanks.

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905815#comment-16905815
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

The result of manual testing:
{noformat}
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -touchz 
'webhdfs://localhost/%'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -touchz 
'webhdfs://localhost/a+b'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -touchz 
'webhdfs://localhost/a%b'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -touchz 
'webhdfs://localhost/a;b'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -mkdir 
'webhdfs://localhost/c+d'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -mkdir 
'webhdfs://localhost/c%d'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -mkdir 
'webhdfs://localhost/c;d'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hdfs dfs -ls /
Found 7 items
-rw-r--r--   1 centos supergroup  0 2019-08-13 13:44 /%
-rw-r--r--   1 centos supergroup  0 2019-08-13 13:44 /a%b
-rw-r--r--   1 centos supergroup  0 2019-08-13 13:44 /a+b
-rw-r--r--   1 centos supergroup  0 2019-08-13 13:44 /a;b
drwxr-xr-x   - centos supergroup  0 2019-08-13 13:44 /c%d
drwxr-xr-x   - centos supergroup  0 2019-08-13 13:44 /c+d
drwxr-xr-x   - centos supergroup  0 2019-08-13 13:44 /c;d

[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ echo foobar > foobar.txt
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -rm 
'webhdfs://localhost/%'
Deleted webhdfs://localhost/%
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -put 
foobar.txt 'webhdfs://localhost/%'
[centos@centos7 hadoop-3.3.0-SNAPSHOT-HDFS-14423]$ bin/hadoop fs -cat 
'webhdfs://localhost/%'
foobar
{noformat}

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-12 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14713:
---
Attachment: HDFS-14713-003.patch

> RBF: routeradmin support refreshRouterArgs command but it not on display
> 
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, after.png, before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed,
> because one value is missing from the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905818#comment-16905818
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

{quote}That's worthy of revert but hopefully we won't wade through the debate 
of being compatible with the revert of an incompatible bug.
{quote}
The fix effectively reverts HDFS-13176. It is an incompatible change for the 
WebHdfsFileSystem (client) of 3.2.0, 3.1.0, and 3.1.1 (and 2.10.0-SNAPSHOT), 
which uses an encoded path in the URL if ";" or "%" is contained. I think it 
is OK because webhdfs in those versions is just broken, as reported here.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14718) HttpFS: Sort response by key names as WebHDFS does

2019-08-12 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905829#comment-16905829
 ] 

Siyao Meng commented on HDFS-14718:
---

[~jojochuang] Thanks for the comment.
(1) I don't think so. People access JSON objects by key. If someone had been 
using an index to access an object while using both WebHDFS and HttpFS, they 
would have noticed the different order.
(2) Theoretically, using LinkedHashMap would only have been faster by an O(1) 
factor - the slowness isn't related to N (the number of files):

{code:title=FSOperations#toJson & toJsonInner, serializing FileStatuses to JSON for HttpFS LISTSTATUS response}
  private static Map<String, Object> toJson(FileStatus[] fileStatuses,
      boolean isFile) {
    Map<String, Object> json = new TreeMap<>();
    Map<String, Object> inner = new TreeMap<>();
    JSONArray statuses = new JSONArray();
    for (FileStatus f : fileStatuses) {
      statuses.add(toJsonInner(f, isFile));
    }
    inner.put(HttpFSFileSystem.FILE_STATUS_JSON, statuses);
    json.put(HttpFSFileSystem.FILE_STATUSES_JSON, inner);
    return json;
  }

  private static Map<String, Object> toJsonInner(FileStatus fileStatus,
      boolean emptyPathSuffix) {
    Map<String, Object> json = new TreeMap<>();
    ...
    json.put(HttpFSFileSystem.PATH_SUFFIX_JSON,
        (emptyPathSuffix) ? "" : fileStatus.getPath().getName());
    ...
  }
{code}

Note the for loop in *FSOperations#toJson* just serializes each FileStatus 
entry into a plain *JSONArray*.
Inside *FSOperations#toJsonInner*, the number of entries inserted for each 
FileStatus entry is a constant (exactly 13 entries for HDFS, for now). Hence 
TreeMap will be slower, but not by much even if a LISTSTATUS request covers a 
million files. Plus, WebHDFS is doing this already (sorting the entry order 
inside each FileStatus).

My PR is just a POC for now. We do need to inspect each map change carefully. 
Also, I might narrow the scope of the jira back to only sorting the order of 
entries inside each FileStatus of a LISTSTATUS response.
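
To make the ordering difference concrete, here is a self-contained sketch 
(JDK-only; the key names are illustrative, not the exact FileStatus JSON keys):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class KeyOrderDemo {
  public static void main(String[] args) {
    Map<String, Object> linked = new LinkedHashMap<>();
    Map<String, Object> tree = new TreeMap<>();
    for (String k : new String[] {"pathSuffix", "type", "length", "owner"}) {
      linked.put(k, "");
      tree.put(k, "");
    }
    // LinkedHashMap keeps insertion order (HttpFS today):
    System.out.println(linked.keySet()); // [pathSuffix, type, length, owner]
    // TreeMap sorts by key (what WebHDFS emits):
    System.out.println(tree.keySet());   // [length, owner, pathSuffix, type]
  }
}
{code}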

> HttpFS: Sort response by key names as WebHDFS does
> --
>
> Key: HDFS-14718
> URL: https://issues.apache.org/jira/browse/HDFS-14718
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> *Example*
> See description of HDFS-14665 for an example of LISTSTATUS.
> *Analysis*
> WebHDFS is [using a 
> TreeMap|https://github.com/apache/hadoop/blob/99bf1dc9eb18f9b4d0338986d1b8fd2232f1232f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java#L120]
>  to serialize HdfsFileStatus, while HttpFS [uses a 
> LinkedHashMap|https://github.com/apache/hadoop/blob/6fcc5639ae32efa5a5d55a6b6cf23af06fc610c3/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L107]
>  to serialize FileStatus.
> *Questions*
> Why the difference? Is this intentional?
> - I looked into the Git history. It seems it's simply because WebHDFS has 
> used TreeMap from the beginning and HttpFS has used LinkedHashMap from the 
> beginning. It is not limited to LISTSTATUS; it applies to every request's 
> JSON serialization.
> Now the real question: could/should we replace ALL LinkedHashMaps with 
> TreeMaps in HttpFS serialization in the FSOperations class?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-12 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905834#comment-16905834
 ] 

Siyao Meng commented on HDFS-14708:
---

Thanks [~leosun08] for fixing this test!

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> org.apache.h

[jira] [Comment Edited] (HDFS-13505) Turn on HDFS ACLs by default.

2019-08-12 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905800#comment-16905800
 ] 

Siyao Meng edited comment on HDFS-13505 at 8/13/19 5:23 AM:


[~ayushtkn] oops I did miss that. Thanks for the reminder.
Posted rev 002 to update HdfsPermissionsGuide.md in accordance with 
hdfs-default.xml


was (Author: smeng):
[~ayushtkn] oops I did missed that. Thanks for the reminder.
Posted rev 002 to update HdfsPermissionsGuide.md

> Turn on HDFS ACLs by default.
> -
>
> Key: HDFS-13505
> URL: https://issues.apache.org/jira/browse/HDFS-13505
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13505.00.patch, HDFS-13505.001.patch, 
> HDFS-13505.002.patch
>
>
> Turn on HDFS ACLs by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-08-12 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905841#comment-16905841
 ] 

Surendra Singh Lilhore commented on HDFS-14720:
---

Hi [~jojochuang],

HDFS-10453 only solves the ReplicationMonitor issue. Actually, the NN should 
ignore the block for replication if the block size is Long.MAX_VALUE; it is 
unnecessary work for the DN if the NN sends a command for a deleted block.

I feel the -HDFS-10453- fix is not correct.
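
For illustration, the kind of NN-side guard being suggested could be as small 
as the sketch below (hypothetical method name and placement; not a patch):

{code:java}
// Hypothetical guard in the NN replication path: a recorded length of
// Long.MAX_VALUE is the sentinel left when the block's file was deleted,
// so there is no point scheduling replication work for it.
private static boolean shouldSkipReplication(Block block) {
  return block.getNumBytes() == Long.MAX_VALUE;
}
{code}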

> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to was deleted from the namenode and the DN got the command after the file's 
> deletion. In this case the command should be ignored.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14719) Correct the safemode threshold value in BlockManagerSafeMode

2019-08-12 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905851#comment-16905851
 ] 

Surendra Singh Lilhore commented on HDFS-14719:
---

Thanks [~hemanthboyina] for the patch.
{code:java}
this.replQueueThreshold =
    conf.getFloat(DFS_NAMENODE_REPL_QUEUE_THRESHOLD_PCT_KEY,
        (float) threshold);{code}
{{threshold}} is used as the default value for {{replQueueThreshold}}, so 
there is no need to type cast here; change {{replQueueThreshold}} to float as 
well.
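
The float-vs-double precision issue behind this jira is easy to reproduce 
(plain Java, runnable as-is):

{code:java}
public class ThresholdDemo {
  public static void main(String[] args) {
    float f = 0.999f;
    double d = f;                    // widening keeps the float's inexact value
    System.out.println(d);           // 0.9990000128746033
    System.out.println(d == 0.999);  // false: the double literal differs,
                                     // so threshold comparisons can surprise
  }
}
{code}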

> Correct the safemode threshold value in BlockManagerSafeMode
> 
>
> Key: HDFS-14719
> URL: https://issues.apache.org/jira/browse/HDFS-14719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14719.patch
>
>
> BlockManagerSafeMode parses the safemode threshold incorrectly. It stores a 
> float value in a double, which can give a different result: if we store the 
> value "0.999f" in a double, it is converted to "0.9990000128746033".
> {code:java}
> this.threshold = conf.getFloat(DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
> DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT);{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=293642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293642
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 13/Aug/19 06:03
Start Date: 13/Aug/19 06:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1259: HDDS-1105 : Add 
mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#issuecomment-520700669
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | compile | 376 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 460 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 665 | trunk passed |
   | -0 | patch | 513 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 370 | the patch passed |
   | +1 | javac | 370 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 86 | hadoop-ozone generated 6 new + 20 unchanged - 0 fixed 
= 26 total (was 20) |
   | +1 | findbugs | 665 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 276 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2393 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 8282 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 2bbc990ac71d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 454420e |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/5/testReport/ |
   | Max. process+thread count | 4636 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tra

[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=293647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293647
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 13/Aug/19 06:07
Start Date: 13/Aug/19 06:07
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1259: HDDS-1105 : Add 
mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r313231490
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServerConfigKeys.java
 ##
 @@ -114,7 +114,7 @@
 
   public static final String OZONE_RECON_TASK_THREAD_COUNT_KEY =
   "ozone.recon.task.thread.count";
-  public static final int OZONE_RECON_TASK_THREAD_COUNT_DEFAULT = 1;
+  public static final int OZONE_RECON_TASK_THREAD_COUNT_DEFAULT = 5;
 
 Review comment:
   This should be set to a multiple of the number of cores instead.
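   
   For instance, a derived default might look like the sketch below 
(illustrative only; the factor of 2 is an assumption, not the committed 
change):
   
   {code:java}
   // Sketch: derive the thread-count default from the available cores
   // rather than a fixed constant; the factor of 2 is arbitrary here.
   public static final int OZONE_RECON_TASK_THREAD_COUNT_DEFAULT =
       Runtime.getRuntime().availableProcessors() * 2;
   {code}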
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293647)
Time Spent: 1h 50m  (was: 1h 40m)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.
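
For reference, a minimal sketch of the delta-fetch pattern with the RocksDB 
Java API (assumes an open {{RocksDB db}} handle and a caller-supplied 
{{lastSequence}}; error handling elided):

{code:java}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionLogIterator;

// Sketch: stream WAL updates newer than the given sequence number.
static void fetchDeltas(RocksDB db, long lastSequence) throws RocksDBException {
  try (TransactionLogIterator itr = db.getUpdatesSince(lastSequence)) {
    while (itr.isValid()) {
      TransactionLogIterator.BatchResult batch = itr.getBatchResult();
      long seq = batch.sequenceNumber();
      // Apply batch.writeBatch() to the local DB copy and persist 'seq'
      // so the next poll can resume from it.
      itr.next();
    }
  }
}
{code}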



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905857#comment-16905857
 ] 

Hadoop QA commented on HDFS-14725:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14725 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977437/HDFS-14725.branch-2.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a64ddb3dd3f2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | br

[jira] [Commented] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905859#comment-16905859
 ] 

Hadoop QA commented on HDFS-14713:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 35s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14713 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977441/HDFS-14713-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e83e6acba503 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454420e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27492/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27492/testReport/ |
| Max. process+thread count | 1611 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hd

[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=293651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293651
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 13/Aug/19 06:19
Start Date: 13/Aug/19 06:19
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313234077
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
 ##
 @@ -270,4 +279,73 @@ public void testUnhealthyContainer() throws Exception {
     Assert.assertEquals(ContainerProtos.Result.CONTAINER_UNHEALTHY,
         dispatcher.dispatch(request.build(), null).getResult());
   }
+
+  @Test
+  public void testApplyTransactionFailure() throws Exception {
+    OzoneOutputStream key =
+        objectStore.getVolume(volumeName).getBucket(bucketName)
+            .createKey("ratis", 1024, ReplicationType.RATIS,
+                ReplicationFactor.ONE, new HashMap<>());
+    // First write and flush creates a container in the datanode
+    key.write("ratis".getBytes());
+    key.flush();
+    key.write("ratis".getBytes());
+
+    // get the name of a valid container
+    OmKeyArgs keyArgs = new OmKeyArgs.Builder().setVolumeName(volumeName)
+        .setBucketName(bucketName).setType(HddsProtos.ReplicationType.RATIS)
+        .setFactor(HddsProtos.ReplicationFactor.ONE).setKeyName("ratis")
+        .build();
+    KeyOutputStream groupOutputStream =
+        (KeyOutputStream) key.getOutputStream();
+    List<OmKeyLocationInfo> locationInfoList =
+        groupOutputStream.getLocationInfoList();
+    Assert.assertEquals(1, locationInfoList.size());
+    OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
+    ContainerData containerData =
+        cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+            .getContainer().getContainerSet()
+            .getContainer(omKeyLocationInfo.getContainerID())
+            .getContainerData();
+    Assert.assertTrue(containerData instanceof KeyValueContainerData);
+    KeyValueContainerData keyValueContainerData =
+        (KeyValueContainerData) containerData;
+    key.close();
+
+    long containerID = omKeyLocationInfo.getContainerID();
+    // delete the container db file
+    FileUtil.fullyDelete(new File(keyValueContainerData.getContainerPath()));
+    Pipeline pipeline = cluster.getStorageContainerLocationClient()
+        .getContainerWithPipeline(containerID).getPipeline();
+    XceiverClientSpi client = xceiverClientManager.acquireClient(pipeline);
+    ContainerProtos.ContainerCommandRequestProto.Builder request =
 
 Review comment:
   The idea is to execute a transaction on the same container. If we write more 
data, it can potentially go to a new container altogether.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293651)
Time Spent: 4h 20m  (was: 4h 10m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions
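
A hedged sketch of the invariant behind this sequence (hypothetical names, not the actual fix): record the first failed apply index and refuse to take a snapshot past it, so the failure survives a restart.
{code}
// Hypothetical guard, not the actual ContainerStateMachine change.
// Assumes applies for a given container are driven by a single thread.
class ApplyFailureGuard {
  private volatile long firstFailedIndex = -1;

  // Called when applyTransaction (e.g. a chunk write) fails.
  void onApplyFailure(long logIndex) {
    if (firstFailedIndex == -1) {
      firstFailedIndex = logIndex;
    }
  }

  // Called before taking a Ratis snapshot.
  void beforeSnapshot() {
    if (firstFailedIndex != -1) {
      // Snapshotting here would drop the failed entry from the log and
      // let the container accept new writes after a restart.
      throw new IllegalStateException(
          "applyTransaction failed at index " + firstFailedIndex);
    }
  }
}
{code}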



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905869#comment-16905869
 ] 

Hadoop QA commented on HDFS-14595:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977436/HDFS-14595.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd276d852a2a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454420e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27489/a

[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-08-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905872#comment-16905872
 ] 

Akira Ajisaka commented on HDFS-14090:
--

Thanks [~crh] for the update.

bq.  I have a concern about the "Permit" wording.

IMHO, it seems to be "Quota" rather than "Permit".

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs. 
> unhealthy clusters.
> For example - if there are 2 name nodes downstream, and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same will start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, as the same rpc queue is 
> maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain some sort of rate limiter 
> configured for each name node and let routers drop/reject/send error 
> requests after a certain threshold. 
> This won’t be a simple change as the router’s ‘Server’ layer would need 
> redesign and implementation. Currently this layer is the same as the name 
> node’s.
> Opening this ticket to discuss, design, and implement this feature.
>  
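
For illustration, a hedged sketch of the simpler rate-limiter option (hypothetical names, not a proposed patch): a bounded permit pool per downstream nameservice that fails fast when a nameservice is saturated, so calls to healthy ones keep flowing.
{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

class NameserviceThrottle {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int permitsPerNameservice;

  NameserviceThrottle(int permitsPerNameservice) {
    this.permitsPerNameservice = permitsPerNameservice;
  }

  <T> T invoke(String nsId, Callable<T> rpc) throws Exception {
    Semaphore s = permits.computeIfAbsent(
        nsId, k -> new Semaphore(permitsPerNameservice));
    if (!s.tryAcquire()) {
      // Overloaded/slow nameservice: reject instead of queueing, so the
      // shared router handler pool is not tied up.
      throw new IOException("no permit available for nameservice " + nsId);
    }
    try {
      return rpc.call();
    } finally {
      s.release();
    }
  }
}
{code}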



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905881#comment-16905881
 ] 

Akira Ajisaka commented on HDFS-14707:
--

Thanks [~iwasakims] for the patch. Mostly looks good to me. One comment:
{code}
if [ "x$JAVA_LIBRARY_PATH" = "x" ]; then
{code}
the check can be replaced with the -z option:
{code}
if [[ -z "${JAVA_LIBRARY_PATH}" ]]; then
{code}
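For context, -z is true exactly when the string is empty, so it expresses the same check directly; the [[ ... ]] form also avoids word splitting on the unquoted variable.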

>  Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2
> 
>
> Key: HDFS-14707
> URL: https://issues.apache.org/jira/browse/HDFS-14707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14707-branch-2.001.patch
>
>
> Currently HTTPFS does not load the hadoop native library since java.library.path 
> is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


