[jira] [Comment Edited] (HDDS-722) ozone datanodes failed to start on few nodes

2018-10-23 Thread Nilotpal Nandi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661759#comment-16661759
 ] 

Nilotpal Nandi edited comment on HDDS-722 at 10/24/18 5:57 AM:
---

all node logs :

[^all-node-ozone-logs-1540356965.tar.gz]

Please untar the logs.

Datanodes failed to start on the following nodes: 172.27.10.199, 172.27.15.131, 172.27.57.0, 172.27.19.74


was (Author: nilotpalnandi):
all node logs :

[^all-node-ozone-logs-1540356965.tar.gz]

> ozone datanodes failed to start on few nodes
> 
>
> Key: HDDS-722
> URL: https://issues.apache.org/jira/browse/HDDS-722
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Critical
> Attachments: all-node-ozone-logs-1540356965.tar.gz
>
>
> steps taken :
> --
>  # put a few keys using ozonefs.
>  # stopped all services of the cluster.
>  # started om and scm.
>  # after some time, started the datanodes.
> Out of 12 datanodes, 4 failed to start.
>  
> Here is the datanode log snippet :
> 
>  
> {noformat}
> 2018-10-24 04:49:30,594 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-9524f4e2-9031-4852-ab7c-11c2da3460db: the 
> StateMachineUpdater hits Throwable
> org.apache.ratis.server.storage.RaftLogIOException: java.io.IOException: 
> Premature EOF from inputStream
>  at org.apache.ratis.server.storage.LogSegment.loadCache(LogSegment.java:299)
>  at 
> org.apache.ratis.server.storage.SegmentedRaftLog.get(SegmentedRaftLog.java:192)
>  at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Premature EOF from inputStream
>  at org.apache.ratis.util.IOUtils.readFully(IOUtils.java:100)
>  at org.apache.ratis.server.storage.LogReader.decodeEntry(LogReader.java:250)
>  at org.apache.ratis.server.storage.LogReader.readEntry(LogReader.java:155)
>  at 
> org.apache.ratis.server.storage.LogInputStream.nextEntry(LogInputStream.java:128)
>  at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:110)
>  at org.apache.ratis.server.storage.LogSegment.access$400(LogSegment.java:43)
>  at 
> org.apache.ratis.server.storage.LogSegment$LogEntryLoader.load(LogSegment.java:167)
>  at 
> org.apache.ratis.server.storage.LogSegment$LogEntryLoader.load(LogSegment.java:161)
>  at org.apache.ratis.server.storage.LogSegment.loadCache(LogSegment.java:295)
>  ... 3 more
> 2018-10-24 04:49:30,598 INFO org.apache.hadoop.ozone.HddsDatanodeService: 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down HddsDatanodeService at 
> ctr-e138-1518143905142-541661-01-03.hwx.site/172.27.57.0
> /
> 2018-10-24 04:49:30,598 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  
> {noformat}
>  
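The "Premature EOF from inputStream" in the stack trace above comes from Ratis reading a raft log segment that ends mid-entry. A minimal sketch of that failure mode (illustrative only, not the actual Ratis `LogReader` code): a length-prefixed entry reader that, like `IOUtils.readFully`, throws when the segment file is truncated before the promised bytes arrive.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class LogEntrySketch {
    // Read exactly len bytes or throw, mirroring readFully semantics.
    static byte[] readFully(InputStream in, int len) throws IOException {
        byte[] buf = new byte[len];
        int off = 0;
        while (off < len) {
            int n = in.read(buf, off, len - off);
            if (n < 0) {
                // Stream ended mid-entry: the segment file is truncated.
                throw new EOFException("Premature EOF from inputStream");
            }
            off += n;
        }
        return buf;
    }

    // Toy entry format: a 1-byte length followed by that many payload bytes.
    static byte[] readEntry(InputStream in) throws IOException {
        int len = in.read();
        if (len < 0) {
            return null; // clean end of segment
        }
        return readFully(in, len);
    }

    public static void main(String[] args) throws IOException {
        // A truncated segment: header promises 4 bytes, only 2 are present.
        InputStream truncated = new ByteArrayInputStream(new byte[]{4, 1, 2});
        try {
            readEntry(truncated);
            System.out.println("read ok");
        } catch (EOFException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A truncated segment like this is consistent with the datanodes having been stopped while log writes were in flight; on restart, loading the segment cache hits the short read and the StateMachineUpdater terminates the process.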



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-23 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13924:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

Committed to the branch. Thanks [~xkrogen] for the review!

> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13924-HDFS-12943.000.patch, 
> HDFS-13924-HDFS-12943.001.patch, HDFS-13924-HDFS-12943.002.patch, 
> HDFS-13924-HDFS-12943.003.patch, HDFS-13924-HDFS-12943.004.patch
>
>
> Internally we found that reading from the ObserverNode may result in 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (perhaps due to communication issues with those 
> DNs), or (we suspect) because of late block reports from some DNs to the 
> observer. The error occurs in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} obtained from the NN side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code - we just have to dynamically set a flag asking the 
> {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.
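The fallback described above can be sketched as follows. The names here are invented for illustration (the real {{ObserverReadProxyProvider}} API is different): on a read failure that indicates the observer is behind, set a flag so the retry is routed to the active NameNode, then clear it.

```java
import java.util.function.Function;

public class ObserverRetrySketch {
    // Stand-in for BlockMissingException.
    static class BlockMissing extends RuntimeException {}

    // Proxy-selection flag, normally hidden inside the proxy provider.
    private boolean preferActive = false;

    // rpc maps a target ("observer" or "active") to a result.
    <T> T read(Function<String, T> rpc) {
        try {
            return rpc.apply(preferActive ? "active" : "observer");
        } catch (BlockMissing e) {
            preferActive = true;          // fall back to the active once
            try {
                return rpc.apply("active");
            } finally {
                preferActive = false;     // reset for subsequent reads
            }
        }
    }

    static String throwMissing() { throw new BlockMissing(); }

    public static void main(String[] args) {
        ObserverRetrySketch c = new ObserverRetrySketch();
        // The observer is missing the block; the retry hits the active.
        String data = c.read(target ->
            target.equals("observer") ? throwMissing() : "block-data");
        System.out.println(data);
    }
}
```

The key design point matches the comment: the retry machinery already exists, so the change is limited to signaling the proxy provider which target to pick on the next attempt.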






[jira] [Commented] (HDDS-722) ozone datanodes failed to start on few nodes

2018-10-23 Thread Nilotpal Nandi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661759#comment-16661759
 ] 

Nilotpal Nandi commented on HDDS-722:
-

all node logs :

[^all-node-ozone-logs-1540356965.tar.gz]







[jira] [Updated] (HDDS-722) ozone datanodes failed to start on few nodes

2018-10-23 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-722:

Attachment: all-node-ozone-logs-1540356965.tar.gz







[jira] [Created] (HDDS-722) ozone datanodes failed to start on few nodes

2018-10-23 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-722:
---

 Summary: ozone datanodes failed to start on few nodes
 Key: HDDS-722
 URL: https://issues.apache.org/jira/browse/HDDS-722
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Nilotpal Nandi








[jira] [Commented] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661757#comment-16661757
 ] 

Mukul Kumar Singh commented on HDDS-716:


TestFreonWithDatanodeRestart is failing after this patch; I will work on a new 
patch.

> Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)
> ---
>
> Key: HDDS-716
> URL: https://issues.apache.org/jira/browse/HDDS-716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-716.001.patch, HDDS-716.002.patch, 
> HDDS-716.003.patch
>
>
> This jira updates Ozone to the latest Ratis snapshot build 
> (0.3.0-aa38160-SNAPSHOT).






[jira] [Commented] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661754#comment-16661754
 ] 

Hadoop QA commented on HDFS-14021:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14021 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945326/HDFS-14021.02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bde7776d1617 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a0c0b79 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25347/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25347/testReport/ |
| Max. process+thread count | 2752 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25347/console |
[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-23 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661747#comment-16661747
 ] 

Chao Sun commented on HDFS-13924:
-

Thanks [~xkrogen] for the help. The test change looks good to me. The latest 
jenkins run looks good. Will commit this shortly.







[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661735#comment-16661735
 ] 

Arpit Agarwal commented on HDDS-719:


bq. org.mockito.internal.util.reflection.Whitebox should not be used because 
this class was removed in Mockito 2.x. Please see HADOOP-14178 and HADOOP-14188.
Thanks for the note [~ajisakaa]. In that case duplicating these functions seems 
to be the right solution.

I'll look at the unit test failures. They are probably related.
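Duplicating the Whitebox helpers amounts to a few lines of plain `java.lang.reflect`. A hedged sketch of what such a duplicate could look like (the helper and field names below are illustrative, not the actual Hadoop test utility):

```java
import java.lang.reflect.Field;

public class WhiteboxSketch {
    // Set a private field by name, as Whitebox.setInternalState did.
    static void setInternalState(Object target, String name, Object value) {
        try {
            Field f = target.getClass().getDeclaredField(name);
            f.setAccessible(true);
            f.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    // Read a private field by name, as Whitebox.getInternalState did.
    static Object getInternalState(Object target, String name) {
        try {
            Field f = target.getClass().getDeclaredField(name);
            f.setAccessible(true);
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hypothetical metrics object standing in for what TestOmMetrics pokes at.
    static class Metrics { private long numKeys = 0; }

    public static void main(String[] args) {
        Metrics m = new Metrics();
        setInternalState(m, "numKeys", 42L);
        System.out.println(getInternalState(m, "numKeys")); // 42
    }
}
```

Owning these helpers locally avoids depending on a Mockito internal package that was never a supported API, which is why it disappeared in 2.x.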

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDDS-715) Ozone compilation against hadoop-3.1 fails

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661734#comment-16661734
 ] 

Arpit Agarwal commented on HDDS-715:


Thanks [~msingh], I missed this one.

> Ozone compilation against hadoop-3.1 fails
> --
>
> Key: HDDS-715
> URL: https://issues.apache.org/jira/browse/HDDS-715
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> Compiling Ozone against hadoop-3.1 fails with the following error.
> {code}
> ozone/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java:[468,26]
>  cannot find symbol
> 03:04:54 2018/10/23 10:04:54 INFO:   symbol:   method getUtcTime()
> 03:04:54 2018/10/23 10:04:54 INFO:   location: class 
> org.apache.hadoop.util.Time
> {code}
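The break above is a call to a `Time#getUtcTime` that does not exist in hadoop-3.1. If a local equivalent were needed instead of the newer Hadoop API, a sketch could look like this (assumption: `getUtcTime` returns the current UTC wall-clock time in epoch milliseconds):

```java
import java.time.Clock;
import java.time.ZoneOffset;

public class UtcTimeSketch {
    private static final Clock UTC = Clock.system(ZoneOffset.UTC);

    // Current UTC time in milliseconds since the epoch.
    public static long getUtcTime() {
        return UTC.millis();
    }

    public static void main(String[] args) {
        long t = getUtcTime();
        // Should be a plausible epoch-millis value (after 2018-01-01 UTC).
        System.out.println(t > 1514764800000L);
    }
}
```

This mirrors the approach in HDDS-719 of removing the dependency on Hadoop 3.2.0-only utility methods rather than pinning Ozone to the newer branch.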






[jira] [Updated] (HDDS-721) NullPointerException thrown while trying to read a file when datanode restarted

2018-10-23 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-721:

Attachment: all-node-ozone-logs-1540356965.tar.gz

> NullPointerException thrown while trying to read a file when datanode 
> restarted
> ---
>
> Key: HDDS-721
> URL: https://issues.apache.org/jira/browse/HDDS-721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Critical
> Attachments: all-node-ozone-logs-1540356965.tar.gz
>
>
> steps taken :
> ---
>  # Put a few files and directories using ozonefs.
>  # stopped all services of the cluster.
>  # started the scm, om and then the datanodes.
> While the datanodes were starting up, tried to read a file. A 
> NullPointerException was thrown.
>  
> {noformat}
> [root@ctr-e138-1518143905142-53-01-03 ~]# 
> /root/hadoop_trunk/ozone-0.3.0-SNAPSHOT/bin/ozone fs -ls -R /
> 2018-10-24 04:48:00,703 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> drwxrwxrwx - root root 0 2018-10-24 04:12 /testdir1
> -rw-rw-rw- 1 root root 5368709120 1970-02-25 15:29 /testdir1/5GB
> -rw-rw-rw- 1 root root 4798 1970-02-25 15:22 /testdir1/passwd
> drwxrwxrwx - root root 0 2018-10-24 04:46 /testdir3
> [root@ctr-e138-1518143905142-53-01-03 ~]# 
> /root/hadoop_trunk/ozone-0.3.0-SNAPSHOT/bin/ozone fs -cat 
> o3fs://fs-bucket.fs-volume/testdir1/passwd
> 2018-10-24 04:49:24,955 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> cat: Exception getting XceiverClient: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.NullPointerException{noformat}
>  
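One way to make the client tolerate this startup window is to treat a failure to obtain an XceiverClient for a not-yet-registered datanode pipeline as transient and retry with a bound. The sketch below uses invented names (this is not the real XceiverClientManager API) purely to illustrate the shape of such a guard:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class XceiverRetrySketch {
    // Retry acquire until it yields a client, the NPE stops, or attempts
    // run out; rethrow the last NPE if the pipeline never became ready.
    static <T> T getWithRetry(Supplier<T> acquire, int attempts, long sleepMs)
            throws InterruptedException {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                T client = acquire.get();
                if (client != null) {
                    return client;
                }
            } catch (NullPointerException e) {
                last = e; // pipeline not registered yet
            }
            Thread.sleep(sleepMs);
        }
        throw last != null ? last : new IllegalStateException("no client");
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger calls = new AtomicInteger();
        // First two attempts simulate a datanode that has not registered yet.
        String client = getWithRetry(
            () -> calls.incrementAndGet() < 3 ? null : "xceiver-client",
            5, 1L);
        System.out.println(client);
    }
}
```

Whether the proper fix belongs in the client or in the datanode registration path is exactly the question this JIRA raises; the sketch only shows that the read need not fail outright while nodes are still coming up.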






[jira] [Commented] (HDDS-721) NullPointerException thrown while trying to read a file when datanode restarted

2018-10-23 Thread Nilotpal Nandi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661731#comment-16661731
 ] 

Nilotpal Nandi commented on HDDS-721:
-

logs from all nodes :

[^all-node-ozone-logs-1540356965.tar.gz]







[jira] [Created] (HDDS-721) NullPointerException thrown while trying to read a file when datanode restarted

2018-10-23 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-721:
---

 Summary: NullPointerException thrown while trying to read a file 
when datanode restarted
 Key: HDDS-721
 URL: https://issues.apache.org/jira/browse/HDDS-721
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Nilotpal Nandi


 






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661722#comment-16661722
 ] 

Hadoop QA commented on HDDS-719:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
54s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 22s{color} 
| {color:red} ozonefs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 27s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 18s{color} | 
{color:black} {color} |
\\

[jira] [Commented] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661719#comment-16661719
 ] 

Hadoop QA commented on HDDS-716:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-716 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945323/HDDS-716.003.patch |
| Optional Tests |  asflicense  compile  

[jira] [Commented] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661705#comment-16661705
 ] 

Hadoop QA commented on HDDS-103:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
28s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/container-service in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | 

[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661684#comment-16661684
 ] 

Hadoop QA commented on HDFS-13996:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  9s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
502 unchanged - 0 fixed = 510 total (was 502) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
9s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13996 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945307/HDFS-13996.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f8b104e7dfda 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a0c0b79 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Resolved] (HDDS-715) Ozone compilation against hadoop-3.1 fails

2018-10-23 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-715.

Resolution: Duplicate

> Ozone compilation against hadoop-3.1 fails
> --
>
> Key: HDDS-715
> URL: https://issues.apache.org/jira/browse/HDDS-715
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> Compiling Ozone against hadoop-3.1 fails with the following error.
> {code}
> ozone/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java:[468,26]
>  cannot find symbol
> 03:04:54 2018/10/23 10:04:54 INFO:   symbol:   method getUtcTime()
> 03:04:54 2018/10/23 10:04:54 INFO:   location: class 
> org.apache.hadoop.util.Time
> {code}
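For reference, the missing {{Time#getUtcTime}} appears to just read the current epoch time through a UTC calendar. A hedged sketch of an equivalent local helper that would compile against hadoop-3.1 (the class name {{UtcTimeSketch}} is illustrative, not from any patch):

```java
import java.util.Calendar;
import java.util.TimeZone;

public final class UtcTimeSketch {
    private static final TimeZone UTC = TimeZone.getTimeZone("UTC");

    // Current wall-clock time in milliseconds since the epoch, read through
    // a UTC calendar. Epoch millis are time-zone independent, so this agrees
    // with System.currentTimeMillis().
    public static long getUtcTime() {
        return Calendar.getInstance(UTC).getTimeInMillis();
    }

    public static void main(String[] args) {
        long utc = UtcTimeSketch.getUtcTime();
        long sys = System.currentTimeMillis();
        // The two readings should agree to within a second.
        System.out.println(Math.abs(utc - sys) < 1000L); // prints true
    }
}
```

Defining such a helper locally avoids any compile-time dependency on the newer Hadoop API.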






[jira] [Updated] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-103:

Attachment: HDDS-103-HDDS-4.04.patch

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-103-HDDS-4.00.patch, HDDS-103-HDDS-4.01.patch, 
> HDDS-103-HDDS-4.02.patch, HDDS-103-HDDS-4.03.patch, HDDS-103-HDDS-4.04.patch
>
>







[jira] [Updated] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14021:
-
Attachment: HDFS-14021.02.patch

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, HDFS-14021.02.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661647#comment-16661647
 ] 

Akira Ajisaka commented on HDDS-719:


org.mockito.internal.util.reflection.Whitebox should not be used because this 
class was removed in Mockito 2.x. Please see HADOOP-14178 and HADOOP-14188.
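For context, the Whitebox-style utilities under discussion boil down to reflection-based access to private fields. A minimal sketch of the technique; the class and field names below are illustrative, not taken from the Hadoop codebase:

```java
import java.lang.reflect.Field;

public class WhiteboxSketch {

    // A stand-in for a production class whose internals a test wants to set.
    static class OmMetricsExample {
        private long numKeyOps = 0;
        long getNumKeyOps() { return numKeyOps; }
    }

    // The core of what Whitebox-style helpers do: set a private field by name.
    static void setInternalState(Object target, String field, Object value)
            throws ReflectiveOperationException {
        Field f = target.getClass().getDeclaredField(field);
        f.setAccessible(true); // bypass the 'private' modifier
        f.set(target, value);  // unwraps boxed values for primitive fields
    }

    public static void main(String[] args) throws Exception {
        OmMetricsExample metrics = new OmMetricsExample();
        setInternalState(metrics, "numKeyOps", 42L);
        System.out.println(metrics.getNumKeyOps()); // prints 42
    }
}
```

Keeping a small local helper like this is one way to avoid depending on Mockito internals, which is exactly what broke when Mockito 2.x removed its `Whitebox` class.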

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661641#comment-16661641
 ] 

Arpit Agarwal commented on HDDS-719:


Sorry, I edited your comment instead of mine; I've reverted the edit.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Comment Edited] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661631#comment-16661631
 ] 

Arpit Agarwal edited comment on HDDS-719 at 10/24/18 2:49 AM:
--

Hi [~arpitagarwal], this mostly looks good to me; a couple of review comments:

 I see the class {{HddsWhiteboxTestUtils}} is copied from 
{{org.apache.hadoop.test.Whitebox}}, but that class was tagged as @Deprecated. 
Should we still use it? Could we use 
{{org.mockito.internal.util.reflection.Whitebox}} in {{TestOmMetrics}} instead? 
In any case, I don't think this copy is necessary.

 Is the {{ITestOzoneContractGetFileStatus}} change also intended in this patch? 
I'm okay with it; just confirming.


was (Author: linyiqun):
Hi [~arpitagarwal], this mostly looks good to me; a couple of review comments:

 I see the class {{HddsWhiteboxTestUtils}} is copied from 
{{org.apache.hadoop.test.Whitebox}}, but that class was tagged as @Deprecated. 
Should we still use it? Could we use 
{{org.mockito.internal.util.reflection.Whitebox}} in {{TestOmMetrics}} instead? 
In any case, I don't think this copy is necessary.

 Is the {{ITestOzoneContractGetFileStatus}} change also intended in this patch? 
I'm okay with it; just confirming.

bq. Is the ITestOzoneContractGetFileStatus change also intended in this patch? 
I'm okay with it; just confirming.
Yes, this was intended. The getLog() method is not available in Hadoop 3.1.0.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Comment Edited] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661639#comment-16661639
 ] 

Arpit Agarwal edited comment on HDDS-719 at 10/24/18 2:49 AM:
--

Thanks for taking a look [~linyiqun]. This code is fairly straightforward 
reflection and should be safe to copy.

I'll take a look to see if we can just use 
{{org.mockito.internal.util.reflection.Whitebox}}.

bq. Is the ITestOzoneContractGetFileStatus change also intended in this patch? 
I'm okay with it; just confirming.
Yes, this is intended. The getLog() method in the base class is not available in 
Hadoop 3.1.0.


was (Author: arpitagarwal):
Thanks for taking a look [~linyiqun]. This code is fairly straightforward 
reflection and should be safe to copy.

I'll take a look to see if we can just use 
{{org.mockito.internal.util.reflection.Whitebox}}.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661639#comment-16661639
 ] 

Arpit Agarwal commented on HDDS-719:


Thanks for taking a look [~linyiqun]. This code is fairly straightforward 
reflection and should be safe to copy.

I'll take a look to see if we can just use 
{{org.mockito.internal.util.reflection.Whitebox}}.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Comment Edited] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661631#comment-16661631
 ] 

Arpit Agarwal edited comment on HDDS-719 at 10/24/18 2:48 AM:
--

Hi [~arpitagarwal], this mostly looks good to me; a couple of review comments:

 I see the class {{HddsWhiteboxTestUtils}} is copied from 
{{org.apache.hadoop.test.Whitebox}}, but that class was tagged as @Deprecated. 
Should we still use it? Could we use 
{{org.mockito.internal.util.reflection.Whitebox}} in {{TestOmMetrics}} instead? 
In any case, I don't think this copy is necessary.

 Is the {{ITestOzoneContractGetFileStatus}} change also intended in this patch? 
I'm okay with it; just confirming.

bq. Is the ITestOzoneContractGetFileStatus change also intended in this patch? 
I'm okay with it; just confirming.
Yes, this was intended. The getLog() method is not available in Hadoop 3.1.0.


was (Author: linyiqun):
Hi [~arpitagarwal], this mostly looks good to me; a couple of review comments:

 I see the class {{HddsWhiteboxTestUtils}} is copied from 
{{org.apache.hadoop.test.Whitebox}}, but that class was tagged as @Deprecated. 
Should we still use it? Could we use 
{{org.mockito.internal.util.reflection.Whitebox}} in {{TestOmMetrics}} instead? 
In any case, I don't think this copy is necessary.

 Is the {{ITestOzoneContractGetFileStatus}} change also intended in this patch? 
I'm okay with it; just confirming.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Resolved] (HDDS-188) TestOmMetrcis should not use the deprecated WhiteBox class

2018-10-23 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDDS-188.

Resolution: Duplicate

> TestOmMetrcis should not use the deprecated WhiteBox class
> --
>
> Key: HDDS-188
> URL: https://issues.apache.org/jira/browse/HDDS-188
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> TestOmMetrcis should stop using {{org.apache.hadoop.test.Whitebox}}.






[jira] [Commented] (HDDS-188) TestOmMetrcis should not use the deprecated WhiteBox class

2018-10-23 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661636#comment-16661636
 ] 

Yiqun Lin commented on HDDS-188:


HDDS-719 is fixing this now, so closing this JIRA as a duplicate.

> TestOmMetrcis should not use the deprecated WhiteBox class
> --
>
> Key: HDDS-188
> URL: https://issues.apache.org/jira/browse/HDDS-188
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> TestOmMetrcis should stop using {{org.apache.hadoop.test.Whitebox}}.






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661631#comment-16661631
 ] 

Yiqun Lin commented on HDDS-719:


Hi [~arpitagarwal], this almost looks great to me. A few review comments:

 I see the class {{HddsWhiteboxTestUtils}} is copied from 
{{org.apache.hadoop.test.Whitebox}}, but that class is tagged @Deprecated. 
Should we still use it? Could we use 
{{org.mockito.internal.util.reflection.Whitebox}} in {{TestOmMetrics}} instead? 
At least, I don't see a need for this copy.

 Is {{ITestOzoneContractGetFileStatus}} also an intended change in this patch? 
I'm okay with it; just confirming.
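
The review is about a Whitebox-style test utility, i.e. a reflection helper that reaches into private state for assertions. The sketch below is illustrative only — the class and field names are invented and this is not the actual {{HddsWhiteboxTestUtils}} or Mockito {{Whitebox}} API, just the reflection pattern both rely on:

```java
import java.lang.reflect.Field;

public class WhiteboxSketch {
    // Stand-in for a metrics class with private internal state.
    static class Metrics {
        private long numKeyOps = 0;
    }

    // Core of what Whitebox-style utilities do: set a private field
    // via reflection, bypassing access checks for the test only.
    static void setInternalState(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true); // allow writing the private field
        f.set(target, value);  // Long is unboxed into the long field
    }

    public static void main(String[] args) throws Exception {
        Metrics m = new Metrics();
        setInternalState(m, "numKeyOps", 42L);
        System.out.println(m.numKeyOps); // prints 42
    }
}
```

The deprecation concern in the comment applies equally to any such helper: reflective access to private fields couples tests to implementation details, which is why projects tend to keep exactly one (or zero) copies of this utility.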

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661630#comment-16661630
 ] 

Mukul Kumar Singh commented on HDDS-716:


Thanks for reviewing the patch [~jnp]. Patch v3 addresses the checkstyle 
issues. In readStateMachineData and writeStateMachineData, 
StateMachineLogEntryProto will always be set.

> Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)
> ---
>
> Key: HDDS-716
> URL: https://issues.apache.org/jira/browse/HDDS-716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-716.001.patch, HDDS-716.002.patch, 
> HDDS-716.003.patch
>
>
> This jira updates the ozone to latest ratis snapshot 
> build(0.3.0-aa38160-SNAPSHOT)






[jira] [Updated] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-716:
---
Attachment: HDDS-716.003.patch

> Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)
> ---
>
> Key: HDDS-716
> URL: https://issues.apache.org/jira/browse/HDDS-716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-716.001.patch, HDDS-716.002.patch, 
> HDDS-716.003.patch
>
>
> This jira updates the ozone to latest ratis snapshot 
> build(0.3.0-aa38160-SNAPSHOT)






[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-10-23 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661621#comment-16661621
 ] 

Yiqun Lin commented on HDDS-718:


Thanks for filing this, [~nandakumar131]. Feel free to attach the patch for 
trunk. :)

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>
> We need a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands on branch ozone-0.3; this Jira is for porting 
> them to trunk.






[jira] [Commented] (HDFS-14022) Failing CTEST test_libhdfs

2018-10-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661605#comment-16661605
 ] 

Ayush Saxena commented on HDFS-14022:
-

Does this have any relation to HADOOP-15856 ?

> Failing CTEST test_libhdfs
> --
>
> Key: HDFS-14022
> URL: https://issues.apache.org/jira/browse/HDFS-14022
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Priority: Major
>
> Here is a list of recurring failures that are seen. There seems to be a 
> problem with invoking build() in MiniDFSClusterBuilder; there are several 
> failures (2 core dumps related to it) in the function:
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> {
>jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); --->
> }
> Failed CTEST tests
> test_test_libhdfs_threaded_hdfs_static
>   test_test_libhdfs_zerocopy_hdfs_static
>   test_libhdfs_threaded_hdfspp_test_shim_static
>   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
>   libhdfs_mini_stress_valgrind_hdfspp_test_static
>   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
>   test_libhdfs_mini_stress_hdfspp_test_shim_static
>   test_hdfs_ext_hdfspp_test_shim_static
> 
> Details of the failures:
>  a) test_test_libhdfs_threaded_hdfs_static
> hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> hdfsOpenFile(/tlhData/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException)
> b) test_test_libhdfs_zerocopy_hdfs_static
> nmdCreate: Builder#build error:
> (unable to get root cause for java.lang.RuntimeException)
> (unable to get stack trace for java.lang.RuntimeException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_zerocopy.c:253
>  (errno: 2): got NULL from cl
> Failure: 
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); ===> Failure 
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
>   "nmdCreate: Builder#build");
> goto error;
> }
> }
> c) test_libhdfs_threaded_hdfspp_test_shim_static
> hdfsOpenFile(/tlhData0002/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> d)
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x0078c513, pid=16765, tid=0x7fc4449717c0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 
> 1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
> compressed oops)
> # Problematic frame:
> # C  [hdfs_ext_hdfspp_test_shim_static+0x38c513]
> #
> # Core dump written. Default location: 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/core
>  or core.16765
> #
> # An error report file with more information is saved as:
> # 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid16765.log
> #
> # If you would like to submit a bug report, please visit:
> #   

[jira] [Commented] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-23 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661602#comment-16661602
 ] 

Bharat Viswanadham commented on HDDS-659:
-

Attached a WIP implementation for these headers; no test cases added yet.

The code is mostly implemented but still needs some modifications.
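
The three headers follow the S3 ListObjectsV2 convention: a page starts after a marker key and is capped at {{max-keys}}. A minimal sketch of that paging logic over a sorted key set — names and the {{"\0"}} strictly-greater trick are illustrative, not the actual Ozone endpoint code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class PaginationSketch {
    static List<String> listKeys(SortedSet<String> allKeys, String startAfter,
                                 String continuationToken, int maxKeys) {
        // continuation-token (last key of the previous page) takes
        // precedence over start-after, as in S3 ListObjectsV2.
        String marker = continuationToken != null ? continuationToken : startAfter;
        // tailSet(marker + "\0") yields keys strictly greater than marker.
        Collection<String> candidates =
            marker == null ? allKeys : allKeys.tailSet(marker + "\0");
        List<String> page = new ArrayList<>();
        for (String key : candidates) {
            if (page.size() == maxKeys) {
                break; // page is truncated; caller would get a new token
            }
            page.add(key);
        }
        return page;
    }

    public static void main(String[] args) {
        SortedSet<String> keys = new TreeSet<>(Arrays.asList("a", "b", "c", "d"));
        System.out.println(listKeys(keys, null, null, 2)); // [a, b]
        System.out.println(listKeys(keys, null, "b", 2));  // [c, d]
    }
}
```

A real implementation would also have to report whether the listing was truncated and echo back the continuation token, but the marker-plus-limit core is the same.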

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging via the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  






[jira] [Updated] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-659:

Attachment: HDDS-659.00-WIP.patch

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging via the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  






[jira] [Assigned] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-659:
---

Assignee: Bharat Viswanadham

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The current implementation always returns all the elements. We need to 
> support paging via the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661592#comment-16661592
 ] 

Arpit Agarwal commented on HDDS-719:


v02 patch:
- Fixes checkstyle and whitespace issues.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
Attachment: HDDS-719.02.patch

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661577#comment-16661577
 ] 

Hadoop QA commented on HDFS-13924:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
5s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
5s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-12943 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.protocol.TestLayoutVersion |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13924 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945281/HDFS-13924-HDFS-12943.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70ff0f395e01 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 

[jira] [Commented] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661575#comment-16661575
 ] 

Íñigo Goiri commented on HDFS-14021:


Thanks [~xiaochen] for the patch.
We can get rid of the checkstyle warning by removing the unused import.
Other than that it looks good.

BTW, note that we only had the TestHdfsNativeCodeLoader failure this time.
We are improving here.

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661519#comment-16661519
 ] 

Daniel Templeton commented on HDFS-14015:
-

Thanks, [~jojochuang] and [~pranay_singh].  How shall we proceed here?  We can 
see that the build for patch 004 (the current patch) appears to be just as 
broken as the build for patch 005 (the placebo patch).  I'm a little nervous to 
commit patch 004 on faith, but I also don't want to make resolving HDFS-14022 a 
dependency for committing patch 004.  Thoughts?

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-23 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661512#comment-16661512
 ] 

Siyao Meng commented on HDFS-13996:
---

Uploaded patch rev 002. Added a unit test in BaseTestHttpFSWith; the way to 
solve the config issue is to add the config when the MiniDFSCluster is started. 
This avoids restarting the NN and breaking other tests. I need to expose the 
MiniDFSCluster in order to get the correct config for setAcl(); see my comment 
in the unit test BaseTestHttpFSWith#testCustomizedUserAndGroupNames for more 
details.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not yet configurable in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.






[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-23 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Attachment: HDFS-13996.002.patch
Status: Patch Available  (was: In Progress)

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not yet configurable in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.






[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-23 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Status: In Progress  (was: Patch Available)

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not yet configurable in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.






[jira] [Commented] (HDFS-14008) NN should log snapshotdiff report

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661496#comment-16661496
 ] 

Hadoop QA commented on HDFS-14008:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
192 unchanged - 0 fixed = 195 total (was 192) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945274/HDFS-14008.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | 

[jira] [Commented] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661495#comment-16661495
 ] 

Hadoop QA commented on HDFS-14021:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14021 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945279/HDFS-14021.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 457b5aced73a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / efdfe67 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25344/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25344/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25344/testReport/ |
| Max. process+thread count | 3154 (vs. ulimit of 1) |
| modules | C: 

[jira] [Assigned] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-659:
---

Assignee: (was: Bharat Viswanadham)

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>
> The current implementation always returns all the elements. We need to 
> support paging via the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  
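The paging semantics described above can be sketched as follows. This is a hypothetical, self-contained Java illustration of S3-style listing (the class and method names are invented, not Ozone's actual S3 gateway code): keys are walked in sorted order, entries up to and including the marker (continuation-token, falling back to start-after) are skipped, and at most max-keys entries are returned together with the token for the next page.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Hypothetical sketch of S3-style bucket listing pagination.
public class ListPagination {
    public static class Result {
        public final List<String> keys = new ArrayList<>();
        public String nextContinuationToken; // null when the listing is complete
        public boolean truncated;
    }

    public static Result list(TreeSet<String> bucketKeys,
                              String startAfter,
                              String continuationToken,
                              int maxKeys) {
        // continuation-token takes precedence over start-after, as in S3
        String marker = continuationToken != null ? continuationToken : startAfter;
        Result r = new Result();
        for (String key : bucketKeys) {
            if (marker != null && key.compareTo(marker) <= 0) {
                continue; // skip keys up to and including the marker
            }
            if (r.keys.size() == maxKeys) {
                r.truncated = true;
                // the last returned key becomes the next continuation token
                r.nextContinuationToken = r.keys.get(r.keys.size() - 1);
                break;
            }
            r.keys.add(key);
        }
        return r;
    }
}
```

A client would loop, passing `nextContinuationToken` back in until `truncated` is false.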



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-659:
---

Assignee: Bharat Viswanadham

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The current implementation always returns all the elements. We need to 
> support paging via the following headers:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  






[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661470#comment-16661470
 ] 

Wei-Chiu Chuang commented on HDFS-14015:


The failures are tracked by HDFS-14022

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Created] (HDFS-14022) Failing CTEST test_libhdfs

2018-10-23 Thread Pranay Singh (JIRA)
Pranay Singh created HDFS-14022:
---

 Summary: Failing CTEST test_libhdfs
 Key: HDFS-14022
 URL: https://issues.apache.org/jira/browse/HDFS-14022
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Pranay Singh


Here is a list of the recurring failures that are seen. There seems to be a 
problem invoking build() on the MiniDFSCluster builder; several failures (2 of 
them with core dumps) point at the following call in the function

struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
{
   jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
"build", "()L" MINIDFS_CLUSTER ";"); --->
}



Failed CTEST tests  
test_test_libhdfs_threaded_hdfs_static
test_test_libhdfs_zerocopy_hdfs_static
test_libhdfs_threaded_hdfspp_test_shim_static
test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
libhdfs_mini_stress_valgrind_hdfspp_test_static
memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
test_libhdfs_mini_stress_hdfspp_test_shim_static
test_hdfs_ext_hdfspp_test_shim_static


Details of the failures:

 a) test_test_libhdfs_threaded_hdfs_static

hdfsOpenFile(/tlhData0001/file1): 
FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
 error:
(unable to get root cause for java.io.FileNotFoundException) --->
(unable to get stack trace for java.io.FileNotFoundException)
TEST_ERROR: failed on 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
 with NULL return return value (errno: 2): expected substring: File does not 
exist
TEST_ERROR: failed on 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
 with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, fs, 
)
hdfsOpenFile(/tlhData/file1): 
FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
 error:
(unable to get root cause for java.io.FileNotFoundException)

b) test_test_libhdfs_zerocopy_hdfs_static

nmdCreate: Builder#build error:
(unable to get root cause for java.lang.RuntimeException)
(unable to get stack trace for java.lang.RuntimeException)
TEST_ERROR: failed on 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_zerocopy.c:253
 (errno: 2): got NULL from cl

Failure: 

struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
{
jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
"build", "()L" MINIDFS_CLUSTER ";"); ===> Failure 
if (jthr) {
printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
  "nmdCreate: Builder#build");
goto error;
}
}

c) test_libhdfs_threaded_hdfspp_test_shim_static

hdfsOpenFile(/tlhData0002/file1): 
FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
 error:
(unable to get root cause for java.io.FileNotFoundException) --->
(unable to get stack trace for java.io.FileNotFoundException)
TEST_ERROR: failed on 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
 with NULL return return value (errno: 2): expected substring: File does not 
exist
TEST_ERROR: failed on 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
 with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, fs, 
)

d)

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x0078c513, pid=16765, tid=0x7fc4449717c0
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 
1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# C  [hdfs_ext_hdfspp_test_shim_static+0x38c513]
#
# Core dump written. Default location: 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/core
 or core.16765
#
# An error report file with more information is saved as:
# 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid16765.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Test time =  16.09 sec
--
Test Failed.
"test_hdfs_ext_hdfspp_test_shim_static" end time: Oct 23 18:46 UTC


nmdCreate: Builder#build error:
(unable to get root cause for java.lang.RuntimeException)
(unable to get stack 

[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661378#comment-16661378
 ] 

Hudson commented on HDFS-13566:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15298/])
HDFS-13566. Add configurable additional RPC listener to NameNode. (cliang: rev 
635786a511344b53b1d3f25c2f29ab5298f6ac74)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHAAuxiliaryPort.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, 
> HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch, 
> HDFS-13566.006.patch, HDFS-13566.007.patch, HDFS-13566.008.patch, 
> HDFS-13566.009.patch, HDFS-13566.010.patch, HDFS-13566.011.patch
>
>
> This Jira adds the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, it extends ipc.Server so that it can be configured with 
> more listeners, binding to different ports but sharing the same call queue 
> and handlers. This is useful when different clients are only allowed to 
> access certain ports. Combined with HDFS-13547, it also allows different 
> ports to have different SASL security levels.
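Configuration-wise, the feature description above suggests an hdfs-site.xml fragment along these lines. This is a hypothetical sketch: the property name `dfs.namenode.rpc-address.auxiliary-ports` is an assumption based on the patch's edits to hdfs-default.xml, not verified against the committed code.

```xml
<!-- Hypothetical sketch; the property name is an assumption. -->
<property>
  <name>dfs.namenode.rpc-address.auxiliary-ports</name>
  <value>9040,9050</value>
  <description>Comma-separated extra ports the NameNode RPC server also
  listens on, sharing the call queue and handlers of the main RPC
  address.</description>
</property>
```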






[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-23 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661362#comment-16661362
 ] 

Lukas Majercak commented on HDFS-12284:
---

[~daryn], I feel like we should distinguish between ServicePrincipalNames and 
UserPrincipalNames for all services in HDFS, or at least give the admin an 
option to override the user principal. The _HOST solution is okay, but it 
relies on DNS giving consistent results. This inconsistency is fine for SPNs, 
as you can have as many as you want in your keytab, but is not okay for client 
principals.

 Say you have a NN running on HOSTNAME and set it up using hdfs/_HOST@DOMAIN 
as the principal name. Now, one day, when your NN starts up and tries to 
resolve itself using _HOST, your DNS server decides to return 
HOSTNAME.domain instead of the usual HOSTNAME. Your NN then uses that as the 
client principal to log in, and the login will fail.

Maybe something like {{dfs.federation.router.kerberos.user.principal}} would be 
better than {{dfs.federation.router.hostname}}.
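The _HOST substitution being discussed can be sketched as follows. In Hadoop the real logic lives in SecurityUtil.getServerPrincipal; this minimal stand-alone version (hypothetical, not the actual implementation) shows why the DNS-provided hostname directly determines the principal the service logs in with:

```java
// Hypothetical sketch of Hadoop-style "_HOST" principal substitution.
// Because the hostname comes from DNS resolution at startup, a DNS
// server that returns "host1.example.com" one day and "host1" the
// next produces a different login principal each time.
public class PrincipalSubstitution {
    public static String getServerPrincipal(String principalConfig,
                                            String hostname) {
        String[] parts = principalConfig.split("[/@]");
        if (parts.length == 3 && "_HOST".equals(parts[1])) {
            // lower-case the hostname, as Kerberos principals conventionally are
            return parts[0] + "/" + hostname.toLowerCase() + "@" + parts[2];
        }
        return principalConfig; // no _HOST placeholder: use the value verbatim
    }
}
```

A fixed user-principal property, as suggested above, would bypass this substitution entirely.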

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, 
> HDFS-12284-HDFS-13532.011.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.






[jira] [Updated] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-23 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13566:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, 
> HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch, 
> HDFS-13566.006.patch, HDFS-13566.007.patch, HDFS-13566.008.patch, 
> HDFS-13566.009.patch, HDFS-13566.010.patch, HDFS-13566.011.patch
>
>
> This Jira adds the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, it extends ipc.Server so that it can be configured with 
> more listeners, binding to different ports but sharing the same call queue 
> and handlers. This is useful when different clients are only allowed to 
> access certain ports. Combined with HDFS-13547, it also allows different 
> ports to have different SASL security levels.






[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-23 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661351#comment-16661351
 ] 

Chen Liang commented on HDFS-13566:
---

The failed tests are unrelated, and passed in my local run. I've committed v011 
patch to trunk. Thanks [~shv] and [~xkrogen] for the review!

> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, 
> HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch, 
> HDFS-13566.006.patch, HDFS-13566.007.patch, HDFS-13566.008.patch, 
> HDFS-13566.009.patch, HDFS-13566.010.patch, HDFS-13566.011.patch
>
>
> This Jira adds the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, it extends ipc.Server so that it can be configured with 
> more listeners, binding to different ports but sharing the same call queue 
> and handlers. This is useful when different clients are only allowed to 
> access certain ports. Combined with HDFS-13547, it also allows different 
> ports to have different SASL security levels.






[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661338#comment-16661338
 ] 

Hudson commented on HDFS-14004:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15297 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15297/])
HDFS-14004. TestLeaseRecovery2#testCloseWhileRecoverLease fails (inigoiri: rev 
efdfe679d64ce9de4ba6aaf2afa34e180f68d969)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java


> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14004-01.patch, HDFS-14004-02.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/






[jira] [Updated] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-23 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13924:
---
Attachment: HDFS-13924-HDFS-12943.004.patch

> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13924-HDFS-12943.000.patch, 
> HDFS-13924-HDFS-12943.001.patch, HDFS-13924-HDFS-12943.002.patch, 
> HDFS-13924-HDFS-12943.003.patch, HDFS-13924-HDFS-12943.004.patch
>
>
> Internally we found that reading from the ObserverNode may result in a 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (maybe due to a communication issue with those 
> DNs), or (we guess) late block reports from some DNs to the observer. This 
> error happens in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} returned from the NN side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code; we just have to dynamically set a flag to ask the 
> {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.
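The proposed "flag a retry to the active" behavior can be sketched as follows. This is a hypothetical, dependency-free Java illustration (the interface and class names are invented, not the actual ObserverReadProxyProvider): a read is first attempted against the observer, and a failure indicating missing block data flips a flag so the retry, and subsequent reads, go to the active NameNode.

```java
// Hypothetical sketch of the retry-on-active idea from the discussion.
public class ObserverReadRetry {
    public interface NameNode { String read(String path); }

    private final NameNode active;
    private final NameNode observer;
    private boolean forceActive = false; // set once observer data proves stale

    public ObserverReadRetry(NameNode active, NameNode observer) {
        this.active = active;
        this.observer = observer;
    }

    public String read(String path) {
        if (!forceActive) {
            try {
                return observer.read(path);
            } catch (RuntimeException blockMissing) {
                // observer is lagging: fall through and retry on the active
                forceActive = true;
            }
        }
        return active.read(path);
    }
}
```

The real implementation would scope the flag per-call or reset it over time rather than pinning the client to the active forever.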






[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-23 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661333#comment-16661333
 ] 

Erik Krogen commented on HDFS-13924:


v003 looks great, +1. Agreed that FindBugs is not related, and that 
TestLayoutVersion is broken separately from this patch (I checked and after a 
trunk merge it passes). My one nit is that the new test should ensure that a 
subsequent read continues to use NN1. I uploaded a v004 patch with this change. 
If it looks good to you feel free to commit.


> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13924-HDFS-12943.000.patch, 
> HDFS-13924-HDFS-12943.001.patch, HDFS-13924-HDFS-12943.002.patch, 
> HDFS-13924-HDFS-12943.003.patch
>
>
> Internally we found that reading from the ObserverNode may result in a 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (maybe due to a communication issue with those 
> DNs), or (we guess) late block reports from some DNs to the observer. This 
> error happens in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} returned from the NN side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code; we just have to dynamically set a flag to ask the 
> {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661317#comment-16661317
 ] 

Hadoop QA commented on HDDS-719:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 45s{color} | {color:orange} root: The patch generated 7 new + 0 unchanged - 
0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Updated] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14004:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn], [~knanasi] and [~jojochuang] for the reviews.
Committed to trunk.

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14004-01.patch, HDFS-14004-02.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661307#comment-16661307
 ] 

Xiao Chen commented on HDFS-14021:
--

Attached a sample failure report and a patch to fix it.

 
 {noformat}
2018-10-15 23:02:12,834 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3839)) - In memory 
blockUCState = UNDER_CONSTRUCTION
2018-10-15 23:02:12,836 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:addStoredBlock(3148)) - BLOCK* addStoredBlock: 
127.0.0.1:38427 is added to blk_-9223372036854775792_1001 (size=0)
2018-10-15 23:02:12,837 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3949)) - BLOCK* block 
RECEIVED_BLOCK: blk_-9223372036854775785_1001 is received from 127.0.0.1:38427
2018-10-15 23:02:12,837 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3952)) - *BLOCK* 
NameNode.processIncrementalBlockReport: from 127.0.0.1:38427 receiving: 0, 
received: 1, deleted: 0
---> 2018-10-15 23:02:12,840 [IPC Server handler 7 on 35885] DEBUG 
BlockStateChange (LowRedundancyBlocks.java:add(293)) - BLOCK* 
NameSystem.LowRedundancyBlock.add: blk_-9223372036854775792_1001 has only 8 
replicas and need 9 replicas so is added to neededReconstructions at priority 
level 2
2018-10-15 23:02:12,840 [IPC Server handler 7 on 35885] INFO hdfs.StateChange 
(FSNamesystem.java:completeFile(2830)) - DIR* completeFile: /foo is closed by 
DFSClient_NONMAPREDUCE_-442030319_1
2018-10-15 23:02:12,841 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3816)) - Reported block 
blk_-9223372036854775784_1001 on 127.0.0.1:44904 size 2097152 replicaState = 
FINALIZED
2018-10-15 23:02:12,841 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3839)) - In memory 
blockUCState = COMPLETE
2018-10-15 23:02:12,841 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:addStoredBlock(3148)) - BLOCK* addStoredBlock: 
127.0.0.1:44904 is added to blk_-9223372036854775792_1001 (size=12582912)
2018-10-15 23:02:12,841 [main] INFO hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
---> 2018-10-15 23:02:12,842 [Block report processor] DEBUG BlockStateChange 
---(LowRedundancyBlocks.java:remove(387)) - BLOCK* 
NameSystem.LowRedundancyBlock.remove: Removing block 
blk_-9223372036854775792_1001 from priority queue 2
2018-10-15 23:02:12,842 [main] INFO hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 8
2018-10-15 23:02:12,842 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3949)) - BLOCK* block 
RECEIVED_BLOCK: blk_-9223372036854775784_1001 is received from 127.0.0.1:44904
2018-10-15 23:02:12,844 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3952)) - *BLOCK* 
NameNode.processIncrementalBlockReport: from 127.0.0.1:44904 receiving: 0, 
received: 1, deleted: 0
2018-10-15 23:02:12,843 
[org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@62e7dffa] INFO 
datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all 
peers.
2018-10-15 23:02:12,843 [main] WARN datanode.DirectoryScanner 
(DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been 
called

 {noformat}

It appears to be a race condition between the block reports and the test's 
check of {{numOfUnderReplicatedBlocks}}. 
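
The usual remedy for this kind of race is to poll the metric with a timeout 
instead of asserting its value once, in the style of Hadoop's 
{{GenericTestUtils.waitFor}}. A minimal self-contained sketch of that pattern 
(class and names are illustrative, not the actual patch):

```java
import java.util.function.BooleanSupplier;

// Sketch of a waitFor-style poll: retry a condition until it holds or a
// timeout expires, rather than asserting immediately while block reports
// may still be in flight.
public class WaitFor {
    public static void waitFor(BooleanSupplier check, long intervalMs,
            long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Timed out waiting for condition");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, emulating block reports
        // draining the low-redundancy queue shortly after file close.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 5000);
        System.out.println("condition met");
    }
}
```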

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Comment Edited] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661307#comment-16661307
 ] 

Xiao Chen edited comment on HDFS-14021 at 10/23/18 9:25 PM:


Attached a sample failure report and a patch to fix it.
 {noformat}
2018-10-15 23:02:12,834 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3839)) - In memory 
blockUCState = UNDER_CONSTRUCTION
2018-10-15 23:02:12,836 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:addStoredBlock(3148)) - BLOCK* addStoredBlock: 
127.0.0.1:38427 is added to blk_-9223372036854775792_1001 (size=0)
2018-10-15 23:02:12,837 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3949)) - BLOCK* block 
RECEIVED_BLOCK: blk_-9223372036854775785_1001 is received from 127.0.0.1:38427
2018-10-15 23:02:12,837 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3952)) - *BLOCK* 
NameNode.processIncrementalBlockReport: from 127.0.0.1:38427 receiving: 0, 
received: 1, deleted: 0
---> 2018-10-15 23:02:12,840 [IPC Server handler 7 on 35885] DEBUG 
BlockStateChange (LowRedundancyBlocks.java:add(293)) - BLOCK* 
NameSystem.LowRedundancyBlock.add: blk_-9223372036854775792_1001 has only 8 
replicas and need 9 replicas so is added to neededReconstructions at priority 
level 2
2018-10-15 23:02:12,840 [IPC Server handler 7 on 35885] INFO hdfs.StateChange 
(FSNamesystem.java:completeFile(2830)) - DIR* completeFile: /foo is closed by 
DFSClient_NONMAPREDUCE_-442030319_1
2018-10-15 23:02:12,841 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3816)) - Reported block 
blk_-9223372036854775784_1001 on 127.0.0.1:44904 size 2097152 replicaState = 
FINALIZED
2018-10-15 23:02:12,841 [Block report processor] DEBUG 
blockmanagement.BlockManager 
(BlockManager.java:processAndHandleReportedBlock(3839)) - In memory 
blockUCState = COMPLETE
2018-10-15 23:02:12,841 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:addStoredBlock(3148)) - BLOCK* addStoredBlock: 
127.0.0.1:44904 is added to blk_-9223372036854775792_1001 (size=12582912)
2018-10-15 23:02:12,841 [main] INFO hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
---> 2018-10-15 23:02:12,842 [Block report processor] DEBUG BlockStateChange 
---(LowRedundancyBlocks.java:remove(387)) - BLOCK* 
NameSystem.LowRedundancyBlock.remove: Removing block 
blk_-9223372036854775792_1001 from priority queue 2
2018-10-15 23:02:12,842 [main] INFO hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 8
2018-10-15 23:02:12,842 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3949)) - BLOCK* block 
RECEIVED_BLOCK: blk_-9223372036854775784_1001 is received from 127.0.0.1:44904
2018-10-15 23:02:12,844 [Block report processor] DEBUG BlockStateChange 
(BlockManager.java:processIncrementalBlockReport(3952)) - *BLOCK* 
NameNode.processIncrementalBlockReport: from 127.0.0.1:44904 receiving: 0, 
received: 1, deleted: 0
2018-10-15 23:02:12,843 
[org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@62e7dffa] INFO 
datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all 
peers.
2018-10-15 23:02:12,843 [main] WARN datanode.DirectoryScanner 
(DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been 
called
 {noformat}

It appears to be a race condition between the block reports and the test's 
check of {{numOfUnderReplicatedBlocks}}. 


was (Author: xiaochen):
Attached a sample failure report, and a patch to fix.

[jira] [Updated] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14021:
-
Status: Patch Available  (was: Open)

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Updated] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14021:
-
Attachment: 
TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Commented] (HDDS-101) SCM CA: generate CSR for SCM CA clients

2018-10-23 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661300#comment-16661300
 ] 

Xiaoyu Yao commented on HDDS-101:
-

[~ajayydv], we hide unnecessary extension details from the callers with this 
wrapper class. Currently, we support Basic/KeyUsage/SAN extensions. The 
customizable parts are simplified for our use case; e.g., for SAN, Builder 
methods to add names, IPs, etc. are added as needed. 
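
A minimal self-contained sketch of the builder style described above, where 
the wrapper hides raw X.509 extension encoding behind simple add methods. 
Class and method names here are illustrative assumptions, not the actual 
HDDS-101 API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical CSR builder sketch: callers add SAN entries through small,
// typed methods instead of constructing extension objects themselves.
public class CsrBuilderSketch {
    private final List<String> sanEntries = new ArrayList<>();

    CsrBuilderSketch addDnsName(String name) {
        sanEntries.add("DNS:" + name);
        return this;
    }

    CsrBuilderSketch addIpAddress(String ip) {
        sanEntries.add("IP:" + ip);
        return this;
    }

    String buildSanExtension() {
        // A real implementation would emit an encoded GeneralNames blob here.
        return String.join(",", sanEntries);
    }

    public static void main(String[] args) {
        String san = new CsrBuilderSketch()
            .addDnsName("scm.example.com")
            .addIpAddress("172.27.10.199")
            .buildSanExtension();
        System.out.println(san); // DNS:scm.example.com,IP:172.27.10.199
    }
}
```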

> SCM CA: generate CSR for SCM CA clients
> ---
>
> Key: HDDS-101
> URL: https://issues.apache.org/jira/browse/HDDS-101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-101-HDDS-4-002.patch, HDDS-101-HDDS-4.001.patch, 
> HDDS-101-HDDS-4.003.patch
>
>







[jira] [Updated] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14021:
-
Attachment: HDFS-14021.01.patch

> TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks
>  fails intermittently
> ---
>
> Key: HDFS-14021
> URL: https://issues.apache.org/jira/browse/HDFS-14021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-14021.01.patch, 
> TEST-org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.xml
>
>
> The test sometimes fail with:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)
> {noformat}






[jira] [Created] (HDFS-14021) TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks fails intermittently

2018-10-23 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-14021:


 Summary: 
TestReconstructStripedBlocksWithRackAwareness#testReconstructForNotEnoughRacks 
fails intermittently
 Key: HDFS-14021
 URL: https://issues.apache.org/jira/browse/HDFS-14021
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, test
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen


The test sometimes fail with:
{noformat}
java.lang.AssertionError: expected:<0> but was:<1>

at 
org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwarness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:171)

{noformat}






[jira] [Updated] (HDFS-14008) NN should log snapshotdiff report

2018-10-23 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14008:

Attachment: HDFS-14008.001.patch

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 3.1.1, 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch
>
>
> It will be helpful to log a message for snapshotdiff to correlate snapshotdiff 
> operations against memory spikes in the NN heap. It will be good to log the 
> details below at the end of a snapshot diff operation; this will help us know 
> the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time
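
A sketch of what such a summary log line could look like. The class and field 
names are hypothetical, not the actual NameNode change:

```java
// Hypothetical holder for the snapshotdiff counters proposed above; a real
// implementation would populate these during the diff and log the summary
// once at the end of the operation.
public class SnapshotDiffReportStats {
    long dirsProcessed, dirsCompared, filesProcessed, filesCompared;
    long childrenListingTimeMs;

    String summary(long elapsedMs) {
        return String.format(
            "SnapshotDiffReport took %d ms: dirs processed=%d, dirs compared=%d, "
            + "files processed=%d, files compared=%d, children listing time=%d ms",
            elapsedMs, dirsProcessed, dirsCompared,
            filesProcessed, filesCompared, childrenListingTimeMs);
    }

    public static void main(String[] args) {
        SnapshotDiffReportStats s = new SnapshotDiffReportStats();
        s.dirsProcessed = 4; s.dirsCompared = 3;
        s.filesProcessed = 20; s.filesCompared = 18;
        s.childrenListingTimeMs = 7;
        System.out.println(s.summary(42));
    }
}
```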






[jira] [Updated] (HDFS-14008) NN should log snapshotdiff report

2018-10-23 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14008:

Status: Patch Available  (was: In Progress)

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.3, 3.1.1, 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch
>
>
> It will be helpful to log a message for snapshotdiff to correlate snapshotdiff 
> operations against memory spikes in the NN heap. It will be good to log the 
> details below at the end of a snapshot diff operation; this will help us know 
> the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time






[jira] [Work started] (HDFS-14008) NN should log snapshotdiff report

2018-10-23 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14008 started by Pranay Singh.
---
> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 3.1.1, 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch
>
>
> It will be helpful to log a message for snapshotdiff to correlate snapshotdiff 
> operations against memory spikes in the NN heap. It will be good to log the 
> details below at the end of a snapshot diff operation; this will help us know 
> the time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time






[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-23 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Attachment: HDDS-697.001.patch

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch
>
>
> Similar to putBlock/GetBlock, putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on datanode. getSmallFile should 
> validate the bcsId while reading the block similar to getBlock.






[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-23 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Status: Open  (was: Patch Available)

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch
>
>
> Similar to putBlock/GetBlock, putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on datanode. getSmallFile should 
> validate the bcsId while reading the block similar to getBlock.






[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-23 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Attachment: (was: HDDS-697.001.patch)

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch
>
>
> Similar to putBlock/GetBlock, putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on datanode. getSmallFile should 
> validate the bcsId while reading the block similar to getBlock.






[jira] [Updated] (HDFS-14020) Emulate Observer node falling far behind the Active

2018-10-23 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-14020:
--
Description: 
Emulate Observer node falling far behind the Active. Ensure readers switch over
to another Observer instead of waiting for the lagging Observer to catch up. If
there is only a single Observer, it should fall back to the Active.
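
The selection rule above can be sketched as a small proxy chooser: prefer the 
first Observer whose lag is under a threshold, and with no eligible Observer 
route the read to the Active. Class, record, and field names are illustrative 
assumptions, not the actual HDFS implementation:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical read-proxy selection: skip observers lagging beyond a bound,
// fall back to the Active when no healthy observer remains.
public class ReadProxySelector {
    record Node(String name, boolean observer, long lagTxns) {}

    static final long MAX_LAG = 1000; // illustrative lag bound, in transactions

    static Node pick(List<Node> nodes) {
        Optional<Node> observer = nodes.stream()
            .filter(n -> n.observer() && n.lagTxns() <= MAX_LAG)
            .findFirst();
        // No observer within the lag bound: route the read to the Active.
        return observer.orElseGet(() -> nodes.stream()
            .filter(n -> !n.observer())
            .findFirst()
            .orElseThrow());
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("active", false, 0),
            new Node("obs1", true, 50_000), // lagging far behind
            new Node("obs2", true, 10));
        System.out.println(pick(nodes).name()); // obs2 is within the lag bound

        List<Node> single = List.of(
            new Node("active", false, 0),
            new Node("obs1", true, 50_000));
        System.out.println(pick(single).name()); // falls back to active
    }
}
```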

> Emulate Observer node falling far behind the Active
> ---
>
> Key: HDFS-14020
> URL: https://issues.apache.org/jira/browse/HDFS-14020
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
>
> Emulate Observer node falling far behind the Active. Ensure readers switch 
> over
> to another Observer instead of waiting for the lagging Observer to catch up. 
> If
> there is only a single Observer, it should fall back to the Active.






[jira] [Created] (HDFS-14020) Emulate Observer node falling far behind the Active

2018-10-23 Thread Sherwood Zheng (JIRA)
Sherwood Zheng created HDFS-14020:
-

 Summary: Emulate Observer node falling far behind the Active
 Key: HDFS-14020
 URL: https://issues.apache.org/jira/browse/HDFS-14020
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Sherwood Zheng
Assignee: Sherwood Zheng









[jira] [Commented] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-23 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661217#comment-16661217
 ] 

Shashikant Banerjee commented on HDDS-697:
--

Patch v1 fixes the test failures. 

testContainerStateMachineFailures seems to be a flaky test: we mark the 
container unhealthy and wait for the closeContainerAction to be queued. 
However, before the assert that verifies the action exists in the pending 
actions queue executes, the datanode may already have removed the action from 
the queue in order to send it to SCM. As a result, the test sometimes passes 
and sometimes doesn't. Removed the assert on the pending action queue from 
the test to make it more stable.

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch
>
>
> Similar to putBlock/GetBlock, putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on datanode. getSmallFile should 
> validate the bcsId while reading the block similar to getBlock.
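
The BCSID validation described above can be sketched as a simple bound check: 
a read fails when the block claims a BCSID newer than the highest BCSID the 
container has committed. Names below are illustrative, not the actual Ozone 
API:

```java
// Hypothetical BCSID check for a read path such as getSmallFile: reject a
// block whose block-commit-sequence id exceeds the container's committed id,
// since the container never saw that commit.
public class BcsIdCheck {
    static void validateBcsId(long containerBcsId, long blockBcsId) {
        if (blockBcsId > containerBcsId) {
            throw new IllegalStateException(
                "Unable to find the block with bcsID " + blockBcsId
                + "; container BCSID is " + containerBcsId);
        }
    }

    public static void main(String[] args) {
        validateBcsId(10, 7); // ok: block committed at or before container BCSID
        try {
            validateBcsId(10, 12); // block claims a commit the container never saw
        } catch (IllegalStateException e) {
            System.out.println("rejected stale read");
        }
    }
}
```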






[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-23 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Attachment: HDDS-697.001.patch

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch
>
>
> Similar to putBlock/GetBlock, putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on datanode. getSmallFile should 
> validate the bcsId while reading the block similar to getBlock.






[jira] [Commented] (HDDS-101) SCM CA: generate CSR for SCM CA clients

2018-10-23 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661201#comment-16661201
 ] 

Ajay Kumar commented on HDDS-101:
-

[~xyao] thanks for the updated patch. Shall we allow custom extensions to be 
added via the builder as well?

> SCM CA: generate CSR for SCM CA clients
> ---
>
> Key: HDDS-101
> URL: https://issues.apache.org/jira/browse/HDDS-101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-101-HDDS-4-002.patch, HDDS-101-HDDS-4.001.patch, 
> HDDS-101-HDDS-4.003.patch
>
>







[jira] [Created] (HDDS-720) ContainerReportPublisher fails when the container is marked unhealthy on Datanodes

2018-10-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-720:


 Summary: ContainerReportPublisher fails when the container is 
marked unhealthy on Datanodes
 Key: HDDS-720
 URL: https://issues.apache.org/jira/browse/HDDS-720
 Project: Hadoop Distributed Data Store
  Issue Type: Test
  Components: Ozone Datanode, SCM
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


{code:java}
2018-10-24 01:15:00,265 ERROR report.ReportPublisher 
(ReportPublisher.java:publishReport(88)) - Exception while publishing report.
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Invalid Container state found: 2
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:558)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:532)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:203)
at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getContainerReport(OzoneContainer.java:168)
at 
org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:83)
at 
org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:50)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
There is no mapping from the Unhealthy container state on the Datanode to a 
LifecycleState of containers in SCM. Hence, the container report publisher 
fails with an Invalid container state exception.

A container is marked unhealthy on the Datanode only when a write transaction 
fails, so that successive updates are rejected and a close-container action is 
sent to SCM to close the container. For all practical purposes, a container in 
the unhealthy state can therefore be mapped to a container in the closing 
state in SCM.
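The proposed mapping can be sketched as below. The enum and class names here are illustrative stand-ins, not the actual Ozone classes; the point is that UNHEALTHY reports as CLOSING instead of raising a StorageContainerException.

```java
// Hypothetical sketch: mapping datanode container states to SCM lifecycle
// states, including the proposed UNHEALTHY -> CLOSING mapping.
public class ContainerStateMapping {
    enum DatanodeState { OPEN, CLOSING, CLOSED, UNHEALTHY }
    enum ScmLifecycleState { OPEN, CLOSING, CLOSED }

    static ScmLifecycleState toScmState(DatanodeState s) {
        switch (s) {
            case OPEN:      return ScmLifecycleState.OPEN;
            case CLOSED:    return ScmLifecycleState.CLOSED;
            // An unhealthy container has a close action pending, so report
            // it as CLOSING rather than failing the whole container report.
            case CLOSING:
            case UNHEALTHY: return ScmLifecycleState.CLOSING;
            default: throw new IllegalStateException("Invalid state: " + s);
        }
    }
}
```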






[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661140#comment-16661140
 ] 

Hadoop QA commented on HDDS-580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 1s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | 

[jira] [Commented] (HDDS-581) Bootstrap DN with private/public key pair

2018-10-23 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661118#comment-16661118
 ] 

Ajay Kumar commented on HDDS-581:
-

The patch utilizes the {{SecurityUtils}} class introduced in [HDDS-580].

> Bootstrap DN with private/public key pair
> -
>
> Key: HDDS-581
> URL: https://issues.apache.org/jira/browse/HDDS-581
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-581-HDDS-4.00.patch
>
>
> This will create a public/private key pair for the HDDS datanode if there 
> isn't one available during secure DN startup.






[jira] [Updated] (HDDS-581) Bootstrap DN with private/public key pair

2018-10-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-581:

Attachment: HDDS-581-HDDS-4.00.patch

> Bootstrap DN with private/public key pair
> -
>
> Key: HDDS-581
> URL: https://issues.apache.org/jira/browse/HDDS-581
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-581-HDDS-4.00.patch
>
>
> This will create a public/private key pair for the HDDS datanode if there 
> isn't one available during secure DN startup.






[jira] [Commented] (HDDS-692) Use the ProgressBar class in the RandomKeyGenerator freon test

2018-10-23 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661114#comment-16661114
 ] 

Nanda kumar commented on HDDS-692:
--

[~horzsolt2006], thanks for working on this.

It is not a good idea to hand the actual task to the {{ProgressBar}} thread.
It should instead work as follows:
 * Instantiate the {{ProgressBar}} class with a {{PrintStream}}, a {{MaxValue}} of 
type Long, and a {{Supplier}} function.
 * {{ProgressBar#start}}: starts the ProgressBar thread.
 * {{ProgressBar#shutdown}}: stops the ProgressBar thread.

Apart from the {{shutdown}} method, which waits for the progress bar to 
complete, we should also have a {{terminate}} method that can be used in case 
of an exception in the actual job. Upon calling {{terminate}}, the 
{{ProgressBar}} thread should terminate immediately.
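A minimal sketch of that contract, assuming hypothetical method bodies (the real freon ProgressBar may differ): the caller runs the actual task itself, and the bar thread only polls a {{Supplier<Long>}} for progress.

```java
import java.io.PrintStream;
import java.util.function.Supplier;

// Illustrative sketch of the suggested ProgressBar contract; names mirror
// the comment (start/shutdown/terminate) but this is not the real class.
public class ProgressBar {
    private final PrintStream out;
    private final long maxValue;
    private final Supplier<Long> progress;
    private volatile boolean running;
    private Thread worker;

    public ProgressBar(PrintStream out, long maxValue, Supplier<Long> progress) {
        this.out = out;
        this.maxValue = maxValue;
        this.progress = progress;
    }

    /** Starts the progress-bar thread; does not run the actual task. */
    public void start() {
        running = true;
        worker = new Thread(() -> {
            // Poll the supplier until the job reports completion.
            while (running && progress.get() < maxValue) {
                out.printf("%d/%d%n", progress.get(), maxValue);
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        });
        worker.start();
    }

    /** Waits for the bar to reach maxValue, then stops the thread. */
    public void shutdown() {
        try { worker.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        running = false;
    }

    /** Stops immediately, e.g. when the actual job throws. */
    public void terminate() {
        running = false;
        worker.interrupt();
        try { worker.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public boolean isRunning() { return running; }
}
```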

> Use the ProgressBar class in the RandomKeyGenerator freon test
> --
>
> Key: HDDS-692
> URL: https://issues.apache.org/jira/browse/HDDS-692
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Zsolt Horvath
>Priority: Major
> Attachments: HDDS-692.001.patch
>
>
> HDDS-443 provides a reusable progress bar to make it easier to add more freon 
> tests, but the existing RandomKeyGenerator test 
> (hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java)
>  still doesn't use it. 
> It would be good to switch to the new progress bar there.






[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661115#comment-16661115
 ] 

Hadoop QA commented on HDFS-14015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 42s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
|   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
|   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945254/HDFS-14015.005.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux ab4e48cd196d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 93fb3b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25342/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25342/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25342/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25342/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
Attachment: HDDS-719.01.patch

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
Attachment: (was: HDDS-719.01.patch)

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661105#comment-16661105
 ] 

Arpit Agarwal commented on HDDS-719:


v01 patch:
- duplicates Time.getUtcTime and a few functions from Whitebox for tests, so 
we don't depend on them being available in Apache Hadoop.
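The duplication approach can be sketched as follows. This is an assumed re-implementation: the class name {{OzoneTime}} is hypothetical, and the body is a guess at what {{Time#getUtcTime}} does (epoch milliseconds obtained via a UTC calendar), not the actual patch contents.

```java
import java.util.Calendar;
import java.util.TimeZone;

// Sketch of copying a small utility locally instead of depending on a
// newer Hadoop release. getUtcTime is an assumed re-implementation.
public final class OzoneTime {
    private static final TimeZone UTC = TimeZone.getTimeZone("UTC");

    private OzoneTime() { } // utility class, no instances

    /** Milliseconds since the epoch, taken from a UTC calendar. */
    public static long getUtcTime() {
        return Calendar.getInstance(UTC).getTimeInMillis();
    }
}
```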

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
Attachment: HDDS-719.01.patch

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
Status: Patch Available  (was: Open)

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics






[jira] [Created] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-719:
--

 Summary: Remove Ozone dependencies on Apache Hadoop 3.2.0
 Key: HDDS-719
 URL: https://issues.apache.org/jira/browse/HDDS-719
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM, test
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


A few more changes to remove dependencies on Hadoop 3.2.0.
# {{Time#getUtcTime}} used by SCM, unit tests and genesis.
# Whitebox class used by TestOmMetrics






[jira] [Assigned] (HDFS-14008) NN should log snapshotdiff report

2018-10-23 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HDFS-14008:
---

Assignee: Pranay Singh

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 3.1.1, 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> It will be helpful to log a message for snapshotdiff to correlate 
> snapshotdiff operations against memory spikes in the NN heap. It would be 
> good to log the details below at the end of a snapshot diff operation; this 
> will help us know the time spent in the snapshotdiff operation and the 
> number of files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time






[jira] [Comment Edited] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661055#comment-16661055
 ] 

Daniel Templeton edited comment on HDFS-14015 at 10/23/18 5:54 PM:
---

I don't see why the tests are failing, but they're failing consistently.  I 
just posted a new patch that doesn't actually change anything important; it 
fixes a typo in a string.  I want to see what happens with a provably innocuous 
patch.


was (Author: templedf):
I don't see why the tests are failing, but they're failing consistently.  I 
just posted a new patch that doesn't actually change anything important; it 
fixes a typo in a string.  I want to see what happens when with a provably 
innocuous patch.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661055#comment-16661055
 ] 

Daniel Templeton commented on HDFS-14015:
-

I don't see why the tests are failing, but they're failing consistently.  I 
just posted a new patch that doesn't actually change anything important; it 
fixes a typo in a string.  I want to see what happens with a provably 
innocuous patch.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Updated] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-23 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14015:

Attachment: HDFS-14015.005.patch

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Commented] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661012#comment-16661012
 ] 

Jitendra Nath Pandey commented on HDDS-716:
---

In {{readStateMachineData}}, the StateMachineLogEntryProto in the LogEntry may 
be null, which would cause an NPE.
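A guard along these lines would avoid the NPE. The types below are simplified stand-ins, not the actual Ratis protos or the patch itself; the idea is to check for the state-machine entry before dereferencing it and surface the problem as a failed future.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of a null guard in readStateMachineData.
public class StateMachineDataReader {
    // Stand-in for a Ratis log entry; the real entry wraps protobufs.
    static class LogEntry {
        final Object stateMachineLogEntry; // stand-in for StateMachineLogEntryProto
        LogEntry(Object smle) { this.stateMachineLogEntry = smle; }
        boolean hasStateMachineLogEntry() { return stateMachineLogEntry != null; }
    }

    static CompletableFuture<Object> readStateMachineData(LogEntry entry) {
        if (!entry.hasStateMachineLogEntry()) {
            // Fail the future instead of dereferencing null.
            CompletableFuture<Object> f = new CompletableFuture<>();
            f.completeExceptionally(
                new IllegalArgumentException("entry has no state machine data"));
            return f;
        }
        return CompletableFuture.completedFuture(entry.stateMachineLogEntry);
    }
}
```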

> Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)
> ---
>
> Key: HDDS-716
> URL: https://issues.apache.org/jira/browse/HDDS-716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-716.001.patch, HDDS-716.002.patch
>
>
> This jira updates ozone to the latest ratis snapshot build 
> (0.3.0-aa38160-SNAPSHOT).






[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-23 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660983#comment-16660983
 ] 

Ajay Kumar commented on HDDS-580:
-

Patch v4 addresses the checkstyle and findbugs warnings and the failed test in 
{{TestSecureOzoneCluster}}.

[~xyao], I reverted the change to terminate in the {{INIT_SECURITY}} case of 
SCM#createSCM, as keeping it would mean getting rid of the tests in 
TestSecureOzoneCluster. Let us know if you think we need to do that.

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch, 
> HDDS-580-HDDS-4.04.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; it will be called by the 
> scm/om admin CLI with the "-init" cmd.
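As a rough illustration of such an "-init" key-bootstrap step, the following uses only the JDK's KeyPairGenerator; the actual HDDS-100 key generator API is not shown, and the RSA/2048 choice is an assumption:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

// Minimal sketch of bootstrapping a public/private key pair, as an
// "-init" command might do. Uses only the JDK KeyPairGenerator; the
// algorithm and key size are illustrative defaults.
public class KeyBootstrapSketch {

    static KeyPair generateKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048); // key size is an assumed default
            return gen.generateKeyPair();
        } catch (NoSuchAlgorithmException e) {
            // RSA is mandated by the JCA spec, so this should not happen
            throw new IllegalStateException("RSA unavailable", e);
        }
    }

    public static void main(String[] args) {
        KeyPair pair = generateKeyPair();
        System.out.println(pair.getPublic().getAlgorithm()); // RSA
        System.out.println(pair.getPrivate() != null);       // true
    }
}
```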






[jira] [Updated] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-580:

Attachment: HDDS-580-HDDS-4.04.patch

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch, 
> HDDS-580-HDDS-4.04.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; it will be called by the 
> scm/om admin CLI with the "-init" cmd.






[jira] [Commented] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660970#comment-16660970
 ] 

Hadoop QA commented on HDDS-716:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  7s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-716 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660942#comment-16660942
 ] 

Hadoop QA commented on HDFS-13941:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:1776208 |
| JIRA Issue | HDFS-13941 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945222/HDFS-13941.branch-3.0.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 294d8be47bf6 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.0 / e4464f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25341/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25341/testReport/ |
| Max. process+thread count | 4999 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660847#comment-16660847
 ] 

Arpit Agarwal commented on HDFS-13941:
--

Thanks [~jojochuang], that was my oversight. The patch backported cleanly and 
the diff looked okay, so I skipped compiling branch-3.0.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.
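A hedged sketch of the overload approach, with simplified stand-in types (the real checkAccess takes token and identifier objects, not strings):

```java
// Illustrative sketch of restoring a pre-HDFS-9807-style checkAccess
// signature as an overload that delegates to the new one. The class
// name and parameter types are simplified stand-ins.
public class CheckAccessSketch {

    /** New signature: storageId parameter was added later. */
    static boolean checkAccess(String token, String blockPoolId, String storageId) {
        // a real implementation would validate the token; here we only
        // show that a null storageId skips the per-storage check
        return token != null && blockPoolId != null;
    }

    /** Restored old signature: the overload keeps old callers compiling. */
    static boolean checkAccess(String token, String blockPoolId) {
        return checkAccess(token, blockPoolId, null); // delegate with no storageId
    }

    public static void main(String[] args) {
        System.out.println(checkAccess("tok", "bp-1"));        // true
        System.out.println(checkAccess("tok", "bp-1", "s-1")); // true
    }
}
```

The overload costs nothing at runtime (it is a one-line delegation) while restoring source and binary compatibility for existing callers of the two-argument form.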






[jira] [Updated] (HDDS-716) Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)

2018-10-23 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-716:
---
Attachment: HDDS-716.002.patch

> Update ozone to latest ratis snapshot build(0.3.0-aa38160-SNAPSHOT)
> ---
>
> Key: HDDS-716
> URL: https://issues.apache.org/jira/browse/HDDS-716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-716.001.patch, HDDS-716.002.patch
>
>
> This jira updates Ozone to the latest Ratis snapshot 
> build (0.3.0-aa38160-SNAPSHOT).





