[jira] [Commented] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894290#comment-16894290
 ] 

Hadoop QA commented on HDFS-14672:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 26 unchanged - 0 fixed = 27 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HDFS-14672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976021/HDFS-12703.branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebf26f58127c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/pa

[jira] [Updated] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-26 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14660:

Attachment: HDFS-14660.003.patch

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.<nameservice>}} include all NameNodes in the cluster, and 
> therefore it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 
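
For illustration, a minimal self-contained sketch of that check follows. The 
names here ({{ObserverRequestCheck}}, {{checkClientStateId}}) are hypothetical 
stand-ins rather than the actual patch, and {{StandbyException}} is modeled 
locally instead of being imported from {{org.apache.hadoop.ipc}}; only the 
shape of the stateId test is shown.

{code:java}
import java.io.IOException;

// Stand-in for org.apache.hadoop.ipc.StandbyException.
class StandbyException extends IOException {
  StandbyException(String msg) { super(msg); }
}

class ObserverRequestCheck {
  /** Assumes a non-positive value means "stateId not set by the client". */
  static void checkClientStateId(long clientStateId) throws StandbyException {
    if (clientStateId <= 0) {
      // The client is not coordinating reads via ObserverReadProxyProvider,
      // so make it fail over rather than serve a possibly stale read.
      throw new StandbyException(
          "Observer received a request without a stateId; the client does "
          + "not appear to be using ObserverReadProxyProvider.");
    }
  }
}
{code}

A client built on {{ObserverReadProxyProvider}} attaches its last-seen stateId 
to each call, so such a check would pass; any other client would get the usual 
StandbyException failover behavior and retry against the active NameNode.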



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-26 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894271#comment-16894271
 ] 

Chao Sun commented on HDFS-14660:
-

[~ayushtkn] Oops, not sure how I missed that. Attached patch v3.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.<nameservice>}} include all NameNodes in the cluster, and 
> therefore it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 






[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2019-07-26 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-14135:

Attachment: HDFS-14135-branch-3.2.013.patch

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch, 
> HDFS-14135-03.patch, HDFS-14135-04.patch, HDFS-14135-05.patch, 
> HDFS-14135-06.patch, HDFS-14135-07.patch, HDFS-14135-08.patch, 
> HDFS-14135-branch-3.2.013.patch, HDFS-14135.009.patch, HDFS-14135.010.patch, 
> HDFS-14135.011.patch, HDFS-14135.012.patch, HDFS-14135.013.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/






[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-26 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894255#comment-16894255
 ] 

Ayush Saxena commented on HDFS-14660:
-

Thanx [~csun] for the patch; it seems [~Harsha1206]'s comment about the if 
condition isn't addressed. Other than that, the fix LGTM.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.<nameservice>}} include all NameNodes in the cluster, and 
> therefore it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 






[jira] [Commented] (HDFS-12967) NNBench should support multi-cluster access

2019-07-26 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894253#comment-16894253
 ] 

Chen Zhang commented on HDFS-12967:
---

Thanks [~jojochuang], my email is chzhang1...@gmail.com

> NNBench should support multi-cluster access
> ---
>
> Key: HDFS-12967
> URL: https://issues.apache.org/jira/browse/HDFS-12967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-12967-001.patch, HDFS-12967-002.patch, 
> HDFS-12967-003.patch, HDFS-12967-004.patch
>
>
> Sometimes we need to run NNBench for scaling tests after making some 
> improvements on the NameNode, which means we have to deploy a new HDFS 
> cluster and a new YARN cluster.
> If NNBench supported multi-cluster access, we would only need to deploy a 
> new HDFS test cluster and add it to the existing YARN cluster, which would 
> make scaling tests easier.
> Moreover, if we want to do A/B tests, we have to run NNBench against 
> different HDFS clusters, so this patch will be helpful there too.






[jira] [Commented] (HDFS-14449) Expose total number of dt in jmx for Namenode

2019-07-26 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894248#comment-16894248
 ] 

Fengnan Li commented on HDFS-14449:
---

Thanks for the review, [~elgoiri]. Fixed as per your comments.

> Expose total number of dt in jmx for Namenode
> -
>
> Key: HDFS-14449
> URL: https://issues.apache.org/jira/browse/HDFS-14449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14449.001.patch, HDFS-14449.002.patch, 
> HDFS-14449.003.patch
>
>







[jira] [Commented] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894250#comment-16894250
 ] 

He Xiaoqiao commented on HDFS-11246:


[~daryn], [~ayushtkn], [~linyiqun], are you interested in helping to continue 
the review of [^HDFS-11246.009.patch]? Thanks.

> FSNameSystem#logAuditEvent should be called outside the read or write locks
> ---
>
> Key: HDFS-11246
> URL: https://issues.apache.org/jira/browse/HDFS-11246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kuhu Shukla
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-11246.001.patch, HDFS-11246.002.patch, 
> HDFS-11246.003.patch, HDFS-11246.004.patch, HDFS-11246.005.patch, 
> HDFS-11246.006.patch, HDFS-11246.007.patch, HDFS-11246.008.patch, 
> HDFS-11246.009.patch
>
>
> {code}
> readLock();
> boolean success = true;
> ContentSummary cs;
> try {
>   checkOperation(OperationCategory.READ);
>   cs = FSDirStatAndListingOp.getContentSummary(dir, src);
> } catch (AccessControlException ace) {
>   success = false;
>   logAuditEvent(success, operationName, src);
>   throw ace;
> } finally {
>   readUnlock(operationName);
> }
> {code}
> It would be nice to have audit logging outside the lock, especially in 
> scenarios where applications hammer a given operation many times. 
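
For illustration, a minimal sketch of that pattern with stand-in lock and 
audit methods (the actual change would live in FSNamesystem and keep its exact 
logging semantics): record the outcome while holding the lock, and emit the 
audit event only after the lock is released.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: doGetContentSummary/logAuditEvent are hypothetical stand-ins.
class AuditOutsideLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  Object getContentSummary(String src) throws Exception {
    final String operationName = "contentSummary";
    boolean success = false;
    Object cs = null;
    Exception failure = null;
    lock.readLock().lock();
    try {
      cs = doGetContentSummary(src);  // the real work, under the read lock
      success = true;
    } catch (Exception ace) {
      failure = ace;                  // remember the outcome, don't log yet
    } finally {
      lock.readLock().unlock();
    }
    // Audit logging after unlock: a slow audit appender no longer extends
    // the time the namespace lock is held.
    logAuditEvent(success, operationName, src);
    if (failure != null) {
      throw failure;
    }
    return cs;
  }

  private Object doGetContentSummary(String src) { return new Object(); }
  private void logAuditEvent(boolean ok, String op, String src) { }
}
{code}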






[jira] [Updated] (HDFS-14449) Expose total number of dt in jmx for Namenode

2019-07-26 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14449:
--
Attachment: HDFS-14449.003.patch

> Expose total number of dt in jmx for Namenode
> -
>
> Key: HDFS-14449
> URL: https://issues.apache.org/jira/browse/HDFS-14449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14449.001.patch, HDFS-14449.002.patch, 
> HDFS-14449.003.patch
>
>







[jira] [Commented] (HDFS-10927) Lease Recovery: File not getting closed on HDFS when block write operation fails

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894247#comment-16894247
 ] 

He Xiaoqiao commented on HDFS-10927:


[~zhangchen], [~jojochuang] Thanks for your detailed comments and references. 
It seems I misunderstood this issue. Should we close this JIRA as a duplicate 
now?

> Lease Recovery: File not getting closed on HDFS when block write operation 
> fails
> 
>
> Key: HDFS-10927
> URL: https://issues.apache.org/jira/browse/HDFS-10927
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Nitin Goswami
>Priority: Major
>
> HDFS was unable to close a file when a block write operation failed because 
> of high disk usage.
> Scenario:
> HBase was writing WAL logs on HDFS and the disk usage was too high at that 
> time. While writing these WAL logs, one of the block write operations failed 
> with the following exception:
> 2016-09-13 10:00:49,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-337226066-192.168.193.217-1468912147102:blk_1074859607_1160899
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/192.168.194.144:50010 remote=/192.168.192.162:43105]
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.BufferedInputStream.fill(Unknown Source)
> at java.io.BufferedInputStream.read1(Unknown Source)
> at java.io.BufferedInputStream.read(Unknown Source)
> at java.io.DataInputStream.read(Unknown Source)
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:807)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Unknown Source)
> After this exception, HBase tried to close/roll over the WAL file, but that 
> call also failed and the WAL file couldn't be closed. After this, HBase shut 
> down the region server.
> After some time, lease recovery was triggered for this file and the 
> following exceptions started occurring:
> 2016-09-13 11:51:11,743 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-337226066-192.168.193.217-1468912147102:blk_1074859607_1161187) from 
> datanode (=DatanodeInfoWithStorage[192.168.192.162:50010,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_1074859607_1161187, RBW
>   getNumBytes() = 45524696
>   getBytesOnDisk()  = 45483527
>   getVisibleLength()= 45511557
>   getVolume()   = /opt/reflex/data/yarn/datanode/current
>   getBlockFile()= 
> /opt/reflex/data/yarn/datanode/current/BP-337226066-192.168.193.217-1468912147102/current/rbw/blk_1074859607
>   bytesAcked=45511557
>   bytesOnDisk=45483527
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2278)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2254)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2542)
> at 
> org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
> at 
> org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcIn

[jira] [Commented] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894244#comment-16894244
 ] 

He Xiaoqiao commented on HDFS-14672:


{quote}Re-launcher Jenkins to verify patch HDFS-12703.branch-2.001.patch.
{quote}
Jenkins was not triggered, so I uploaded [^HDFS-12703.branch-2.002.patch], 
which is exactly the same as [^HDFS-12703.branch-2.001.patch], in order to 
trigger the pre-commit build and test.

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch, 
> HDFS-12703.branch-2.002.patch
>
>
> Currently, the fix that keeps decommission monitor exceptions from being 
> fatal to the NameNode is only in trunk (branch-3). This JIRA aims to 
> backport that bugfix to branch-2.






[jira] [Comment Edited] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894244#comment-16894244
 ] 

He Xiaoqiao edited comment on HDFS-14672 at 7/27/19 3:41 AM:
-

Thanks [~elgoiri] and [~xkrogen] for your reviews.
{quote}Re-launcher Jenkins to verify patch HDFS-12703.branch-2.001.patch.
{quote}
Jenkins was not triggered, so I uploaded [^HDFS-12703.branch-2.002.patch], 
which is exactly the same as [^HDFS-12703.branch-2.001.patch], in order to 
trigger the pre-commit build and test.


was (Author: hexiaoqiao):
{quote}Re-launcher Jenkins to verify patch HDFS-12703.branch-2.001.patch.
{quote}
Jenkins was not triggered, so I uploaded [^HDFS-12703.branch-2.002.patch], 
which is exactly the same as [^HDFS-12703.branch-2.001.patch], in order to 
trigger the pre-commit build and test.

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch, 
> HDFS-12703.branch-2.002.patch
>
>
> Currently, the fix that keeps decommission monitor exceptions from being 
> fatal to the NameNode is only in trunk (branch-3). This JIRA aims to 
> backport that bugfix to branch-2.






[jira] [Updated] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14672:
---
Attachment: HDFS-12703.branch-2.002.patch

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch, 
> HDFS-12703.branch-2.002.patch
>
>
> Currently, the fix that keeps decommission monitor exceptions from being 
> fatal to the NameNode is only in trunk (branch-3). This JIRA aims to 
> backport that bugfix to branch-2.






[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894243#comment-16894243
 ] 

He Xiaoqiao commented on HDFS-14461:


The failed unit tests are related to this patch; I will check them later.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to a race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integri

[jira] [Commented] (HDFS-14632) Reduce useless #getNumLiveDataNodes call in SafeModeMonitor

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894242#comment-16894242
 ] 

He Xiaoqiao commented on HDFS-14632:


Cherry-picking [^HDFS-14632.006.patch] to branch-3.2 and branch-3.1 is clean, 
per local verification.
[~jojochuang], please feel free to cherry-pick it to branch-3.2 and 
branch-3.1; I don't have commit privileges. Thanks.

> Reduce useless #getNumLiveDataNodes call in SafeModeMonitor
> ---
>
> Key: HDFS-14632
> URL: https://issues.apache.org/jira/browse/HDFS-14632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.9.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-14632.001.patch, HDFS-14632.002.patch, 
> HDFS-14632.003.patch, HDFS-14632.004.patch, HDFS-14632.005.patch, 
> HDFS-14632.006.patch
>
>
> As mentioned in HDFS-14171, SafeModeMonitor invokes #getNumLiveDataNodes 
> uselessly if no DataNode threshold is configured.
> The root cause is that BlockManagerSafeMode#reportStatus (for trunk) or 
> SafeModeInfo#reportStatus (for branch-2.8) prints status every 20 seconds 
> and invokes SafeModeInfo#getTurnOffTip.
> The optimization is to check whether a DataNode threshold is configured and 
> only then get the number of live DataNodes, as sketched below.
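
For illustration, a minimal sketch of that optimization. The class and method 
names are hypothetical stand-ins ({{dfs.namenode.safemode.min.datanodes}} is 
the threshold setting being referred to):

{code:java}
// Sketch only: skip the live-DataNode count unless a threshold is configured.
class SafeModeStatusSketch {
  private final int datanodeThreshold; // dfs.namenode.safemode.min.datanodes

  SafeModeStatusSketch(int datanodeThreshold) {
    this.datanodeThreshold = datanodeThreshold;
  }

  String getTurnOffTip() {
    if (datanodeThreshold <= 0) {
      // No DataNode threshold configured, so the live-node count that
      // reportStatus would otherwise trigger every 20 seconds is skipped.
      return "Safe mode ON: no DataNode threshold configured.";
    }
    int live = getNumLiveDataNodes();
    return "Safe mode ON: " + live + " of " + datanodeThreshold
        + " required live DataNodes have reported in.";
  }

  private int getNumLiveDataNodes() { return 0; } // stand-in for the real call
}
{code}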






[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=283720&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283720
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 27/Jul/19 03:26
Start Date: 27/Jul/19 03:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-515647805
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 734 | trunk passed |
   | +1 | compile | 361 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 898 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 632 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 574 | the patch passed |
   | +1 | compile | 376 | the patch passed |
   | +1 | javac | 376 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 800 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 331 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2061 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8252 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1174 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux aeb69cfe5e59 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2fe450c |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/testReport/ |
   | Max. process+thread count | 4080 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283720)
Time Spent: 1h 20m  (was: 1h 10m)

> Make changes required for Non-HA to use new HA code in OM.
> 

[jira] [Commented] (HDFS-14425) Native build fails on macos due to jlong in hdfs.c

2019-07-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894219#comment-16894219
 ] 

Hudson commented on HDFS-14425:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16994 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16994/])
HDFS-14425. Native build fails on macos due to jlong in hdfs.c (#741) (weichiu: 
rev 2fe450cb5e294a39e30b0d253c1e09135d967ba8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c


> Native build fails on macos due to jlong in hdfs.c
> --
>
> Key: HDFS-14425
> URL: https://issues.apache.org/jira/browse/HDFS-14425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>  Labels: mac
> Fix For: 3.3.0
>
>
> [WARNING] 
> /Users/xx/tmp/idea/hadoop-3.2.0-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c:3033:49:
>  warning: incompatible pointer types passing 'tOffset *' (aka 'long 
> long *') to parameter of type 'jlong *' (aka 'long *') 
> [-Wincompatible-pointer-types]






[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894217#comment-16894217
 ] 

Hadoop QA commented on HDFS-14034:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
714 unchanged - 8 fixed = 718 total (was 722) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
51s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14034 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975907/HDFS-14034.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0019ca8af5f2 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchproce

[jira] [Commented] (HDFS-14632) Reduce useless #getNumLiveDataNodes call in SafeModeMonitor

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894205#comment-16894205
 ] 

Wei-Chiu Chuang commented on HDFS-14632:


I think this is safe to cherry-pick into lower branches, at least 3.2 and 3.1.

> Reduce useless #getNumLiveDataNodes call in SafeModeMonitor
> ---
>
> Key: HDFS-14632
> URL: https://issues.apache.org/jira/browse/HDFS-14632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.9.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-14632.001.patch, HDFS-14632.002.patch, 
> HDFS-14632.003.patch, HDFS-14632.004.patch, HDFS-14632.005.patch, 
> HDFS-14632.006.patch
>
>
> As mentioned in HDFS-14171, SafeModeMonitor invokes #getNumLiveDataNodes 
> uselessly if no DataNode threshold is configured.
> The root cause is that BlockManagerSafeMode#reportStatus (for trunk) or 
> SafeModeInfo#reportStatus (for branch-2.8) prints status every 20 seconds 
> and invokes SafeModeInfo#getTurnOffTip.
> The optimization is to check whether a DataNode threshold is configured and 
> only then get the number of live DataNodes.






[jira] [Commented] (HDFS-12967) NNBench should support multi-cluster access

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894203#comment-16894203
 ] 

Wei-Chiu Chuang commented on HDFS-12967:


[~zhangchen] do you have an email address? I'd like to credit you in the git 
commit message. Thanks.

> NNBench should support multi-cluster access
> ---
>
> Key: HDFS-12967
> URL: https://issues.apache.org/jira/browse/HDFS-12967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-12967-001.patch, HDFS-12967-002.patch, 
> HDFS-12967-003.patch, HDFS-12967-004.patch
>
>
> Sometimes we need to run NNBench for scaling tests after making some 
> improvements on the NameNode, which means we have to deploy a new HDFS 
> cluster and a new YARN cluster.
> If NNBench supported multi-cluster access, we would only need to deploy a 
> new HDFS test cluster and add it to the existing YARN cluster, which would 
> make scaling tests easier.
> Moreover, if we want to do A/B tests, we have to run NNBench against 
> different HDFS clusters, so this patch will be helpful there too.






[jira] [Updated] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12703:
---
Fix Version/s: 3.1.3

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: He Xiaoqiao
>Priority: Critical
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12703.001.patch, HDFS-12703.002.patch, 
> HDFS-12703.003.patch, HDFS-12703.004.patch, HDFS-12703.005.patch, 
> HDFS-12703.006.patch, HDFS-12703.007.patch, HDFS-12703.008.patch, 
> HDFS-12703.009.patch, HDFS-12703.010.patch, HDFS-12703.011.patch, 
> HDFS-12703.012.patch, HDFS-12703.013.patch
>
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.
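
For illustration, a minimal standalone demonstration of both the failure mode 
described above and the shape of the fix: without the catch-all guard inside 
the task, the first uncaught exception silently cancels every subsequent run. 
{{runOneScan}} is a hypothetical stand-in for the monitor's per-cycle work.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class MonitorGuardSketch {
  public static void main(String[] args) {
    ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();
    executor.scheduleAtFixedRate(() -> {
      try {
        runOneScan();
      } catch (Throwable t) {
        // Per the scheduleAtFixedRate javadoc, an exception that escapes
        // this Runnable suppresses all subsequent executions; catching it
        // here logs the cause and keeps the monitor alive for the next cycle.
        System.err.println("Monitor cycle failed, will retry: " + t);
      }
    }, 0, 30, TimeUnit.SECONDS);
  }

  private static void runOneScan() {
    // stand-in: may throw on a bad node/block state in the real monitor
  }
}
{code}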






[jira] [Commented] (HDFS-12967) NNBench should support multi-cluster access

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894200#comment-16894200
 ] 

Wei-Chiu Chuang commented on HDFS-12967:


+1

> NNBench should support multi-cluster access
> ---
>
> Key: HDFS-12967
> URL: https://issues.apache.org/jira/browse/HDFS-12967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-12967-001.patch, HDFS-12967-002.patch, 
> HDFS-12967-003.patch, HDFS-12967-004.patch
>
>
> Sometimes we need to run NNBench for scaling tests after making some 
> improvements on the NameNode, which means we have to deploy a new HDFS 
> cluster and a new YARN cluster.
> If NNBench supported multi-cluster access, we would only need to deploy a 
> new HDFS test cluster and add it to the existing YARN cluster, which would 
> make scaling tests easier.
> Moreover, if we want to do A/B tests, we have to run NNBench against 
> different HDFS clusters, so this patch will be helpful there too.






[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2019-07-26 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894193#comment-16894193
 ] 

Masatake Iwasaki commented on HDFS-14135:
-

Thanks [~xkrogen] for taking care of it. I'm going to backport the patch to 
older branches.

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch, 
> HDFS-14135-03.patch, HDFS-14135-04.patch, HDFS-14135-05.patch, 
> HDFS-14135-06.patch, HDFS-14135-07.patch, HDFS-14135-08.patch, 
> HDFS-14135.009.patch, HDFS-14135.010.patch, HDFS-14135.011.patch, 
> HDFS-14135.012.patch, HDFS-14135.013.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/






[jira] [Commented] (HDFS-13783) Balancer: make balancer to be a long service process for easy to monitor it.

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894185#comment-16894185
 ] 

Hadoop QA commented on HDFS-13783:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 479 unchanged - 0 fixed = 486 total (was 479) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975978/HDFS-13783.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 490342c3cc12 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 62efb63 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.

[jira] [Work logged] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?focusedWorklogId=283681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283681
 ]

ASF GitHub Bot logged work on HDDS-1852:


Author: ASF GitHub Bot
Created on: 26/Jul/19 23:21
Start Date: 26/Jul/19 23:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1173: HDDS-1852. Fix 
typo in TestOmAcls
URL: https://github.com/apache/hadoop/pull/1173#issuecomment-515626965
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 102 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 622 | trunk passed |
   | +1 | compile | 383 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 904 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 429 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 620 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 576 | the patch passed |
   | +1 | compile | 369 | the patch passed |
   | +1 | javac | 369 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 639 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 369 | hadoop-hdds in the patch failed. |
   | -1 | unit | 275 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6172 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1173/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1173 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 66af5783b60d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62efb63 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1173/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1173/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1173/1/testReport/ |
   | Max. process+thread count | 1408 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1173/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283681)
Time Spent: 40m  (was: 0.5h)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObje

[jira] [Updated] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1856:
-
Target Version/s: 0.5.0
  Status: Patch Available  (was: Open)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be 
> used in the non-HA path to wait for completion; when it completes, the 
> response is returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. 
> This way, in non-HA, when multiple RPC handler threads call preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in the 
> order the requests are received.
>  
> In this Jira, we shall not convert the non-HA code path to use this, as the 
> security and ACL work needed for this new model is not yet complete.
>  
>  
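
For illustration, a minimal sketch of the future-returning double buffer 
described above; all names here are assumptions, not the actual OM classes:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

class DoubleBufferSketch<T> {

  static final class Entry<R> {
    final R response;
    final CompletableFuture<Void> flushed = new CompletableFuture<>();
    Entry(R response) { this.response = response; }
  }

  private Queue<Entry<T>> current = new ArrayDeque<>();

  // Called from validateAndUpdateCache; the lock imposes a single total
  // order, so entries are buffered in the order requests were validated.
  synchronized CompletableFuture<Void> add(T response) {
    Entry<T> entry = new Entry<>(response);
    current.add(entry);
    return entry.flushed;
  }

  // Background flush: swap buffers, persist the batch, then complete the
  // futures so waiting non-HA handler threads can reply to their clients.
  synchronized void flush() {
    Queue<Entry<T>> toFlush = current;
    current = new ArrayDeque<>();
    // ... write toFlush to the DB in one batch here ...
    toFlush.forEach(e -> e.flushed.complete(null));
  }
}
{code}

A non-HA handler thread would call add(response) from validateAndUpdateCache 
and block on the returned future before replying to the client.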



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=283668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283668
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 26/Jul/19 22:45
Start Date: 26/Jul/19 22:45
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283668)
Time Spent: 1h 10m  (was: 1h)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be 
> used in the non-HA path to wait for completion; when it completes, the 
> response is returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. 
> This way, in non-HA, when multiple RPC handler threads call preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in the 
> order the requests are received.
>  
> In this Jira, we shall not convert the non-HA code path to use this, as the 
> security and ACL work needed for this new model is not yet complete.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?focusedWorklogId=283667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283667
 ]

ASF GitHub Bot logged work on HDDS-1834:


Author: ASF GitHub Bot
Created on: 26/Jul/19 22:45
Start Date: 26/Jul/19 22:45
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1171: HDDS-1834. parent 
directories not found in secure setup due to ACL check
URL: https://github.com/apache/hadoop/pull/1171#issuecomment-515620872
 
 
   Cool, acceptance tests are passing. I am +1. Leaving this open so that 
@lokeshj1703 gets a chance to look at it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283667)
Time Spent: 50m  (was: 40m)

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The ozonesecure-ozonefs acceptance test is failing because {{ozone fs -mkdir 
> -p}} only creates a key for the specified directory, not for its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on the first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}
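
For context, a minimal sketch of the parent-key enumeration that the expected 
listing implies (hypothetical helper, not the actual o3fs client code):

{code:java}
import java.util.ArrayList;
import java.util.List;

class MkdirSketch {
  // For "testdir/deep", emit "testdir/" and "testdir/deep/" so every ancestor
  // exists as its own directory key, matching the expected listing above.
  static List<String> directoryKeys(String path) {
    List<String> keys = new ArrayList<>();
    StringBuilder prefix = new StringBuilder();
    for (String component : path.split("/")) {
      if (component.isEmpty()) {
        continue;
      }
      prefix.append(component).append('/');
      keys.add(prefix.toString());
    }
    return keys;
  }
}
{code}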



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14425) Native build fails on macos due to jlong in hdfs.c

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14425.

   Resolution: Fixed
Fix Version/s: 3.3.0

Merged into trunk.
Thanks [~hunhun]!

> Native build fails on macos due to jlong in hdfs.c
> --
>
> Key: HDFS-14425
> URL: https://issues.apache.org/jira/browse/HDFS-14425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>  Labels: mac
> Fix For: 3.3.0
>
>
> [WARNING] 
> /Users/xx/tmp/idea/hadoop-3.2.0-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c:3033:49: 
> warning: incompatible pointer types passing 'tOffset *' (aka 'long long *') 
> to parameter of type 'jlong *' (aka 'long *') 
> [-Wincompatible-pointer-types]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1867) Invalid Prometheus metric name from JvmMetrics

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1867?focusedWorklogId=283662&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283662
 ]

ASF GitHub Bot logged work on HDDS-1867:


Author: ASF GitHub Bot
Created on: 26/Jul/19 22:33
Start Date: 26/Jul/19 22:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1172: HDDS-1867. 
Invalid Prometheus metric name from JvmMetrics
URL: https://github.com/apache/hadoop/pull/1172#issuecomment-515618953
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 581 | trunk passed |
   | +1 | compile | 360 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 891 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 427 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 625 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 542 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 812 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 367 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1927 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7863 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1172/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1172 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 637a8a18a4c4 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62efb63 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1172/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1172/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1172/1/testReport/ |
   | Max. process+thread count | 4053 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1172/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283662)
Time Spent: 50m  (was: 40m)

> Invalid Prometheus metric name from JvmMetrics
> --
>
> Key: HDDS-1867
> URL: https://issues.apache.or
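
For background, Prometheus metric names must match [a-zA-Z_:][a-zA-Z0-9_:]*. A 
hedged sketch of the kind of normalization such a fix implies (hypothetical 
helper, not the actual Ozone metrics sink code):

{code:java}
class PrometheusNameSketch {
  static String normalize(String metricName) {
    // Replace every character outside the allowed set with an underscore.
    String safe = metricName.replaceAll("[^a-zA-Z0-9_:]", "_");
    // A leading digit is also invalid, so guard the first character too.
    return safe.isEmpty() || Character.isDigit(safe.charAt(0))
        ? "_" + safe : safe;
  }
}
{code}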

[jira] [Commented] (HDDS-1833) RefCountedDB printing of stacktrace should be moved to trace logging

2019-07-26 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894172#comment-16894172
 ] 

Eric Yang commented on HDDS-1833:
-

[~swagle] Sorry, I don't think that is true.

From the [Java 
spec|https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.12.4]:

{quote}
Example 15.12.4.1-2. Evaluation Order During Method Invocation

As part of an instance method invocation (§15.12), there is an expression that 
denotes the object to be invoked. This expression appears to be fully evaluated 
before any part of any argument expression to the method invocation is 
evaluated.{quote}

ExceptionUtils.getStackTrace() is fully evaluated before trace method 
invocation.
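
For reference, a minimal sketch of the evaluation-order point above 
(hypothetical logger usage, not the RefCountedDB code itself):

{code:java}
import org.apache.commons.lang3.exception.ExceptionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TraceGuardExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(TraceGuardExample.class);

  void demo(Exception e) {
    // The argument expression is evaluated unconditionally (JLS 15.12.4):
    // the stack trace string is built even when TRACE is disabled and the
    // message is then discarded.
    LOG.trace("increment: {}", ExceptionUtils.getStackTrace(e));

    // An isTraceEnabled() guard skips the expensive formatting entirely
    // when TRACE is off.
    if (LOG.isTraceEnabled()) {
      LOG.trace("increment: {}", ExceptionUtils.getStackTrace(e));
    }
  }
}
{code}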

> RefCountedDB printing of stacktrace should be moved to trace logging
> 
>
> Key: HDDS-1833
> URL: https://issues.apache.org/jira/browse/HDDS-1833
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1833.01.patch, HDDS-1833.02.patch, 
> HDDS-1833.03.patch
>
>
> RefCountedDB logs the stackTrace for both increment and decrement, this 
> pollutes the logs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14425) Native build fails on macos due to jlong in hdfs.c

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894173#comment-16894173
 ] 

Wei-Chiu Chuang commented on HDFS-14425:


I am on macOS Mojave 10.14.6, but I don't hit the issue (tested branch-3.2 and 
trunk; 3.2.0 fails due to YARN-9487).

{noformat}
$ gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr 
--with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
{noformat}

+1. I think it doesn't hurt to add this fix; at the very least it doesn't 
break the build on my local machine.

> Native build fails on macos due to jlong in hdfs.c
> --
>
> Key: HDFS-14425
> URL: https://issues.apache.org/jira/browse/HDFS-14425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>  Labels: mac
>
> [WARNING] 
> /Users/xx/tmp/idea/hadoop-3.2.0-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c:3033:49: 
> warning: incompatible pointer types passing 'tOffset *' (aka 'long long *') 
> to parameter of type 'jlong *' (aka 'long *') 
> [-Wincompatible-pointer-types]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13321) Inadequate information for handling catch clauses

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894171#comment-16894171
 ] 

Hadoop QA commented on HDFS-13321:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-mapreduce-client-core in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 27s{color} | {color:orange} root: The patch generated 2 new + 47 unchanged - 
0 fixed = 49 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}105m 
41s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
4s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF 

[jira] [Updated] (HDFS-14425) Native build fails on macos due to jlong in hdfs.c

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14425:
---
Labels: mac  (was: )

> Native build fails on macos due to jlong in hdfs.c
> --
>
> Key: HDFS-14425
> URL: https://issues.apache.org/jira/browse/HDFS-14425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>  Labels: mac
>
> [WARNING] 
> /Users/xx/tmp/idea/hadoop-3.2.0-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c:3033:49: 
> warning: incompatible pointer types passing 'tOffset *' (aka 'long long *') 
> to parameter of type 'jlong *' (aka 'long *') 
> [-Wincompatible-pointer-types]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894159#comment-16894159
 ] 

Íñigo Goiri commented on HDFS-14672:


The patch looks good to me.
It's just wrapping the exception and fixing the log messages.
Not many issues with branch-2 vs trunk in this case.
Everything matches.
+1 on [^HDFS-12703.branch-2.001.patch].
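
For readers following along, a hedged sketch of the general pattern described 
above (hypothetical Monitor class, not the actual patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MonitorSketch implements Runnable {
  private static final Logger LOG =
      LoggerFactory.getLogger(MonitorSketch.class);

  @Override
  public void run() {
    try {
      check();
    } catch (Exception e) {
      // A ScheduledExecutorService silently stops rescheduling a task that
      // throws, so catch and log instead of letting the exception escape.
      LOG.warn("Monitor caught an unexpected exception", e);
    }
  }

  private void check() {
    // placeholder for the periodic decommission scan
  }
}
{code}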


> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch
>
>
> Currently, the fix for `decommission monitor exception cause namenode fatal` 
> (HDFS-12703) is only in trunk (branch-3). This JIRA aims to backport that 
> bugfix to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14425) Native build fails on macos due to jlong in hdfs.c

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14425:
--

Assignee: hunshenshi

> Native build fails on macos due to jlong in hdfs.c
> --
>
> Key: HDFS-14425
> URL: https://issues.apache.org/jira/browse/HDFS-14425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>
> [WARNING] 
> /Users/xx/tmp/idea/hadoop-3.2.0-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c:3033:49: 
> warning: incompatible pointer types passing 'tOffset *' (aka 'long long *') 
> to parameter of type 'jlong *' (aka 'long *') 
> [-Wincompatible-pointer-types]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1856:
-
Description: 
In this Jira, the following things will be implemented:
 # Make the necessary changes for the non-HA code path to use Cache and 
DoubleBuffer.

 ## When adding to the double buffer, return a future. This future will be 
used in the non-HA path to wait for completion; when it completes, the 
response is returned to the client.
 ## Adding to the double buffer will happen inside validateAndUpdateCache. 
This way, in non-HA, when multiple RPC handler threads call preExecute and 
validateAndUpdateCache, entries are inserted into the double buffer in the 
order the requests are received.

 

In this Jira, we shall not convert the non-HA code path to use this, as the 
security and ACL work needed for this new model is not yet complete.

 

 

  was:
In this Jira, the following things will be implemented:
 # Make the necessary changes for the non-HA code path to use Cache and 
DoubleBuffer.

 ## When adding to the double buffer, return a future. This future will be 
used in non-HA path to wait for completion; when it completes, the response 
is returned to the client.
 ## Adding to the double buffer will happen inside validateAndUpdateCache. 
This way, in non-HA, when multiple rpc handler threads call preExecute and 
validateAndUpdateCache, entries are inserted into the double buffer in the 
order the requests are received.

 

In this Jira, we shall not convert the non-HA code path to use this, as the 
security and ACL work needed for this new model is not yet complete.

 

 


> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be 
> used in the non-HA path to wait for completion; when it completes, the 
> response is returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. 
> This way, in non-HA, when multiple RPC handler threads call preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in the 
> order the requests are received.
>  
> In this Jira, we shall not convert the non-HA code path to use this, as the 
> security and ACL work needed for this new model is not yet complete.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1856:
-
Summary: Make changes required for Non-HA to use new HA code in OM.  (was: 
Merge HA and Non-HA code in OM)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be 
> used in non-HA path to wait for completion; when it completes, the response 
> is returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. 
> This way, in non-HA, when multiple rpc handler threads call preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in the 
> order the requests are received.
>  
> In this Jira, we shall not convert the non-HA code path to use this, as the 
> security and ACL work needed for this new model is not yet complete.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1856) Merge HA and Non-HA code in OM

2019-07-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1856:
-
Description: 
In this Jira, the following things will be implemented:
 # Make the necessary changes for the non-HA code path to use Cache and 
DoubleBuffer.

 ## When adding to the double buffer, return a future. This future will be 
used in non-HA path to wait for completion; when it completes, the response 
is returned to the client.
 ## Adding to the double buffer will happen inside validateAndUpdateCache. 
This way, in non-HA, when multiple rpc handler threads call preExecute and 
validateAndUpdateCache, entries are inserted into the double buffer in the 
order the requests are received.

 

In this Jira, we shall not convert the non-HA code path to use this, as the 
security and ACL work needed for this new model is not yet complete.

 

 

  was:
In this Jira, the following things will be implemented:
 # Make the non-HA code path use Cache and DoubleBuffer.
 # Use the OMClientRequest/OMClientResponse classes implemented as part of HA 
in the non-HA code path.

 

Removal of the old code will not be done in this Jira; it will be done in 
follow-up Jiras.

 


> Merge HA and Non-HA code in OM
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be 
> used in non-HA path to wait for completion; when it completes, the response 
> is returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. 
> This way, in non-HA, when multiple rpc handler threads call preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in the 
> order the requests are received.
>  
> In this Jira, we shall not convert the non-HA code path to use this, as the 
> security and ACL work needed for this new model is not yet complete.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Merge HA and Non-HA code in OM

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=283630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283630
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 26/Jul/19 21:31
Start Date: 26/Jul/19 21:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1166: 
HDDS-1856. Merge HA and Non-HA code in OM.
URL: https://github.com/apache/hadoop/pull/1166
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283630)
Time Spent: 1h  (was: 50m)

> Merge HA and Non-HA code in OM
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, the following things will be implemented:
>  # Make the non-HA code path use Cache and DoubleBuffer.
>  # Use the OMClientRequest/OMClientResponse classes implemented as part of 
> HA in the non-HA code path.
>  
> Removal of the old code will not be done in this Jira; it will be done in 
> follow-up Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=283629&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283629
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 26/Jul/19 21:27
Start Date: 26/Jul/19 21:27
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1164: HDDS-1829 On 
OM reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#discussion_r307915136
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
 ##
 @@ -183,4 +183,14 @@ public String getName() throws IOException {
   public void close() throws Exception {
 // Nothing do for a Column Family.
   }
+
+  @Override
+  public long getEstimatedKeyCount() throws IOException {
+try {
+  return db.getLongProperty(handle, "rocksdb.estimate-num-keys");
+} catch (RocksDBException e) {
+  throw new IOException(
+  "Failed to get estimated key count of table.");
 
 Review comment:
   Though I'm not sure how the exception would affect Ratis, I found there are 
a bunch of methods in this class that are converting RocksDBException into 
IOException as well, like `RDBTable#get`, `RDBTable#delete`, `RDBTable#getName`.
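
For reference, a minimal sketch of the wrapping pattern discussed here, 
additionally passing the RocksDBException along as the cause (the surrounding 
class is an assumption for illustration):

{code:java}
import java.io.IOException;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

class EstimatedKeyCountSketch {
  private final RocksDB db;
  private final ColumnFamilyHandle handle;

  EstimatedKeyCountSketch(RocksDB db, ColumnFamilyHandle handle) {
    this.db = db;
    this.handle = handle;
  }

  // Same wrapping as the quoted RDBTable method, but keeping the original
  // exception as the cause so the RocksDB error details are not lost.
  long getEstimatedKeyCount() throws IOException {
    try {
      return db.getLongProperty(handle, "rocksdb.estimate-num-keys");
    } catch (RocksDBException e) {
      throw new IOException("Failed to get estimated key count of table.", e);
    }
  }
}
{code}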
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283629)
Time Spent: 2h  (was: 1h 50m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> When OM is restarted or its state is reloaded, OM metrics are re-initialized. 
> The saved numKeys value might not be valid, as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.
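
A hedged sketch of the idea: recompute numKeys from the key table when metrics 
are re-initialized (the OmMetrics and Table shapes below are assumptions; 
getEstimatedKeyCount() follows the API quoted in the review comments above):

{code:java}
import java.io.IOException;

class NumKeysResetSketch {
  interface Table {
    long getEstimatedKeyCount() throws IOException;
  }

  interface OmMetrics {
    void setNumKeys(long value);
  }

  // On restart/reload, overwrite the stale saved numKeys with a fresh
  // estimate from the key table.
  static void resetNumKeys(OmMetrics metrics, Table keyTable)
      throws IOException {
    metrics.setNumKeys(keyTable.getEstimatedKeyCount());
  }
}
{code}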



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1816) ContainerStateMachine should limit number of pending apply transactions

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1816?focusedWorklogId=283627&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283627
 ]

ASF GitHub Bot logged work on HDDS-1816:


Author: ASF GitHub Bot
Created on: 26/Jul/19 21:23
Start Date: 26/Jul/19 21:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1150: HDDS-1816: 
ContainerStateMachine should limit number of pending apply transactions
URL: https://github.com/apache/hadoop/pull/1150#issuecomment-515603777
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 63 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 648 | trunk passed |
   | +1 | compile | 376 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 431 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 627 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 594 | the patch passed |
   | +1 | compile | 383 | the patch passed |
   | +1 | javac | 383 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 651 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 664 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 378 | hadoop-hdds in the patch failed. |
   | -1 | unit | 323 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6195 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1150 |
   | JIRA Issue | HDDS-1816 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 482d16b8f3c6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62efb63 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/testReport/ |
   | Max. process+thread count | 1352 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283627)
Time Spent: 1.5h  (was: 1h 20m)

> ContainerStateMachine should limit number of pending apply transactions
> ---
>
> Key: HDDS-1816
> URL: https://issues.apache.org/jira/browse/HDDS-1816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> 
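
A hedged sketch of the bounding idea in the title, using a semaphore for 
back-pressure (names are assumptions, not the actual ContainerStateMachine 
change):

{code:java}
import java.util.concurrent.Semaphore;

class ApplyTransactionLimiter {
  private final Semaphore pending;

  ApplyTransactionLimiter(int maxPendingApplies) {
    this.pending = new Semaphore(maxPendingApplies);
  }

  // Block the caller once too many applies are queued, so the state machine
  // stops accepting new apply transactions until earlier ones complete.
  void beforeApply() throws InterruptedException {
    pending.acquire();
  }

  // Release when an applyTransaction future completes.
  void afterApply() {
    pending.release();
  }
}
{code}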

[jira] [Commented] (HDDS-1816) ContainerStateMachine should limit number of pending apply transactions

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894137#comment-16894137
 ] 

Hadoop QA commented on HDDS-1816:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  7m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 18s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 23s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1150/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1150 |
| JIRA Issue | HDDS-1816 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvns

[jira] [Commented] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894132#comment-16894132
 ] 

Erik Krogen commented on HDFS-14672:


[~elgoiri] Would you be interested in helping to review this branch-2 
backport, since you did the trunk review? I'm happy to do the work of getting 
it into the branches. Let me know if you don't have time, and I'll try to 
understand the patch next week.

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch
>
>
> Currently, the fix for `decommission monitor exception cause namenode fatal` 
> (HDFS-12703) is only in trunk (branch-3). This JIRA aims to backport that 
> bugfix to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14449) Expose total number of dt in jmx for Namenode

2019-07-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894126#comment-16894126
 ] 

Íñigo Goiri commented on HDFS-14449:


Sorry for the delay.
A small typo: "memroy".
Can we do a static import for assertEquals too?
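
For reference, the suggestion amounts to the standard JUnit 4 static import (a 
sketch, not the patch itself):

{code:java}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class StaticImportExample {
  @Test
  public void readsWithoutClassPrefix() {
    // With the static import, no Assert. prefix is needed.
    assertEquals(4, 2 + 2);
  }
}
{code}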

> Expose total number of dt in jmx for Namenode
> -
>
> Key: HDFS-14449
> URL: https://issues.apache.org/jira/browse/HDFS-14449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14449.001.patch, HDFS-14449.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=283613&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283613
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 26/Jul/19 20:47
Start Date: 26/Jul/19 20:47
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1164: HDDS-1829 On 
OM reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#discussion_r307904383
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
 ##
 @@ -183,4 +183,14 @@ public String getName() throws IOException {
   public void close() throws Exception {
 // Nothing do for a Column Family.
   }
+
+  @Override
+  public long getEstimatedKeyCount() throws IOException {
+try {
+  return db.getLongProperty(handle, "rocksdb.estimate-num-keys");
+} catch (RocksDBException e) {
+  throw new IOException(
+  "Failed to get estimated key count of the table.");
 
 Review comment:
   Sure! Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283613)
Time Spent: 1h 50m  (was: 1h 40m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When OM is restarted or its state is reloaded, OM metrics are re-initialized. 
> The saved numKeys value might not be valid, as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=283612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283612
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 26/Jul/19 20:47
Start Date: 26/Jul/19 20:47
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1164: HDDS-1829 On 
OM reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#discussion_r307904331
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -205,6 +205,16 @@ public String getName() throws IOException {
 return rawTable.getName();
   }
 
+  @Override
+  public long getEstimatedKeyCount() throws IOException {
+if (rawTable instanceof RDBTable) {
+  return rawTable.getEstimatedKeyCount();
+}
+throw new IllegalArgumentException(
+"Unsupported operation getEstimatedKeyCount() on table type " +
+rawTable.getClass().getCanonicalName());
 
 Review comment:
   Thanks for that. I've pushed a commit for this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283612)
Time Spent: 1h 40m  (was: 1.5h)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=283606&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283606
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 26/Jul/19 20:39
Start Date: 26/Jul/19 20:39
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1164: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#issuecomment-515592360
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283606)
Time Spent: 1.5h  (was: 1h 20m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1864) Turn on topology aware read in TestFailureHandlingByClient

2019-07-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894099#comment-16894099
 ] 

Hudson commented on HDDS-1864:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16993 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16993/])
HDDS-1864. Turn on topology aware read in TestFailureHandlingByClient. 
(31469764+bshashikant: rev c01e137273fe531b124c390fadb4c8b39b7fe65b)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java


> Turn on topology aware read in TestFailureHandlingByClient
> --
>
> Key: HDDS-1864
> URL: https://issues.apache.org/jira/browse/HDDS-1864
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1839) Change topology sorting related logs in Pipeline from INFO to DEBUG

2019-07-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894098#comment-16894098
 ] 

Hudson commented on HDDS-1839:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16993 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16993/])
HDDS-1839: Change topology sorting related logs in Pipeline from INFO to (xyao: 
rev c7c7a889a88dc37931c15286ca99ca41b08a3d36)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java


> Change topology sorting related logs in Pipeline from INFO to DEBUG
> ---
>
> Key: HDDS-1839
> URL: https://issues.apache.org/jira/browse/HDDS-1839
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This will avoid output like 
> {code}
> 2019-07-19 22:36:40 INFO  Pipeline:342 - Serialize nodesInOrder 
> [610d4084-7cce-4691-b43a-f9dd5cdb8809\{ip: 192.168.144.3, host: 
> ozonesecure-mr_datanode_1.ozonesecure-mr_default, networkLocation: 
> /default-rack, certSerialId: null}] in pipeline 
> PipelineID=f9ba269c-aba9-4a42-946c-4048d02cb7d1
> 2019-07-19 22:36:40 INFO  Pipeline:342 - Deserialize nodesInOrder 
> [610d4084-7cce-4691-b43a-f9dd5cdb8809\{ip: 192.168.144.3, host: 
> ozonesecure-mr_datanode_1.ozonesecure-mr_default, networkLocation: 
> /default-rack, certSerialId: null}] in pipeline 
> PipelineID=f9ba269c-aba9-4a42-946c-4048d02cb7d1
> {code}
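
The fix amounts to demoting these per-call messages; a minimal sketch of the shape of the change (variable names are assumptions, not the actual Pipeline.java):

{code}
// Before: logged unconditionally at INFO on every serialize/deserialize.
// After: only emitted when DEBUG is enabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("Serialize nodesInOrder {} in pipeline {}", nodesInOrder, id);
}
{code}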



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14673) The console log is noisy when using DNSDomainNameResolver to resolve NameNode.

2019-07-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894100#comment-16894100
 ] 

Hudson commented on HDFS-14673:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16993 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16993/])
HDFS-14673. The console log is noisy when using DNSDomainNameResolver to 
(elgoiri: rev ecc8acfd242ab933d2bd616fffacacca9011a6b1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java


> The console log is noisy when using DNSDomainNameResolver to resolve NameNode.
> --
>
> Key: HDFS-14673
> URL: https://issues.apache.org/jira/browse/HDFS-14673
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
>
> The following log is displayed in every hdfs command when using 
> DNSDomainNameResolver.
> {noformat}
> -bash-4.2$ hadoop fs -ls /
> 19/07/25 14:32:28 INFO ha.AbstractNNFailoverProxyProvider: Namenode domain 
> name will be resolved with org.apache.hadoop.net.DNSDomainNameResolver
> (snip)
> {noformat}
> Can we change the log level from info to debug?
> This issue is originally reported by [~tasanuma].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894080#comment-16894080
 ] 

Wei-Chiu Chuang commented on HDFS-14034:


Triggered a precommit rebuild for v4

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 
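
Assuming the op name mirrors the existing content-summary call (the exact name is defined by the patch), the REST shape would be something like:

{noformat}
# Hypothetical call shape, modeled on op=GETCONTENTSUMMARY:
curl -i "http://<namenode>:9870/webhdfs/v1/<path>?op=GETQUOTAUSAGE"
{noformat}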



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-07-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894078#comment-16894078
 ] 

Wei-Chiu Chuang commented on HDFS-14034:


I am not seeing any issues, so I am +1. Thanks [~csun] and [~xkrogen].

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?focusedWorklogId=283571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283571
 ]

ASF GitHub Bot logged work on HDDS-1834:


Author: ASF GitHub Bot
Created on: 26/Jul/19 19:39
Start Date: 26/Jul/19 19:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1171: HDDS-1834. 
parent directories not found in secure setup due to ACL check
URL: https://github.com/apache/hadoop/pull/1171#issuecomment-515575195
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 95 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 624 | trunk passed |
   | +1 | compile | 381 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 954 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 450 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 667 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 574 | the patch passed |
   | +1 | compile | 428 | the patch passed |
   | +1 | javac | 428 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 201 | the patch passed |
   | +1 | findbugs | 737 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 349 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2374 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8691 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1171/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1171 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 53051023b9d1 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c7c7a88 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1171/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1171/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1171/1/testReport/ |
   | Max. process+thread count | 5366 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1171/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283571)
Time Spent: 40m  (was: 0.5h)

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/b

[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283559&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283559
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 19:28
Start Date: 26/Jul/19 19:28
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1154: 
[HDDS-1200] Add support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307881322
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
 ##
 @@ -220,43 +229,66 @@ private void checkBlockDB() throws IOException {
       throw new IOException(dbFileErrorMsg);
     }
 
-
     onDiskContainerData.setDbFile(dbFile);
     try(ReferenceCountedDB db =
-        BlockUtils.getDB(onDiskContainerData, checkConfig)) {
-      iterateBlockDB(db);
-    }
-  }
+        BlockUtils.getDB(onDiskContainerData, checkConfig);
+        KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
+            new File(onDiskContainerData.getContainerPath()))) {
 
-  private void iterateBlockDB(ReferenceCountedDB db)
-      throws IOException {
-    Preconditions.checkState(db != null);
-
-    // get "normal" keys from the Block DB
-    try(KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
-        new File(onDiskContainerData.getContainerPath()))) {
-
-      // ensure there is a chunk file for each key in the DB
-      while (kvIter.hasNext()) {
+      while(kvIter.hasNext()) {
         BlockData block = kvIter.nextBlock();
-
-        List<ContainerProtos.ChunkInfo> chunkInfoList = block.getChunks();
-        for (ContainerProtos.ChunkInfo chunk : chunkInfoList) {
-          File chunkFile;
-          chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
+        for(ContainerProtos.ChunkInfo chunk : block.getChunks()) {
+          File chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
               ChunkInfo.getFromProtoBuf(chunk));
-
           if (!chunkFile.exists()) {
             // concurrent mutation in Block DB? lookup the block again.
             byte[] bdata = db.getStore().get(
                 Longs.toByteArray(block.getBlockID().getLocalID()));
-            if (bdata == null) {
-              LOG.trace("concurrency with delete, ignoring deleted block");
-              break; // skip to next block from kvIter
-            } else {
-              String errorStr = "Missing chunk file "
-                  + chunkFile.getAbsolutePath();
-              throw new IOException(errorStr);
+            if (bdata != null) {
+              throw new IOException("Missing chunk file "
+                  + chunkFile.getAbsolutePath());
+            }
+          } else if (chunk.getChecksumData().getType()
+              != ContainerProtos.ChecksumType.NONE) {
 
 Review comment:
   makes sense.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283559)
Time Spent: 3h 10m  (was: 3h)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.
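
A minimal sketch of what per-chunk verification involves, assuming Ozone's Checksum/ChecksumData API (Checksum(type, bytesPerChecksum), computeChecksum(byte[])); this is illustrative, not the scrubber code itself:

{code}
// Recompute the checksum over the on-disk chunk bytes and compare it
// with the ChecksumData recorded in the block's chunk metadata.
private static void verifyChunk(File chunkFile, ChecksumData stored)
    throws IOException {
  byte[] data = Files.readAllBytes(chunkFile.toPath());
  Checksum checksum = new Checksum(stored.getChecksumType(),
      stored.getBytesPerChecksum());
  if (!checksum.computeChecksum(data).equals(stored)) {
    throw new IOException(
        "Checksum mismatch in " + chunkFile.getAbsolutePath());
  }
}
{code}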



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13734) Add Heapsize variables for HDFS daemons

2019-07-26 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894069#comment-16894069
 ] 

Eric Yang commented on HDFS-13734:
--

[~bdscheller] Sorry, I agree with [~aw].  HDFS_*_OPTS is preferred for a number 
of reasons: it also covers the -Xms setting, GC policy flags, etc.  Exposing 
only an -Xmx knob without tuning the other flags may create problems for novice 
users and complicates config management.  The YARN_*_HEAPSIZE variables are not 
good examples to follow.
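
For example, the recommended route keeps heap and GC settings together in hadoop-env.sh through the existing per-daemon _OPTS hooks (values below are illustrative only):

{noformat}
export HDFS_NAMENODE_OPTS="-Xms16g -Xmx16g -XX:+UseG1GC"
export HDFS_DATANODE_OPTS="-Xms4g -Xmx4g"
{noformat}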

> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set each HDFS daemon's heap size 
> separately. While this is still possible by adding -Xmx to 
> HDFS_*DAEMON*_OPTS, it is not intuitive for such a relatively common setting.
> YARN already supports separate YARN_*DAEMON*_HEAPSIZE variables, so it seems 
> natural for HDFS too.
> It also looks like HDFS used to have this for the NameNode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE.
> This JIRA is to have these configurations added/supported.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1833) RefCountedDB printing of stacktrace should be moved to trace logging

2019-07-26 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894061#comment-16894061
 ] 

Siddharth Wagle edited comment on HDDS-1833 at 7/26/19 7:01 PM:


Verified that arguments are lazily evaluated after level check, so the if would 
not be needed. org.slf4j.impl.Log4jLoggerAdapter#trace(java.lang.String, 
java.lang.Object...)


was (Author: swagle):
Verified that arguments are lazily evaluated so the if would not be needed. 
org.slf4j.impl.Log4jLoggerAdapter#trace(java.lang.String, java.lang.Object...)
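
For reference, a minimal sketch of the pattern in question (the argument names are illustrative): slf4j only formats the message after trace() does its own level check, so the wrapping if adds nothing when the arguments are cheap to supply.

{code}
// No explicit guard needed; formatting happens only if TRACE is enabled.
LOG.trace("Increment reference count for db {}: {}", dbPath, refCount);

// The guarded form below is therefore redundant for cheap arguments:
if (LOG.isTraceEnabled()) {
  LOG.trace("Increment reference count for db {}: {}", dbPath, refCount);
}
{code}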

> RefCountedDB printing of stacktrace should be moved to trace logging
> 
>
> Key: HDDS-1833
> URL: https://issues.apache.org/jira/browse/HDDS-1833
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1833.01.patch, HDDS-1833.02.patch, 
> HDDS-1833.03.patch
>
>
> RefCountedDB logs the stackTrace for both increment and decrement, this 
> pollutes the logs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1833) RefCountedDB printing of stacktrace should be moved to trace logging

2019-07-26 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894061#comment-16894061
 ] 

Siddharth Wagle commented on HDDS-1833:
---

Verified that arguments are lazily evaluated so the if would not be needed. 
org.slf4j.impl.Log4jLoggerAdapter#trace(java.lang.String, java.lang.Object...)

> RefCountedDB printing of stacktrace should be moved to trace logging
> 
>
> Key: HDDS-1833
> URL: https://issues.apache.org/jira/browse/HDDS-1833
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1833.01.patch, HDDS-1833.02.patch, 
> HDDS-1833.03.patch
>
>
> RefCountedDB logs the stackTrace for both increment and decrement, this 
> pollutes the logs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2019-07-26 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14135:
---
Fix Version/s: (was: 3.1.3)
   (was: 3.2.1)

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch, 
> HDFS-14135-03.patch, HDFS-14135-04.patch, HDFS-14135-05.patch, 
> HDFS-14135-06.patch, HDFS-14135-07.patch, HDFS-14135-08.patch, 
> HDFS-14135.009.patch, HDFS-14135.010.patch, HDFS-14135.011.patch, 
> HDFS-14135.012.patch, HDFS-14135.013.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2019-07-26 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894049#comment-16894049
 ] 

Erik Krogen commented on HDFS-14135:


Actually, this broke all branches besides trunk: {{AssumptionViolatedException}} 
was used, but it is only present in [JUnit 
4.12|https://junit.org/junit4/javadoc/4.12/org/junit/AssumptionViolatedException.html]
 and the older branches are on 4.11.  I reverted it from all branches besides 
trunk (branch-3.2 and branch-3.1).
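
The compile-level incompatibility looks like this (a minimal sketch, not the actual test code; the message string is illustrative):

{code}
// Resolves only on JUnit 4.12+, where the public class exists.
// JUnit 4.11 (what branch-3.2 / branch-3.1 shipped) only has
// org.junit.internal.AssumptionViolatedException, so this import
// fails to compile there.
import org.junit.AssumptionViolatedException;

// Illustrative use: skip (rather than fail) a test whose precondition
// could not be provoked.
throw new AssumptionViolatedException("unable to provoke the timeout");
{code}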

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch, 
> HDFS-14135-03.patch, HDFS-14135-04.patch, HDFS-14135-05.patch, 
> HDFS-14135-06.patch, HDFS-14135-07.patch, HDFS-14135-08.patch, 
> HDFS-14135.009.patch, HDFS-14135.010.patch, HDFS-14135.011.patch, 
> HDFS-14135.012.patch, HDFS-14135.013.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1786) Datanodes takeSnapshot should delete previously created snapshots

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1786?focusedWorklogId=283549&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283549
 ]

ASF GitHub Bot logged work on HDDS-1786:


Author: ASF GitHub Bot
Created on: 26/Jul/19 18:32
Start Date: 26/Jul/19 18:32
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1163: HDDS-1786 
: Datanodes takeSnapshot should delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#discussion_r307862979
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineInt.java
 ##
 @@ -53,7 +53,7 @@
  * Tests the containerStateMachine failure handling.
  */
 
-public class TestContainerStateMachine {
+public class TestContainerStateMachineInt {
 
 Review comment:
   @bharatviswa504 I changed this because I added a class called 
TestContainerStateMachine in the correct package, so I appended "Int" to this 
class name to mark it as an integration test. Do you feel it is OK to have two 
classes with the same name? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283549)
Time Spent: 2h 20m  (was: 2h 10m)

> Datanodes takeSnapshot should delete previously created snapshots
> -
>
> Key: HDDS-1786
> URL: https://issues.apache.org/jira/browse/HDDS-1786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Right now, after taking a new snapshot, the previous snapshot file is 
> left in the raft log directory. When a new snapshot is taken, the previous 
> snapshots should be deleted.
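
A sketch of the intended behavior under that description (Ratis' SingleFileSnapshotInfo; the deletion step uses commons-io and is illustrative, not the final patch):

{code}
// Remember the previous snapshot before writing the new one...
SingleFileSnapshotInfo lastSnapshot = storage.findLatestSnapshot();
// ... persist the new snapshot to disk here ...
// ...then drop the now-superseded snapshot file.
if (lastSnapshot != null) {
  FileUtils.deleteQuietly(lastSnapshot.getFile().getPath().toFile());
}
{code}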



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1786) Datanodes takeSnapshot should delete previously created snapshots

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1786?focusedWorklogId=283537&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283537
 ]

ASF GitHub Bot logged work on HDDS-1786:


Author: ASF GitHub Bot
Created on: 26/Jul/19 18:25
Start Date: 26/Jul/19 18:25
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1163: HDDS-1786 
: Datanodes takeSnapshot should delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#discussion_r307860073
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -256,6 +259,7 @@ public void persistContainerSet(OutputStream out) throws IOException {
   public long takeSnapshot() throws IOException {
     TermIndex ti = getLastAppliedTermIndex();
     long startTime = Time.monotonicNow();
+    SingleFileSnapshotInfo lastSnapshot = storage.findLatestSnapshot();
 
 Review comment:
   Yes, but this will change.  I am planning to take up @bshashikant's 
suggestion.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283537)
Time Spent: 2h 10m  (was: 2h)

> Datanodes takeSnapshot should delete previously created snapshots
> -
>
> Key: HDDS-1786
> URL: https://issues.apache.org/jira/browse/HDDS-1786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Right now, after taking a new snapshot, the previous snapshot file is 
> left in the raft log directory. When a new snapshot is taken, the previous 
> snapshots should be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=283534&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283534
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 26/Jul/19 18:23
Start Date: 26/Jul/19 18:23
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1033: HDDS-1391 : Add 
ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-515553165
 
 
   The test failures seem unrelated to the patch. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283534)
Time Spent: 4h 40m  (was: 4.5h)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Added an RPC endpoint to serve the set of updates in the OM RocksDB from a 
> given sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks 
> that will keep their aggregate data up to date.
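
Underneath, RocksDB already exposes the needed primitive; a minimal sketch of reading delta updates from a sequence number (the OM RPC wrapper is not shown, and consume() is a hypothetical callback, e.g. feeding Recon):

{code}
void streamUpdatesSince(RocksDB db, long fromSequenceNumber)
    throws RocksDBException {
  try (TransactionLogIterator iter = db.getUpdatesSince(fromSequenceNumber)) {
    while (iter.isValid()) {
      TransactionLogIterator.BatchResult batch = iter.getBatch();
      // Each batch carries its start sequence number and a WriteBatch of
      // key/value updates recorded in the WAL.
      consume(batch.sequenceNumber(), batch.writeBatch());
      iter.next();
    }
  }
}
{code}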



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14673) The console log is noisy when using DNSDomainNameResolver to resolve NameNode.

2019-07-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14673:
--

Assignee: kevin su

> The console log is noisy when using DNSDomainNameResolver to resolve NameNode.
> --
>
> Key: HDFS-14673
> URL: https://issues.apache.org/jira/browse/HDFS-14673
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
>
> The following log is displayed in every hdfs command when using 
> DNSDomainNameResolver.
> {noformat}
> -bash-4.2$ hadoop fs -ls /
> 19/07/25 14:32:28 INFO ha.AbstractNNFailoverProxyProvider: Namenode domain 
> name will be resolved with org.apache.hadoop.net.DNSDomainNameResolver
> (snip)
> {noformat}
> Can we change the log level from info to debug?
> This issue is originally reported by [~tasanuma].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14673) The console log is noisy when using DNSDomainNameResolver to resolve NameNode.

2019-07-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894028#comment-16894028
 ] 

Íñigo Goiri commented on HDFS-14673:


Merged the PR.
I assume [~pingsutw] is the author there.

> The console log is noisy when using DNSDomainNameResolver to resolve NameNode.
> --
>
> Key: HDFS-14673
> URL: https://issues.apache.org/jira/browse/HDFS-14673
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
>
> The following log is displayed in every hdfs command when using 
> DNSDomainNameResolver.
> {noformat}
> -bash-4.2$ hadoop fs -ls /
> 19/07/25 14:32:28 INFO ha.AbstractNNFailoverProxyProvider: Namenode domain 
> name will be resolved with org.apache.hadoop.net.DNSDomainNameResolver
> (snip)
> {noformat}
> Can we change the log level from info to debug?
> This issue is originally reported by [~tasanuma].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14673) The console log is noisy when using DNSDomainNameResolver to resolve NameNode.

2019-07-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14673:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> The console log is noisy when using DNSDomainNameResolver to resolve NameNode.
> --
>
> Key: HDFS-14673
> URL: https://issues.apache.org/jira/browse/HDFS-14673
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
>
> The following log is displayed in every hdfs command when using 
> DNSDomainNameResolver.
> {noformat}
> -bash-4.2$ hadoop fs -ls /
> 19/07/25 14:32:28 INFO ha.AbstractNNFailoverProxyProvider: Namenode domain 
> name will be resolved with org.apache.hadoop.net.DNSDomainNameResolver
> (snip)
> {noformat}
> Can we change the log level from info to debug?
> This issue is originally reported by [~tasanuma].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894022#comment-16894022
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
46s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 6ff9f36636f3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c7c7a88 |
| Default Java | 1.8.0_212 |
| javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Updated] (HDDS-1864) Turn on topology aware read in TestFailureHandlingByClient

2019-07-26 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1864:
--
  Resolution: Fixed
   Fix Version/s: 0.5.0
Target Version/s: 0.5.0
  Status: Resolved  (was: Patch Available)

> Turn on topology aware read in TestFailureHandlingByClient
> --
>
> Key: HDDS-1864
> URL: https://issues.apache.org/jira/browse/HDDS-1864
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1864) Turn on topology aware read in TestFailureHandlingByClient

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1864?focusedWorklogId=283520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283520
 ]

ASF GitHub Bot logged work on HDDS-1864:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:42
Start Date: 26/Jul/19 17:42
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1168: HDDS-1864. Turn 
on topology aware read in TestFailureHandlingByClient.
URL: https://github.com/apache/hadoop/pull/1168#issuecomment-515540250
 
 
   Thanks @ChenSammi for working on this. I have committed this change to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283520)
Time Spent: 1h  (was: 50m)

> Turn on topology aware read in TestFailureHandlingByClient
> --
>
> Key: HDDS-1864
> URL: https://issues.apache.org/jira/browse/HDDS-1864
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1864) Turn on topology aware read in TestFailureHandlingByClient

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1864?focusedWorklogId=283518&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283518
 ]

ASF GitHub Bot logged work on HDDS-1864:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:41
Start Date: 26/Jul/19 17:41
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1168: HDDS-1864. 
Turn on topology aware read in TestFailureHandlingByClient.
URL: https://github.com/apache/hadoop/pull/1168
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283518)
Time Spent: 50m  (was: 40m)

> Turn on topology aware read in TestFailureHandlingByClient
> --
>
> Key: HDDS-1864
> URL: https://issues.apache.org/jira/browse/HDDS-1864
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1864) Turn on topology aware read in TestFailureHandlingByClient

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1864?focusedWorklogId=283516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283516
 ]

ASF GitHub Bot logged work on HDDS-1864:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:41
Start Date: 26/Jul/19 17:41
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1168: HDDS-1864. Turn 
on topology aware read in TestFailureHandlingByClient.
URL: https://github.com/apache/hadoop/pull/1168#issuecomment-515539947
 
 
   +1. LGTM.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283516)
Time Spent: 40m  (was: 0.5h)

> Turn on topology aware read in TestFailureHandlingByClient
> --
>
> Key: HDDS-1864
> URL: https://issues.apache.org/jira/browse/HDDS-1864
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283499
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:28
Start Date: 26/Jul/19 17:28
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307839297
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
 ##
 @@ -68,11 +68,16 @@
   public static final String HDDS_CONTAINERSCRUB_ENABLED =
       "hdds.containerscrub.enabled";
   public static final boolean HDDS_CONTAINERSCRUB_ENABLED_DEFAULT = false;
+
   public static final boolean HDDS_SCM_SAFEMODE_ENABLED_DEFAULT = true;
   public static final String HDDS_SCM_SAFEMODE_MIN_DATANODE =
       "hdds.scm.safemode.min.datanode";
   public static final int HDDS_SCM_SAFEMODE_MIN_DATANODE_DEFAULT = 1;
 
+  public static final String HDDS_CONTAINER_SCANNER_VOLUME_BYTES_PER_SECOND =
+      "hdds.container.scanner.volume.bytes.per.second";
 
 Review comment:
   @swagle the property name is loosely modeled after HDFS, so I think we can 
keep it that way.
   @anuengineer thanks for the info. Let me refactor the logic here to use the 
configuration-based API.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283499)
Time Spent: 2h 50m  (was: 2h 40m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14601) NameNode HA with single DNS record for NameNode discovery prevent running ZKFC

2019-07-26 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894009#comment-16894009
 ] 

Fengnan Li edited comment on HDFS-14601 at 7/26/19 5:32 PM:


[~kkori] I don't think this usage is supported by DNS. DNS is mainly used by 
HDFS/Router clients for accessing the cluster, not for internal service 
discovery for now, since that requires the users (ZKFC here) to be able to 
discover the NN.

I will go through the ZKFC code and come back later.

I feel like I need to get the usage doc written sooner to help the community. 
Internally we have Yarn use DNS to reach the routers, and it works fine. 


was (Author: fengnanli):
[~kkori] I don't think this usage is supported by DNS. DNS was mainly used by 
HDFS/Router clients for accessing the cluster, not an internal service 
discovery for now since that needs the users (zkfc) here to discover NN.

I will go to ZKFC code and come back later.

I feel like I need to have the usage doc written sooner to help the community. 
Internally we make Yarn use DNS to routers and it works fine. 

> NameNode HA with single DNS record for NameNode discovery prevent running ZKFC
> --
>
> Key: HDFS-14601
> URL: https://issues.apache.org/jira/browse/HDFS-14601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kei Kori
>Assignee: Fengnan Li
>Priority: Major
>
> ZKFC does not seem to treat one DNS record for NameNode discovery as multiple 
> NameNodes, so launching ZKFC is blocked on NameNodes that have only one 
> "dfs.ha.namenodes" definition and rely on DNS to resolve multiple NameNodes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283500&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283500
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:29
Start Date: 26/Jul/19 17:29
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307839423
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
 ##
 @@ -68,11 +68,16 @@
   public static final String HDDS_CONTAINERSCRUB_ENABLED =
       "hdds.containerscrub.enabled";
   public static final boolean HDDS_CONTAINERSCRUB_ENABLED_DEFAULT = false;
+
   public static final boolean HDDS_SCM_SAFEMODE_ENABLED_DEFAULT = true;
   public static final String HDDS_SCM_SAFEMODE_MIN_DATANODE =
       "hdds.scm.safemode.min.datanode";
   public static final int HDDS_SCM_SAFEMODE_MIN_DATANODE_DEFAULT = 1;
 
+  public static final String HDDS_CONTAINER_SCANNER_VOLUME_BYTES_PER_SECOND =
+      "hdds.container.scanner.volume.bytes.per.second";
+  public static final long
+      HDDS_CONTAINER_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT = 1048576L;
 
 
 Review comment:
   ok. let me use that.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283500)
Time Spent: 3h  (was: 2h 50m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14099) Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894008#comment-16894008
 ] 

Hadoop QA commented on HDFS-14099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m  
6s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-441/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/441 |
| JIRA Issue | HDFS-14099 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 059dd185f2bb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c7c7a88 |
| Default Java | 1.8.0_212 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-441/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-commo

[jira] [Assigned] (HDFS-14601) NameNode HA with single DNS record for NameNode discovery prevent running ZKFC

2019-07-26 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li reassigned HDFS-14601:
-

Assignee: Fengnan Li

> NameNode HA with single DNS record for NameNode discovery prevent running ZKFC
> --
>
> Key: HDFS-14601
> URL: https://issues.apache.org/jira/browse/HDFS-14601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kei Kori
>Assignee: Fengnan Li
>Priority: Major
>
> ZKFC does not seem to treat a single DNS record for NameNode discovery as multiple 
> NameNodes, so launching ZKFC is blocked on NameNodes that have only one 
> "dfs.ha.namenodes" definition which relies on DNS to resolve multiple NameNodes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14601) NameNode HA with single DNS record for NameNode discovery prevent running ZKFC

2019-07-26 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894009#comment-16894009
 ] 

Fengnan Li commented on HDFS-14601:
---

[~kkori] I don't think this usage is supported via DNS. DNS is mainly used by 
HDFS/Router clients for accessing the cluster; it is not an internal service 
discovery mechanism for now, since that would require the consumers (ZKFC here) 
to discover the NameNodes.

I will look into the ZKFC code and come back later.

I feel I should get the usage doc written sooner to help the community. 
Internally we have YARN use DNS to reach the Routers, and it works fine. 
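
For context, a single DNS name can resolve to several NameNode addresses; below 
is a minimal sketch of that resolution (the host name is a hypothetical 
placeholder, and this is not ZKFC's actual code):
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class NameNodeDnsResolveExample {
  public static void main(String[] args) throws UnknownHostException {
    // Hypothetical DNS name with one A record per NameNode.
    InetAddress[] nameNodes = InetAddress.getAllByName("nn.example.com");
    for (InetAddress nn : nameNodes) {
      System.out.println("Discovered NameNode: " + nn.getHostAddress());
    }
  }
}
{code}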

> NameNode HA with single DNS record for NameNode discovery prevent running ZKFC
> --
>
> Key: HDFS-14601
> URL: https://issues.apache.org/jira/browse/HDFS-14601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kei Kori
>Priority: Major
>
> ZKFC does not seem to treat a single DNS record for NameNode discovery as multiple 
> NameNodes, so launching ZKFC is blocked on NameNodes that have only one 
> "dfs.ha.namenodes" definition which relies on DNS to resolve multiple NameNodes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?focusedWorklogId=283508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283508
 ]

ASF GitHub Bot logged work on HDDS-1852:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:36
Start Date: 26/Jul/19 17:36
Worklog Time Spent: 10m 
  Work Description: dchitlangia commented on issue #1173: HDDS-1852. Fix 
typo in TestOmAcls
URL: https://github.com/apache/hadoop/pull/1173#issuecomment-515538450
 
 
   @adoroszlai  Thanks for working on this. +1 (non-binding), pending Jenkins.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283508)
Time Spent: 0.5h  (was: 20m)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283498
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:25
Start Date: 26/Jul/19 17:25
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307838235
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScanner.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.ozoneimpl;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.net.ntp.TimeStamp;
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ * This class is responsible for performing metadata verification of the
+ * containers.
+ */
+public class ContainerMetadataScanner extends Thread {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(ContainerMetadataScanner.class);
+
+  private final ContainerController controller;
+  /**
+   * True if the thread is stopping.
+   * Protected by this object's lock.
+   */
+  private boolean stopping = false;
+
+  public ContainerMetadataScanner(ContainerController controller) {
+    this.controller = controller;
+    setName("ContainerMetadataScanner");
+    setDaemon(true);
+  }
+
+  @Override
+  public void run() {
+    /**
+     * the outer daemon loop exits on down()
+     */
+    LOG.info("Background ContainerMetadataScanner starting up");
+    while (!stopping) {
+      scrub();
+      if (!stopping) {
+        try {
+          Thread.sleep(300000); /* 5 min between scans */
 
 Review comment:
   This logic was present in ContainerScrubber.java before this patch. I just 
refactored it. Let me make it configurable.
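
For illustration, a minimal sketch of what making the interval configurable 
could look like; the config key name below is an assumption, not an actual HDDS 
property:
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public final class ScanIntervalSketch {
  static long scanIntervalMs(Configuration conf) {
    // "hdds.container.metadata.scan.interval" is a hypothetical key.
    return conf.getTimeDuration("hdds.container.metadata.scan.interval",
        TimeUnit.MINUTES.toMillis(5), // matches the current 5-minute default
        TimeUnit.MILLISECONDS);
  }
}
{code}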
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283498)
Time Spent: 2h 40m  (was: 2.5h)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.
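
As an aside, here is a hedged sketch of the kind of per-chunk verification the 
scrubber needs, using plain JDK CRC32 rather than Ozone's actual Checksum 
classes:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class ChunkChecksumSketch {
  // Returns true if the chunk file's CRC32 matches the expected value.
  static boolean verify(Path chunkFile, long expectedCrc) throws IOException {
    byte[] data = Files.readAllBytes(chunkFile);
    CRC32 crc = new CRC32();
    crc.update(data, 0, data.length);
    return crc.getValue() == expectedCrc;
  }
}
{code}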



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1804) TestCloseContainerHandlingByClient#estBlockWrites fails intermittently

2019-07-26 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-1804.
-
Resolution: Duplicate

This is fixed via HDDS-1817; resolving this one as a duplicate.

> TestCloseContainerHandlingByClient#estBlockWrites fails intermittently
> --
>
> Key: HDDS-1804
> URL: https://issues.apache.org/jira/browse/HDDS-1804
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> The test fails intermittently as reported here:
> [https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/org.apache.hadoop.ozone.client.rpc/TestCloseContainerHandlingByClient/testBlockWrites/]
> {code:java}
> java.lang.IllegalArgumentException
>   at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>   at org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:150)
>   at org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:143)
>   at org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:154)
>   at org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:118)
>   at org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:222)
>   at org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
>   at org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
>   at java.io.InputStream.read(InputStream.java:101)
>   at org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:709)
>   at org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.validateData(TestCloseContainerHandlingByClient.java:401)
>   at org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWrites(TestCloseContainerHandlingByClient.java:471)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h.

[jira] [Commented] (HDDS-1806) Handle writeStateMachine Failures in Ozone

2019-07-26 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894004#comment-16894004
 ] 

Mukul Kumar Singh commented on HDDS-1806:
-

cc [~sdeka]

> Handle writeStateMachine Failures in Ozone
> --
>
> Key: HDDS-1806
> URL: https://issues.apache.org/jira/browse/HDDS-1806
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
>  
> {code:java}
> Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
> Stacktrace
> java.io.IOException: Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:549)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:540)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:615)
>   at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>   at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>   at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
>   at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:537)
>   ... 7 more
> {code}
> The error propagated to the client is misleading. The container creation failed as 
> a result of a disk-full condition, but that failure was never propagated to the 
> client.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1806) Handle writeStateMachine Failures in Ozone

2019-07-26 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1806:

Summary: Handle writeStateMachine Failures in Ozone  (was: 
TestDataValidateWithSafeByteOperations tests are failing)

> Handle writeStateMachine Failures in Ozone
> --
>
> Key: HDDS-1806
> URL: https://issues.apache.org/jira/browse/HDDS-1806
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
>  
> {code:java}
> Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
> Stacktrace
> java.io.IOException: Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:549)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:540)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:615)
>   at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>   at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>   at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
>   at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:537)
>   ... 7 more
> {code}
> The error propagated to the client is misleading. The container creation failed as 
> a result of a disk-full condition, but that failure was never propagated to the 
> client.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893990#comment-16893990
 ] 

He Xiaoqiao edited comment on HDFS-14672 at 7/26/19 5:21 PM:
-

Re-launched Jenkins to verify patch [^HDFS-12703.branch-2.001.patch].
[~xkrogen], I just tested [^HDFS-12703.branch-2.001.patch] based on branch-2 
locally; it builds successfully and works well. Would you mind taking another 
look?


was (Author: hexiaoqiao):
re-launcher Jenkins to verify patch [^HDFS-12703.branch-2.001.patch].

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch
>
>
> Currently, the fix for `decommission monitor exception cause namenode fatal` is 
> only in trunk (branch-3). This JIRA aims to backport that bugfix to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283494
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:18
Start Date: 26/Jul/19 17:18
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307835843
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
 ##
 @@ -120,10 +132,70 @@ public TestKeyValueContainerCheck(String metadataImpl) {
     container.close();
 
     // next run checks on a Closed Container
-    valid = kvCheck.fullCheck();
+    valid = kvCheck.fullCheck(new DataTransferThrottler(
+        HddsConfigKeys.HDDS_CONTAINER_SCANNER_VOLUME_BYTES_PER_SECOND_DEFAULT),
+        null);
     assertTrue(valid);
   }
 
+  /**
+   * Sanity test, when there are corruptions induced.
+   * @throws Exception
+   */
+  @Test
+  public void testKeyValueContainerCheckCorruption() throws Exception {
 
 Review comment:
   sure will do.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283494)
Time Spent: 2.5h  (was: 2h 20m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283493&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283493
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:18
Start Date: 26/Jul/19 17:18
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307835754
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
 ##
 @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.ozoneimpl;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.hadoop.hdfs.util.Canceler;
+import org.apache.hadoop.hdfs.util.DataTransferThrottler;
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * ContainerDataScanner scans a single volume.  Each ContainerDataScanner has
+ * its own thread.
+ */
+public class ContainerDataScanner extends Thread {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(ContainerDataScanner.class);
+
+  /**
+   * The volume that we're scanning.
+   */
+  private final HddsVolume volume;
+  private final ContainerController controller;
+  private final DataTransferThrottler throttler;
+  private final Canceler canceler;
+
+  /**
+   * True if the thread is stopping.
+   * Protected by this object's lock.
+   */
+  private volatile boolean stopping = false;
+
+
+  public ContainerDataScanner(ContainerController controller,
+      HddsVolume volume, long bytesPerSec) {
+    this.controller = controller;
+    this.volume = volume;
+    this.throttler = new DataTransferThrottler(bytesPerSec);
+    this.canceler = new Canceler();
+    setName("ContainerDataScanner(" + volume + ")");
+    setDaemon(true);
+  }
+
+  @Override
+  public void run() {
+    LOG.trace("{}: thread starting.", this);
+    try {
+      while (!stopping) {
+        Iterator<Container> itr = controller.getContainers(volume);
+        while (!stopping && itr.hasNext()) {
+          Container c = itr.next();
+          try {
+            if (c.shouldScanData()) {
+              if (!c.scanData(throttler, canceler)) {
+                controller.markContainerUnhealthy(
+                    c.getContainerData().getContainerID());
+              }
+            }
+          } catch (IOException ex) {
+            long containerId = c.getContainerData().getContainerID();
+            LOG.warn("Unexpected exception while scanning container "
+                + containerId, ex);
 
 Review comment:
   Yes, we do mark the container as unhealthy in case of I/O errors. But there 
are some cases where we cannot mark a container as unhealthy (e.g. when the 
rocksdb metadata is deleted or corrupted). In that case we just send an ICR to 
SCM. Here is the relevant code snippet - 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java#L900-L915
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283493)
Time Spent: 2h 20m  (was: 2h 10m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  

[jira] [Created] (HDDS-1869) Turn on Network Topology related configs for MiniOzoneChaosCluster

2019-07-26 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1869:
-

 Summary: Turn on Network Topology related configs for 
MiniOzoneChaosCluster
 Key: HDDS-1869
 URL: https://issues.apache.org/jira/browse/HDDS-1869
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, SCM
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


Currently, the network topology awareness feature is disabled by default in 
Ozone. This JIRA will enable it in the MiniOzoneChaosCluster tests.
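
A minimal sketch of what enabling this in the test setup could look like; the 
configuration key below is an assumed placeholder, not a verified Ozone 
property:
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class ChaosClusterTopologySketch {
  static OzoneConfiguration withTopologyAwareness() {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Placeholder key; the real property name may differ.
    conf.setBoolean("ozone.network.topology.aware.read", true);
    return conf;
  }
}
{code}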



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283488&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283488
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:13
Start Date: 26/Jul/19 17:13
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307833921
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
 ##
 @@ -220,43 +229,66 @@ private void checkBlockDB() throws IOException {
   throw new IOException(dbFileErrorMsg);
 }
 
-
     onDiskContainerData.setDbFile(dbFile);
     try(ReferenceCountedDB db =
-        BlockUtils.getDB(onDiskContainerData, checkConfig)) {
-      iterateBlockDB(db);
-    }
-  }
+        BlockUtils.getDB(onDiskContainerData, checkConfig);
+        KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
+            new File(onDiskContainerData.getContainerPath()))) {
 
-  private void iterateBlockDB(ReferenceCountedDB db)
-      throws IOException {
-    Preconditions.checkState(db != null);
-
-    // get "normal" keys from the Block DB
-    try(KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
-        new File(onDiskContainerData.getContainerPath()))) {
-
-      // ensure there is a chunk file for each key in the DB
-      while (kvIter.hasNext()) {
+      while(kvIter.hasNext()) {
         BlockData block = kvIter.nextBlock();
-
-        List<ContainerProtos.ChunkInfo> chunkInfoList = block.getChunks();
-        for (ContainerProtos.ChunkInfo chunk : chunkInfoList) {
-          File chunkFile;
-          chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
+        for(ContainerProtos.ChunkInfo chunk : block.getChunks()) {
+          File chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
               ChunkInfo.getFromProtoBuf(chunk));
-
           if (!chunkFile.exists()) {
             // concurrent mutation in Block DB? lookup the block again.
             byte[] bdata = db.getStore().get(
                 Longs.toByteArray(block.getBlockID().getLocalID()));
-            if (bdata == null) {
-              LOG.trace("concurrency with delete, ignoring deleted block");
-              break; // skip to next block from kvIter
-            } else {
-              String errorStr = "Missing chunk file "
-                  + chunkFile.getAbsolutePath();
-              throw new IOException(errorStr);
+            if (bdata != null) {
+              throw new IOException("Missing chunk file "
+                  + chunkFile.getAbsolutePath());
+            }
+          } else if (chunk.getChecksumData().getType()
+              != ContainerProtos.ChecksumType.NONE) {
 
 Review comment:
   OK, let me refactor. Regarding the second question - I want to avoid disk I/O 
when we know that we don't have a checksum to verify against. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283488)
Time Spent: 2h 10m  (was: 2h)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1852:

Status: Patch Available  (was: In Progress)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1852:
---

Assignee: Doroszlai, Attila

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1852 started by Doroszlai, Attila.
---
> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?focusedWorklogId=283486&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283486
 ]

ASF GitHub Bot logged work on HDDS-1852:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:09
Start Date: 26/Jul/19 17:09
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1173: HDDS-1852. Fix 
typo in TestOmAcls
URL: https://github.com/apache/hadoop/pull/1173#issuecomment-515530040
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283486)
Time Spent: 20m  (was: 10m)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?focusedWorklogId=283485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283485
 ]

ASF GitHub Bot logged work on HDDS-1852:


Author: ASF GitHub Bot
Created on: 26/Jul/19 17:08
Start Date: 26/Jul/19 17:08
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1173: HDDS-1852. 
Fix typo in TestOmAcls
URL: https://github.com/apache/hadoop/pull/1173
 
 
   ## What changes were proposed in this pull request?
   
   Fix typo, remove unnecessary `throws`.
   
   ## How was this patch tested?
   
   Ran unit test.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283485)
Time Spent: 10m
Remaining Estimate: 0h

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1852:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14672:
---
Status: Patch Available  (was: Open)

Re-launched Jenkins to verify patch [^HDFS-12703.branch-2.001.patch].

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch
>
>
> Currently, the fix for `decommission monitor exception cause namenode fatal` is 
> only in trunk (branch-3). This JIRA aims to backport that bugfix to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-26 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14672:
---
Status: Open  (was: Patch Available)

> Backport HDFS-12703 to branch-2
> ---
>
> Key: HDFS-14672
> URL: https://issues.apache.org/jira/browse/HDFS-14672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12703.branch-2.001.patch
>
>
> Currently, the fix for `decommission monitor exception cause namenode fatal` is 
> only in trunk (branch-3). This JIRA aims to backport that bugfix to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13783) Balancer: make balancer to be a long service process for easy to monitor it.

2019-07-26 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893983#comment-16893983
 ] 

Erik Krogen commented on HDFS-13783:


It looks like {{TestDirectoryScanner}} is already tracked in HDFS-14669.

The new methods added for monitoring look great! I think we can actually 
leverage this to make the test more robust and run faster:
{code}
  Thread.sleep(1);
  assertTrue(Balancer.getExceptionsSinceLastBalance() > 0);
{code}
we should be able to replace this with:
{code}
GenericTestUtils.waitFor(() -> Balancer.getExceptionsSinceLastBalance() > 0, 
1000, 2);
{code}
Also, the newly added fields should probably be {{volatile}} since they may be 
accessed from a thread besides the one that is doing the updating.
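
For illustration, a sketch of the kind of declaration meant here (the field 
name is an assumption based on the getter mentioned above, not the patch's 
actual code):
{code:java}
public class BalancerMetricsSketch {
  // volatile so reads from a monitoring thread see updates made by the
  // balancer thread without extra synchronization.
  private static volatile int exceptionsSinceLastBalance;

  static int getExceptionsSinceLastBalance() {
    return exceptionsSinceLastBalance;
  }
}
{code}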

Other than this, we still need to fix the checkstyle. I usually use the diff 
report produced by Jenkins:
{code}
checkstyle  
https://builds.apache.org/job/PreCommit-HDFS-Build/27297/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
{code}
This shows only the diff and not any existing issues. You can also do this 
yourself using the 
[{{test-patch}}|https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Testingyourpatch]
 script included in the repo.

> Balancer: make balancer to be a long service process for easy to monitor it.
> 
>
> Key: HDFS-13783
> URL: https://issues.apache.org/jira/browse/HDFS-13783
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: maobaolong
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-13783-001.patch, HDFS-13783-002.patch, 
> HDFS-13783.003.patch, HDFS-13783.004.patch, HDFS-13783.005.patch
>
>
> If we have a long-running balancer service process, like the namenode and 
> datanode, we can get metrics from the balancer; the metrics can tell us the 
> status of the balancer and the number of blocks it has moved.
> We could also get or set the balance plan via the balancer web UI. Many things 
> become possible once we have a long-running balancer service process.
> So, shall we start to plan the new Balancer? I hope this feature can make it 
> into the next release of Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14456) HAState#prepareToEnterState needn't a lock

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893978#comment-16893978
 ] 

Hadoop QA commented on HDFS-14456:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-770/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/770 |
| JIRA Issue | HDFS-14456 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux d47cd5de8039 4.4.0-138-generic #164-Ubuntu SMP Tue

[jira] [Work logged] (HDDS-1867) Invalid Prometheus metric name from JvmMetrics

2019-07-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1867?focusedWorklogId=283457&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283457
 ]

ASF GitHub Bot logged work on HDDS-1867:


Author: ASF GitHub Bot
Created on: 26/Jul/19 16:42
Start Date: 26/Jul/19 16:42
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1172: HDDS-1867. Invalid 
Prometheus metric name from JvmMetrics
URL: https://github.com/apache/hadoop/pull/1172#issuecomment-515521940
 
 
   @anuengineer please review
   
   @aajisaka FYI
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 283457)
Time Spent: 40m  (was: 0.5h)

> Invalid Prometheus metric name from JvmMetrics
> --
>
> Key: HDDS-1867
> URL: https://issues.apache.org/jira/browse/HDDS-1867
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> target=http://scm:9876/prom msg="append failed" err="invalid metric type 
> \"_old _generation counter\""
> {noformat}
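
For context, Prometheus metric names must match the pattern 
[a-zA-Z_:][a-zA-Z0-9_:]*, so spaces coming from GC names such as "G1 Old 
Generation" produce invalid names. Below is a hedged sketch of the kind of 
normalization needed (not necessarily the patch's exact fix):
{code:java}
public class PrometheusNameSketch {
  // Replace any character that is illegal in a Prometheus metric name.
  static String normalize(String metricName) {
    return metricName.replaceAll("[^a-zA-Z0-9_:]", "_");
  }

  public static void main(String[] args) {
    // Prints "gc_count_G1_Old_Generation" for an assumed raw name.
    System.out.println(normalize("gc_count_G1 Old Generation"));
  }
}
{code}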



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-07-26 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-1768:

Summary: Audit xxxAcl methods in OzoneManager  (was: Audit permission 
failures from authorizer)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14676) Calls to libhdfs (via pyarrow) hang after a while

2019-07-26 Thread Fred Tzeng (JIRA)
Fred Tzeng created HDFS-14676:
-

 Summary: Calls to libhdfs (via pyarrow) hang after a while
 Key: HDFS-14676
 URL: https://issues.apache.org/jira/browse/HDFS-14676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, libhdfs
Affects Versions: 3.0.3
 Environment: hadoop-3.0.3 
python 3.6 
Centos7
Reporter: Fred Tzeng


I'm using the pyarrow HDFS client in a long-running (forever) app that makes 
connections to HDFS (via libhdfs) as external requests come in and destroys the 
connection as soon as each request is handled. This happens a large number of 
times on separate threads and everything works great.

The problem is that after the app idles for a while (perhaps hours) with no HDFS 
connections made during that time, the next connection attempt hangs. No 
exceptions are thrown. As soon as I restart my Python app, the HDFS connection 
works just fine again.

I'm using the precompiled libhdfs.so directly from the hadoop-3.0.3 
distribution. Do I need to recompile libhdfs.so for my OS, or is the one that 
ships out of the box typically fine?

I checked with the Arrow community first; they recommended I check with the 
Hadoop community, since all the pyarrow client does is pass the commands 
through to libhdfs.

Any suggestions on debugging this hanging issue would be appreciated.

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


