[jira] [Commented] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916398#comment-16916398
 ] 

Hadoop QA commented on HDFS-14779:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 44m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14779 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978628/HDFS-14779.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 001c7bec3c7a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 07e3cf9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27687/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27687/testReport/ |
| Max. process+thread count | 3983 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27687/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was sent by Atlassian Jira
(v8.3.2#803003)

[jira] [Updated] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-26 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14772:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RBF: hdfs-rbf-site.xml can't be loaded automatically
> 
>
> Key: HDFS-14772
> URL: https://issues.apache.org/jira/browse/HDFS-14772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14772.001.patch, HDFS-14772.002.patch, 
> HDFS-14772.003.patch, HDFS-14772.004.patch
>
>
> ISSUE:
> hdfs-rbf-site.xml can't be loaded automatically
> WHY:
> Currently the code is 
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   static {
> Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
>   }
> {code}
> But it will never be executed unless the class is explicitly loaded.
> HOW TO FIX:
> Following the approach of the *HdfsConfiguration* class, add a method
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   public static void init() {
>   }
> {code}
> and call it from another class so the static block runs.
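
Below is a minimal, self-contained sketch of the pattern described in this issue (the class name is simplified and hypothetical): the static initializer only runs when the class is loaded, and an empty init() method gives callers a cheap way to force that load, mirroring HdfsConfiguration.init().

{code:java}
// Hypothetical, simplified sketch of RBFConfigKeys with the proposed init() hook.
import org.apache.hadoop.conf.Configuration;

public class RBFConfigKeysSketch {
  public static final String HDFS_RBF_SITE_XML = "hdfs-rbf-site.xml";

  static {
    // Runs exactly once, when the JVM loads this class.
    Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
  }

  /** Empty on purpose: calling it (e.g. from Router startup) forces class loading. */
  public static void init() {
  }
}
{code}

A caller such as the Router startup path would then invoke {{RBFConfigKeysSketch.init();}} before creating its Configuration objects.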



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-26 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916392#comment-16916392
 ] 

Takanobu Asanuma commented on HDFS-14772:
-

Committed to trunk. Thanks for your contribution, [~John Smith], and thanks for 
your reviews and comments [~surendrasingh] and others!

> RBF: hdfs-rbf-site.xml can't be loaded automatically
> 
>
> Key: HDFS-14772
> URL: https://issues.apache.org/jira/browse/HDFS-14772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14772.001.patch, HDFS-14772.002.patch, 
> HDFS-14772.003.patch, HDFS-14772.004.patch
>
>
> ISSUE:
> hdfs-rbf-site.xml can't be loaded automatically
> WHY:
> Currently the code is 
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   static {
> Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
>   }
> {code}
> But it will never be executed unless the class is explicitly loaded.
> HOW TO FIX:
> Following the approach of the *HdfsConfiguration* class, add a method
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   public static void init() {
>   }
> {code}
> and call it from another class so the static block runs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-08-26 Thread He Xiaoqiao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916380#comment-16916380
 ] 

He Xiaoqiao commented on HDFS-14497:


I verified the failed unit tests and both passed locally. Please take a 
review, [~jojochuang]. Thanks.

> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497-addendum.001.patch, HDFS-14497.001.patch
>
>
> NameNode metaSave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try to 
> acquire the global read/write lock and have to wait until metaSave releases 
> it.
> I propose changing the write lock to a read lock so that read requests can be 
> processed normally. Allowing concurrent reads should not change the 
> information that metaSave is trying to collect.
> We also need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or otherwise share the same output stream.
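
A minimal sketch of the idea described above (hypothetical field and method names, not the actual FSNamesystem code): run metaSave under the read lock so concurrent read RPCs keep flowing, and use an atomic flag so only one metaSave can run at a time and no two callers share the same output stream.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MetaSaveSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);
  private final AtomicBoolean metaSaveRunning = new AtomicBoolean(false);

  void metaSave(String filename) {
    // Only one metaSave at a time; a concurrent caller simply skips the dump.
    if (!metaSaveRunning.compareAndSet(false, true)) {
      return;
    }
    fsLock.readLock().lock();          // read lock: other readers are not blocked
    try {
      dumpNamespaceState(filename);    // placeholder for the actual dump logic
    } finally {
      fsLock.readLock().unlock();
      metaSaveRunning.set(false);
    }
  }

  private void dumpNamespaceState(String filename) {
    // Intentionally empty in this sketch.
  }
}
{code}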



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-08-26 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Attachment: (was: HDFS-14768.001.patch)

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8] and the block counters are Live:7, Decommission:2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the NameNode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using indices [0,1,2,3,4,5] to reconstruct target indices [6,0] triggers the 
> ISA-L bug: the data of block index 6 is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   // 
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   for (int i = 0; i < 100; i++) {
> datanodeDescriptor.incrementPendingReplicationWithoutTargets();
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure decommissioned datanode is not automatically shutdown
>   DFSClient client = getDfsClient(cluster.getNameNode(0), conf);
>   assertEquals("All datanodes must be alive", numDNs,
>   client.datanodeReport(DatanodeReportType.LIVE).length);
>   FileChecksum fileChecksum2 = dfs.getFileChecksum(ecFile, writeBytes);
>   Assert.assertTrue("Checksum mismatches!",
>   fileChecksum1.equals(fileChecksum2));
>   StripedFileTestUtil.checkData(dfs, ecFile, writeBytes, decommisionNodes,
>   null, blockGroupSize);
> }
> {code}
>  
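
To make the failure mode above concrete, here is a tiny standalone sketch (a hypothetical class, simplified from the snippet quoted earlier; the block-length check is omitted) showing why the second entry of targetIndices silently stays 0: Java short[] elements default to 0, and with the inflated liveBlockIndices only one missing internal block (index 6) is found for the two requested targets.

{code:java}
import java.util.Arrays;
import java.util.BitSet;

public class TargetIndicesSketch {
  public static void main(String[] args) {
    int dataBlkNum = 6, parityBlkNum = 3;
    BitSet live = new BitSet();
    for (int i : new int[] {0, 1, 2, 3, 4, 5, 7, 8}) {
      live.set(i);                        // liveBlockIndices reported by the NameNode
    }
    short[] targetIndices = new short[2]; // two reconstruction targets were chosen
    int m = 0;
    for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
      if (!live.get(i) && m < targetIndices.length) {
        targetIndices[m++] = (short) i;   // only index 6 qualifies
      }
    }
    // Prints [6, 0]: the second "target" points at block index 0, matching the
    // corruption described in this issue.
    System.out.println(Arrays.toString(targetIndices));
  }
}
{code}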



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-08-26 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Attachment: (was: HDFS-14768.000.patch)

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: HDFS-14768.001.patch
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8] and the block counters are Live:7, Decommission:2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the NameNode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using indices [0,1,2,3,4,5] to reconstruct target indices [6,0] triggers the 
> ISA-L bug: the data of block index 6 is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   // 
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   for (int i = 0; i < 100; i++) {
> datanodeDescriptor.incrementPendingReplicationWithoutTargets();
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure decommissioned datanode is not automatically shutdown
>   DFSClient client = getDfsClient(cluster.getNameNode(0), conf);
>   assertEquals("All datanodes must be alive", numDNs,
>   client.datanodeReport(DatanodeReportType.LIVE).length);
>   FileChecksum fileChecksum2 = dfs.getFileChecksum(ecFile, writeBytes);
>   Assert.assertTrue("Checksum mismatches!",
>   fileChecksum1.equals(fileChecksum2));
>   StripedFileTestUtil.checkData(dfs, ecFile, writeBytes, decommisionNodes,
>   null, blockGroupSize);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-08-26 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Attachment: HDFS-14768.001.patch

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: HDFS-14768.000.patch, HDFS-14768.001.patch
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8] and the block counters are Live:7, Decommission:2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the NameNode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using indices [0,1,2,3,4,5] to reconstruct target indices [6,0] triggers the 
> ISA-L bug: the data of block index 6 is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   // 
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   for (int i = 0; i < 100; i++) {
> datanodeDescriptor.incrementPendingReplicationWithoutTargets();
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure decommissioned datanode is not automatically shutdown
>   DFSClient client = getDfsClient(cluster.getNameNode(0), conf);
>   assertEquals("All datanodes must be alive", numDNs,
>   client.datanodeReport(DatanodeReportType.LIVE).length);
>   FileChecksum fileChecksum2 = dfs.getFileChecksum(ecFile, writeBytes);
>   Assert.assertTrue("Checksum mismatches!",
>   fileChecksum1.equals(fileChecksum2));
>   StripedFileTestUtil.checkData(dfs, ecFile, writeBytes, decommisionNodes,
>   null, blockGroupSize);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1981) Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread Lokesh Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1981:
--
Fix Version/s: 0.5.0

> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state
> ---
>
> Key: HDDS-1981
> URL: https://issues.apache.org/jira/browse/HDDS-1981
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state. This will ensure that the metadata is persisted.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14710) RBF: Improve some RPC performances

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916369#comment-16916369
 ] 

Hadoop QA commented on HDFS-14710:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 44m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 53s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978634/HDFS-14710-trunk-005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 440358c29190 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d70f523 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27688/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27688/testReport/ |
| Max. process+thread count | 1625 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Work logged] (HDDS-1981) Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1981?focusedWorklogId=301704=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301704
 ]

ASF GitHub Bot logged work on HDDS-1981:


Author: ASF GitHub Bot
Created on: 27/Aug/19 04:53
Start Date: 27/Aug/19 04:53
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on issue #1319: HDDS-1981: 
Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#issuecomment-525136428
 
 
   @bshashikant @supratimdeka @nandakumar131  Thanks for reviewing the PR. I 
have merged it with trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301704)
Time Spent: 3h  (was: 2h 50m)

> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state
> ---
>
> Key: HDDS-1981
> URL: https://issues.apache.org/jira/browse/HDDS-1981
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state. This will ensure that the metadata is persisted.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1981) Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread Lokesh Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1981:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state
> ---
>
> Key: HDDS-1981
> URL: https://issues.apache.org/jira/browse/HDDS-1981
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state. This will ensure that the metadata is persisted.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1981) Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1981?focusedWorklogId=301703=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301703
 ]

ASF GitHub Bot logged work on HDDS-1981:


Author: ASF GitHub Bot
Created on: 27/Aug/19 04:52
Start Date: 27/Aug/19 04:52
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1319: HDDS-1981: 
Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301703)
Time Spent: 2h 50m  (was: 2h 40m)

> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state
> ---
>
> Key: HDDS-1981
> URL: https://issues.apache.org/jira/browse/HDDS-1981
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state. This will ensure that the metadata is persisted.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=301698=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301698
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 27/Aug/19 04:10
Start Date: 27/Aug/19 04:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525128821
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 24 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 499 | Maven dependency ordering for branch |
   | +1 | mvninstall | 946 | trunk passed |
   | +1 | compile | 471 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 953 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 665 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for patch |
   | +1 | mvninstall | 575 | the patch passed |
   | +1 | compile | 402 | the patch passed |
   | +1 | javac | 402 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | +1 | findbugs | 684 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2113 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 9390 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 8c8f60c8bcd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 07e3cf9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/testReport/ |
   | Max. process+thread count | 5321 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301698)
Time Spent: 8h  (was: 7h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>

[jira] [Commented] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916325#comment-16916325
 ] 

Hadoop QA commented on HDFS-14772:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 43s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14772 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978631/HDFS-14772.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a5d2d36c676 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-14771) Backport HDFS-14617 to branch-2 (Improve fsimage load time by writing sub-sections to the fsimage index)

2019-08-26 Thread He Xiaoqiao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916314#comment-16916314
 ] 

He Xiaoqiao commented on HDFS-14771:


Thanks [~kihwal] for your response. HDFS-14617 discussed the compatibility of 
these changes for trunk. I am verifying the compatibility for branch-2 and 
will attach the result when finished.
{quote}Is layout version being changed?{quote}
The layout version is not changed in the current demo patch. I agree to change 
it from version 1 to version 2, and I believe we should update trunk as well. 
cc [~sodonnell], [~jojochuang].

> Backport HDFS-14617 to branch-2 (Improve fsimage load time by writing 
> sub-sections to the fsimage index)
> 
>
> Key: HDFS-14771
> URL: https://issues.apache.org/jira/browse/HDFS-14771
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14771.branch-2.001.patch
>
>
> This JIRA aims to backport HDFS-14617 to branch-2: fsimage load time by 
> writing sub-sections to the fsimage index.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2019-08-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-12904:
---
Attachment: HDFS-12904.005.patch

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch, 
> HDFS-12904.002.patch, HDFS-12904.003.patch, HDFS-12904.005.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2019-08-26 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916304#comment-16916304
 ] 

Li Cheng commented on HDDS-1933:


 

[~msingh] [~elek]

As I look around, this seems relevant in Kubernetes: 
[https://github.com/kubernetes/dns/issues/266].

Can you try setting {{publishNotReadyAddresses=true}} in Kubernetes to see if 
it resolves the issue? This seems more likely to be a Kubernetes-side problem.

Setting DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT = true is another option. Shall we 
consider using the hostname as the default? (The current default is the IP 
address.) [~xyao] [~Sammi] The datanode performs a Java DNS lookup during 
startup when the hostname is not explicitly set in the config, and the IP 
address is retrieved from the DNS hostname via InetAddress.getHostName. So the 
hostname and IP address are essentially the same, with the hostname being more 
robust (an IP-like hostname, not something like 'localhost'). I therefore think 
making the hostname the default is a viable option.
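
As a hedged illustration of the second option above, using only the generic Hadoop Configuration API: the key name "dfs.datanode.use.datanode.hostname" is the long-standing HDFS key behind DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT; whether the Ozone datanode reads exactly this key is an assumption here.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class UseHostnameSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Prefer the DNS hostname over the (possibly changing) pod IP address.
    conf.setBoolean("dfs.datanode.use.datanode.hostname", true);
    System.out.println(
        conf.getBoolean("dfs.datanode.use.datanode.hostname", false));
  }
}
{code}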

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone on Kubernetes based 
> environment.
> When the datanode ip address change on restart, the Datanode details cease to 
> be correct for the datanode. and this prevents the cluster from functioning 
> after a restart.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916298#comment-16916298
 ] 

Hadoop QA commented on HDFS-13541:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
33s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
37s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 29s{color} | {color:orange} root: The patch generated 12 new + 1656 
unchanged - 9 fixed = 1668 total (was 1665) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| 

[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Status: Patch Available  (was: Open)

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> The logger was changed from o.a.commons.logging.Log to an slf4j logger in 
> branch-3.2 (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.
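
A minimal illustration of the incompatibility described above (a hedged sketch, not the actual TestEditLog code): commons-logging's two-argument Log.info takes (Object, Throwable), while SLF4J uses parameterized messages, so a call written for one API may not compile against the other.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggerSignatureSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggerSignatureSketch.class);

  public static void main(String[] args) {
    int txns = 42;
    // SLF4J style: the second argument fills the {} placeholder.
    LOG.info("loaded {} transactions", txns);
    // With org.apache.commons.logging.Log, the two-argument form is
    // info(Object, Throwable), so passing a String as the second argument fails
    // to compile -- the "String cannot be converted to Throwable" error quoted above.
  }
}
{code}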



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14710) RBF: Improve some RPC performances

2019-08-26 Thread xuzq (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916275#comment-16916275
 ] 

xuzq commented on HDFS-14710:
-

Thanks [~elgoiri] for the comment; it's very helpful. Please review 
[^HDFS-14710-trunk-005.patch].

> RBF: Improve some RPC performances
> --
>
> Key: HDFS-14710
> URL: https://issues.apache.org/jira/browse/HDFS-14710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Attachments: HDFS-14710-trunk-001.patch, HDFS-14710-trunk-002.patch, 
> HDFS-14710-trunk-003.patch, HDFS-14710-trunk-004.patch, 
> HDFS-14710-trunk-005.patch
>
>
> We can improve the performance of some RPCs, such as addBlock, 
> getAdditionalDatanode, and complete, when the extendedBlock is not null.
> Since HDFS encourages users to write large files, the extendedBlock is not 
> null in most cases.
> In the scenario of multiple destinations and large files, the effect is even 
> more pronounced.
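
A hedged, self-contained sketch of the idea (a hypothetical helper and mapping, not the actual RBF Router code): when the caller passes a non-null previous ExtendedBlock, its block pool id already identifies the owning namespace, so the Router could send the RPC to that single subcluster instead of fanning it out to every destination of the mount point.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

class RouterRpcTargetSketch {
  /** Assumed map from block pool id to nameservice id, maintained elsewhere. */
  private final Map<String, String> blockPoolToNameservice;

  RouterRpcTargetSketch(Map<String, String> blockPoolToNameservice) {
    this.blockPoolToNameservice = blockPoolToNameservice;
  }

  /** Pick the single owning nameservice when the block gives a hint, else all of them. */
  List<String> resolveTargets(ExtendedBlock previous, List<String> allNameservices) {
    if (previous != null) {
      String ns = blockPoolToNameservice.get(previous.getBlockPoolId());
      if (ns != null) {
        return Collections.singletonList(ns);  // one RPC instead of a fan-out
      }
    }
    return allNameservices;                    // no hint: keep the original behavior
  }
}
{code}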



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14710) RBF: Improve some RPC performances

2019-08-26 Thread xuzq (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14710:

Attachment: HDFS-14710-trunk-005.patch

> RBF: Improve some RPC performances
> --
>
> Key: HDFS-14710
> URL: https://issues.apache.org/jira/browse/HDFS-14710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Attachments: HDFS-14710-trunk-001.patch, HDFS-14710-trunk-002.patch, 
> HDFS-14710-trunk-003.patch, HDFS-14710-trunk-004.patch, 
> HDFS-14710-trunk-005.patch
>
>
> We can improve the performance of some RPCs, such as addBlock,
> getAdditionalDatanode and complete, when the extendedBlock is not null.
> Since HDFS encourages users to write large files, the extendedBlock is not 
> null in most cases.
> In the scenario of multiple destinations and large files, the effect is more 
> obvious.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14745) Backport HDFS persistent memory read cache support to branch-3.1

2019-08-26 Thread Feilong He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916268#comment-16916268
 ] 

Feilong He commented on HDFS-14745:
---

Thanks [~tangzhankun], it's OK to change the target to 3.1.4. 

> Backport HDFS persistent memory read cache support to branch-3.1
> 
>
> Key: HDFS-14745
> URL: https://issues.apache.org/jira/browse/HDFS-14745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: cache, datanode
> Fix For: 3.3.0
>
> Attachments: HDFS-14745-branch-3.1-000.patch
>
>
> We are proposing to backport the patches for HDFS-13762, HDFS persistent 
> memory read cache support, to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-26 Thread Yuxuan Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxuan Wang updated HDFS-14772:
---
Attachment: HDFS-14772.004.patch

> RBF: hdfs-rbf-site.xml can't be loaded automatically
> 
>
> Key: HDFS-14772
> URL: https://issues.apache.org/jira/browse/HDFS-14772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14772.001.patch, HDFS-14772.002.patch, 
> HDFS-14772.003.patch, HDFS-14772.004.patch
>
>
> ISSUE:
> hdfs-rbf-site.xml can't be loaded automatically
> WHY:
> Currently the code is 
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   static {
> Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
>   }
> {code}
> But it will never be executed unless we explicitly load the class.
> HOW TO FIX:
> Referring to the class *HdfsConfiguration*, add a method
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   public static void init() {
>   }
> {code}
> and call it from another class.
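
For readers unfamiliar with the HdfsConfiguration pattern referenced above, the sketch below shows the shape of the proposed fix. Configuration.addDefaultResource is the real Hadoop API; the class name and call site are simplified placeholders.

{code:java}
// Sketch of the fix described above: an empty init() forces class loading,
// which runs the static initializer that registers hdfs-rbf-site.xml.
import org.apache.hadoop.conf.Configuration;

public final class RbfConfigKeysSketch {
  public static final String HDFS_RBF_SITE_XML = "hdfs-rbf-site.xml";

  static {
    // Runs only when the class is loaded, hence the explicit init() hook.
    Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
  }

  /** No-op whose only purpose is to trigger the static initializer. */
  public static void init() {
  }

  private RbfConfigKeysSketch() {
  }
}

// Call site, e.g. early in the Router startup path:
//   RbfConfigKeysSketch.init();
//   Configuration conf = new Configuration();  // now sees hdfs-rbf-site.xml
{code}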



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-26 Thread Yuxuan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916267#comment-16916267
 ] 

Yuxuan Wang commented on HDFS-14772:


Fix checkstyle, pending Jenkins.

> RBF: hdfs-rbf-site.xml can't be loaded automatically
> 
>
> Key: HDFS-14772
> URL: https://issues.apache.org/jira/browse/HDFS-14772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14772.001.patch, HDFS-14772.002.patch, 
> HDFS-14772.003.patch, HDFS-14772.004.patch
>
>
> ISSUE:
> hdfs-rbf-site.xml can't be loaded automatically
> WHY:
> Currently the code is 
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   static {
> Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
>   }
> {code}
> But it will never be executed unless we explicitly load the class.
> HOW TO FIX:
> Referring to the class *HdfsConfiguration*, add a method
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   public static void init() {
>   }
> {code}
> and call it from another class.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-26 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916256#comment-16916256
 ] 

Konstantin Shvachko commented on HDFS-13541:


Comments for branch-2 v02 patch:
# {{SaslDataTransferServer}} has a bunch of new unused imports. Not sure where 
they come from.
# Long lines in {{BlockManager}}
# In {{SaslDataTransferClient.checkTrustAndSend()}}
{code}LOG.info("SASL encryption trust check: localHostTrusted = {}, "{code}
is info level, while on trunk it is debug. Should it be debug for branch-2 as 
well?
# Do we need any changes in {{PBHelperClient}}?
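
On point 3, the sketch below shows the debug-level, parameterized form of the trust-check message being suggested for branch-2. The class and variable names are placeholders; only the quoted message text comes from the review comment, and if branch-2 uses commons-logging for this class the equivalent would be an isDebugEnabled()-guarded call.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: class and variable names are placeholders.
class TrustCheckLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(TrustCheckLoggingSketch.class);

  void logTrustCheck(boolean localTrusted, boolean remoteTrusted) {
    // Trunk-style: parameterized and at debug level, so it costs almost
    // nothing when debug logging is disabled.
    LOG.debug("SASL encryption trust check: localHostTrusted = {}, "
        + "remoteHostTrusted = {}", localTrusted, remoteTrusted);
  }
}
{code}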

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-2.001.patch, 
> HDFS-13541-branch-2.002.patch, HDFS-13541-branch-3.1.001.patch, 
> HDFS-13541-branch-3.1.002.patch, HDFS-13541-branch-3.2.001.patch, 
> HDFS-13541-branch-3.2.002.patch, NameNode Port based selective 
> encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the location of the client relative to the 
> cluster. Specifically, for clients from outside of the data center, it is 
> required by regulation that all traffic must be encrypted. But for clients 
> within the same data center, unencrypted connections are preferred to avoid 
> the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, on top of which 
> HADOOP-10335 introduced WhitelistBasedResolver, which solves the same 
> problem. However, we found it difficult to fit into our environment for 
> several reasons. In this JIRA, on top of the pluggable SASL resolver, *we 
> propose a different approach of running RPC on two ports on the NameNode, 
> where the two ports enforce encrypted and unencrypted connections 
> respectively, and the subsequent DataNode access simply follows the same 
> encryption/unencryption behaviour*. Then by blocking the unencrypted port on 
> the datacenter firewall, we can completely block unencrypted external access.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916254#comment-16916254
 ] 

Arpit Agarwal commented on HDFS-2470:
-

Thanks [~eyang]. Is the NameNode up?

 

{{StorageDirectory}} is annotated as private, and I also could not find any 
reference to it in HBase or ZooKeeper source code. Could the failure be 
unrelated?

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.
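
As a rough illustration of what setting those permissions could look like, the sketch below applies a configured permission to a local storage directory. The key name dfs.namenode.storage.dir.perm and the default of 700 are assumptions modeled on the existing dfs.datanode.data.dir.perm behaviour; the real patch wires this into the NameNode storage classes rather than a standalone helper like this.

{code:java}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch only: "dfs.namenode.storage.dir.perm" is an assumed key name used
// for illustration, mirroring dfs.datanode.data.dir.perm.
class NameDirPermissionSketch {
  static void enforcePermission(Configuration conf, File storageDir)
      throws IOException {
    FsPermission perm = new FsPermission(
        conf.get("dfs.namenode.storage.dir.perm", "700"));
    FileSystem localFs = FileSystem.getLocal(conf);
    // Apply the configured permission to the name/edits directory.
    localFs.setPermission(new Path(storageDir.getAbsolutePath()), perm);
  }
}
{code}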



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916247#comment-16916247
 ] 

Hudson commented on HDFS-2470:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17188 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17188/])
HDFS-2470. NN should automatically set permissions on (arp: rev 
07e3cf952eac9e47e7bd5e195b0f9fc28c468313)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916241#comment-16916241
 ] 

Chen Liang commented on HDFS-14779:
---

Thanks for working on this [~jhung]. v001 patch LGTM, +1.

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916235#comment-16916235
 ] 

Jonathan Hung commented on HDFS-14779:
--

Attached the 001 patch, which passes the exception to the log message instead 
of the exception's message. This works for both slf4j (branch-3.2 and up) and 
o.a.commons.logging (branch-3.1 and lower).
[~vagarychen] can you take a look? Thanks!
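
A hedged before/after sketch of the change described above; the class, method, and message text are simplified stand-ins for the TestEditLog code, and only the error(Object, Throwable) signature constraint is the real commons-logging API.

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch only; not the actual TestEditLog code, just the shape
// of the fix described above.
class EditLogLoggingSketch {
  private static final Log LOG = LogFactory.getLog(EditLogLoggingSketch.class);

  static void logLoadFailure(Throwable t) {
    // Does not compile on commons-logging: the second argument of
    // error(Object, Throwable) must be a Throwable, not a String.
    //   LOG.error("edit log failed to load", t.getMessage());

    // Compiles and behaves the same on both commons-logging (branch-3.1)
    // and slf4j (branch-3.2+): pass the exception itself.
    LOG.error("edit log failed to load", t);
  }
}
{code}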

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reassigned HDFS-14779:


Assignee: Jonathan Hung

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Attachment: HDFS-14779.001.patch

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Priority: Major
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Summary: Fix logging error in 
TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns  (was: Fix logging error in 
TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1)

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Priority: Major
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1909:
-
Status: Patch Available  (was: Open)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use the new HA code of OM in the Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=301596=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301596
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 26/Aug/19 23:48
Start Date: 26/Aug/19 23:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525076456
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301596)
Time Spent: 7h 50m  (was: 7h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use the new HA code of OM in the Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=301595=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301595
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 26/Aug/19 23:48
Start Date: 26/Aug/19 23:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-524522115
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 21 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 683 | trunk passed |
   | +1 | compile | 375 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 833 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 623 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 529 | the patch passed |
   | +1 | compile | 356 | the patch passed |
   | +1 | javac | 356 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 630 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 632 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-hdds in the patch passed. |
   | -1 | unit | 333 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6055 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.recon.recovery.TestReconOmMetadataManagerImpl |
   |   | hadoop.ozone.recon.spi.impl.TestOzoneManagerServiceProviderImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 617da2d7d6af 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d2225c8 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/testReport/ |
   | Max. process+thread count | 1338 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301595)
Time Spent: 7h 40m  (was: 7.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: 

[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Description: 
{noformat}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure
[ERROR] 
/Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
 incompatible types: java.lang.String cannot be converted to java.lang.Throwable
[ERROR] {noformat}

Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 (ref: 
HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.

  was:
{noformat}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure
[ERROR] 
/Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
 incompatible types: java.lang.String cannot be converted to java.lang.Throwable
[ERROR] {noformat}

Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2, so 
HDFS-14674 did not apply cleanly to branch-3.1.


> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in 
> branch-3.1
> --
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Priority: Major
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2041) Don't depend on DFSUtil to check HTTP policy

2019-08-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2041:


 Summary: Don't depend on DFSUtil to check HTTP policy
 Key: HDDS-2041
 URL: https://issues.apache.org/jira/browse/HDDS-2041
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: website
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, BaseHttpServer uses DFSUtil to get the HTTP policy. As a result, 
when the HTTP policy is set to HTTPS in hdfs-site.xml, Ozone HTTP servers try 
to come up with HTTPS and fail if SSL certificates are not present in the 
required location.

Ozone web UIs should not depend on the HDFS config to determine the HTTP 
policy. Instead, they should have their own config to determine the policy. 
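
A minimal sketch of the direction described above; the key name ozone.http.policy is an assumption for illustration, while Configuration.getEnum and HttpConfig.Policy are real Hadoop APIs.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpConfig;

// Sketch of an Ozone-specific policy lookup. "ozone.http.policy" is an
// assumed key name; defaulting to HTTP_ONLY rather than to whatever
// hdfs-site.xml says keeps HDFS settings from leaking in.
final class OzoneHttpPolicySketch {
  static final String OZONE_HTTP_POLICY_KEY = "ozone.http.policy";

  static HttpConfig.Policy getHttpPolicy(Configuration conf) {
    return conf.getEnum(OZONE_HTTP_POLICY_KEY, HttpConfig.Policy.HTTP_ONLY);
  }

  private OzoneHttpPolicySketch() {
  }
}
{code}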



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=301580=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301580
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 26/Aug/19 23:05
Start Date: 26/Aug/19 23:05
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525066689
 
 
   Created https://issues.apache.org/jira/browse/HDDS-2040 to track the 
integration test failure.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301580)
Time Spent: 2h 40m  (was: 2.5h)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.
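
A hedged sketch of the kind of separation this implies; both config keys below are invented placeholders, and the real fix may resolve the location differently.

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;

// Sketch only: "hdds.security.dir" is an invented placeholder key used to
// illustrate keeping OM and SCM keys/certs in distinct directories instead
// of writing them straight under the shared ozone.metadata.dir.
final class SecurityDirSketch {
  static Path getKeyCertDir(Configuration conf, String component) {
    String base = conf.get("hdds.security.dir",          // assumed key
        conf.get("ozone.metadata.dir", "/tmp/ozone"));   // shared fallback
    // Appending the component name ("om", "scm", ...) avoids the collision
    // described above when both services run on one host.
    return Paths.get(base, component, "keys");
  }

  private SecurityDirSketch() {
  }
}
{code}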



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916216#comment-16916216
 ] 

Eric Yang commented on HDFS-2470:
-

[~swagle] Thank you for patch 09; unfortunately, this patch breaks HBase for 
some reason.  HBase does not show the exact error but fails to start the HBase 
Region server.  It appears that there is an exception thrown, but the error 
manifested in HBase as a ZooKeeper ACL exception:

{code}
2019-08-26 14:45:42,597 WARN  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 client.ZooKeeperSaslClient: Could not login: the client is being asked for a 
password, but the Zookeeper client code does not currently support obtaining a 
password from the user. Make sure that the client is configured to use a ticket 
cache (using the JAAS configuration setting 'useTicketCache=true)' and restart 
the client. If you still get this message after that, the TGT in the ticket 
cache has expired and must be manually refreshed. To do so, first determine if 
you are using a password or a keytab. If the former, run kinit in a Unix shell 
in the environment of the user who is running this Zookeeper client using the 
command 'kinit ' (where  is the name of the client's Kerberos 
principal). If the latter, do 'kinit -k -t  ' (where  is 
the name of the Kerberos principal, and  is the location of the keytab 
file). After manually refreshing your cache, restart this client. If you 
continue to see this message after manually refreshing your cache, ensure that 
your KDC host's clock is in sync with this host's clock.
2019-08-26 14:45:42,598 WARN  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: SASL configuration failed: 
javax.security.auth.login.LoginException: No password provided Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it.
2019-08-26 14:45:42,598 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Opening socket connection to server 
eyang-4.vpc.cloudera.com/10.65.53.170:2181
2019-08-26 14:45:42,598 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Socket connection established to 
eyang-4.vpc.cloudera.com/10.65.53.170:2181, initiating session
2019-08-26 14:45:42,601 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Session establishment complete on server 
eyang-4.vpc.cloudera.com/10.65.53.170:2181, sessionid = 0x200010a127c0070, 
negotiated timeout = 6
2019-08-26 14:45:45,659 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] ipc.RpcServer: 
Stopping server on 16020
2019-08-26 14:45:45,659 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] 
token.AuthenticationTokenSecretManager: Stopping leader election, because: 
SecretManager stopping
2019-08-26 14:45:45,660 INFO  [RpcServer.listener,port=16020] ipc.RpcServer: 
RpcServer.listener,port=16020: stopping
2019-08-26 14:45:45,660 INFO  [RpcServer.responder] ipc.RpcServer: 
RpcServer.responder: stopped
2019-08-26 14:45:45,660 INFO  [RpcServer.responder] ipc.RpcServer: 
RpcServer.responder: stopping
2019-08-26 14:45:45,660 FATAL 
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] 
regionserver.HRegionServer: ABORTING region server 
eyang-3.vpc.cloudera.com,16020,1566855941147: Initialization of RS failed.  
Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:819)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:772)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:744)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:889)
at java.lang.Thread.run(Thread.java:748)
{code}

When the patch is removed, HBase was able to start successfully.  I dug pretty 
deep into the HBase source code, but StorageDirectory is not used in the code 
base.  I validated that the DataNode directory default permission is not 
changed by patch 09.  More study is required to understand the root cause of 
the incompatibility.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: 

[jira] [Comment Edited] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916216#comment-16916216
 ] 

Eric Yang edited comment on HDFS-2470 at 8/26/19 11:15 PM:
---

[~swagle] Thank you for patch 09; unfortunately, this patch breaks HBase for 
some reason.  HBase does not show the exact error but fails to start the HBase 
Region server.  It appears that there is an exception thrown, but the error 
manifested in HBase as a ZooKeeper ACL exception:

{code}
2019-08-26 14:45:42,597 WARN  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 client.ZooKeeperSaslClient: Could not login: the client is being asked for a 
password, but the Zookeeper client code does not currently support obtaining a 
password from the user. Make sure that the client is configured to use a ticket 
cache (using the JAAS configuration setting 'useTicketCache=true)' and restart 
the client. If you still get this message after that, the TGT in the ticket 
cache has expired and must be manually refreshed. To do so, first determine if 
you are using a password or a keytab. If the former, run kinit in a Unix shell 
in the environment of the user who is running this Zookeeper client using the 
command 'kinit ' (where  is the name of the client's Kerberos 
principal). If the latter, do 'kinit -k -t  ' (where  is 
the name of the Kerberos principal, and  is the location of the keytab 
file). After manually refreshing your cache, restart this client. If you 
continue to see this message after manually refreshing your cache, ensure that 
your KDC host's clock is in sync with this host's clock.
2019-08-26 14:45:42,598 WARN  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: SASL configuration failed: 
javax.security.auth.login.LoginException: No password provided Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it.
2019-08-26 14:45:42,598 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Opening socket connection to server 
eyang-4.vpc.cloudera.com/10.65.53.170:2181
2019-08-26 14:45:42,598 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Socket connection established to 
eyang-4.vpc.cloudera.com/10.65.53.170:2181, initiating session
2019-08-26 14:45:42,601 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020-SendThread(eyang-4.vpc.cloudera.com:2181)]
 zookeeper.ClientCnxn: Session establishment complete on server 
eyang-4.vpc.cloudera.com/10.65.53.170:2181, sessionid = 0x200010a127c0070, 
negotiated timeout = 6
2019-08-26 14:45:45,659 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] ipc.RpcServer: 
Stopping server on 16020
2019-08-26 14:45:45,659 INFO  
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] 
token.AuthenticationTokenSecretManager: Stopping leader election, because: 
SecretManager stopping
2019-08-26 14:45:45,660 INFO  [RpcServer.listener,port=16020] ipc.RpcServer: 
RpcServer.listener,port=16020: stopping
2019-08-26 14:45:45,660 INFO  [RpcServer.responder] ipc.RpcServer: 
RpcServer.responder: stopped
2019-08-26 14:45:45,660 INFO  [RpcServer.responder] ipc.RpcServer: 
RpcServer.responder: stopping
2019-08-26 14:45:45,660 FATAL 
[regionserver/eyang-3.vpc.cloudera.com/10.65.52.68:16020] 
regionserver.HRegionServer: ABORTING region server 
eyang-3.vpc.cloudera.com,16020,1566855941147: Initialization of RS failed.  
Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:819)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:772)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:744)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:889)
at java.lang.Thread.run(Thread.java:748)
{code}

When the patch is removed, HBase was not able to start successfully.  I dug 
pretty deep into the HBase source code, but StorageDirectory is not used in the 
code base.  I validated that the DataNode directory default permission is not 
changed by patch 09.  More study is required to understand the root cause of 
the incompatibility.


was (Author: eyang):
[~swagle] Thank you for patch 09; unfortunately, this patch breaks HBase for 
some reason.  HBase does not show the exact error but fails to start the HBase 
Region server.  It appears that there is an exception thrown, but the error 
manifested in HBase as a ZooKeeper ACL exception:

{code}
2019-08-26 14:45:42,597 WARN  

[jira] [Created] (HDDS-2040) Fix TestSecureContainerServer.testClientServerRatisGrpc integration test failure

2019-08-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2040:


 Summary: Fix TestSecureContainerServer.testClientServerRatisGrpc 
integration test failure
 Key: HDDS-2040
 URL: https://issues.apache.org/jira/browse/HDDS-2040
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Security
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


The integration test TestSecureContainerServer.testClientServerRatisGrpc fails 
with the following error in trunk:


{code:java}
Caused by: org.apache.ratis.protocol.StateMachineException: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Block token verification failed. Fail to find any token (empty or null.)
{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Parent: HDFS-12943
Issue Type: Sub-task  (was: Bug)

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in 
> branch-3.1
> --
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Priority: Major
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2, so 
> HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916201#comment-16916201
 ] 

Jonathan Hung commented on HDFS-14674:
--

I'm seeing this issue as well. Filed HDFS-14779 for this.

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> HDFS-14674-007.patch, HDFS-14674-008.patch, HDFS-14674-009.patch, 
> HDFS-14674-010.patch, HDFS-14674-011.patch, 
> image-2019-08-22-16-24-06-518.png, image.png
>
>
> Add the following configuration
> !image-2019-08-22-16-24-06-518.png|width=451,height=80!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If the dfs.ha.tail-edits.max-txns-per-lock value is 500, the NameNode loads 
> the edit log up to 500 transactions and then loads the next edit log, but the 
> edit log contains more than 500 transactions, so the NameNode gets an 
> unexpected txid when tailing the edit log.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : 

[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2470:

Fix Version/s: 3.2.1
   3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed this. Thanks for the fix [~swagle]  and thanks [~eyang]  for 
reviewing.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Description: 
{noformat}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure
[ERROR] 
/Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
 incompatible types: java.lang.String cannot be converted to java.lang.Throwable
[ERROR] {noformat}

Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2, so 
HDFS-14674 did not apply cleanly to branch-3.1.

  was:
{noformat}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure
[ERROR] 
/Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
 incompatible types: java.lang.String cannot be converted to java.lang.Throwable
[ERROR] {noformat}


> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in 
> branch-3.1
> --
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Priority: Major
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2, so 
> HDFS-14674 did not apply cleanly to branch-3.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1

2019-08-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14779:
-
Summary: Fix logging error in 
TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in branch-3.1  (was: Fix 
logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns)

> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns in 
> branch-3.1
> --
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Priority: Major
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-26 Thread Jonathan Hung (Jira)
Jonathan Hung created HDFS-14779:


 Summary: Fix logging error in 
TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
 Key: HDFS-14779
 URL: https://issues.apache.org/jira/browse/HDFS-14779
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jonathan Hung


{noformat}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure
[ERROR] 
/Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
 incompatible types: java.lang.String cannot be converted to java.lang.Throwable
[ERROR] {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-26 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916193#comment-16916193
 ] 

Chen Liang commented on HDFS-13541:
---

The branch-2 v001 patch has a bad backport causing an incompatible 
change... posted the v002 patch.

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-2.001.patch, 
> HDFS-13541-branch-2.002.patch, HDFS-13541-branch-3.1.001.patch, 
> HDFS-13541-branch-3.1.002.patch, HDFS-13541-branch-3.2.001.patch, 
> HDFS-13541-branch-3.2.002.patch, NameNode Port based selective 
> encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the location of the client relative to the 
> cluster. Specifically, for clients from outside of the data center, it is 
> required by regulation that all traffic must be encrypted. But for clients 
> within the same data center, unencrypted connections are preferred to avoid 
> the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, on top of which 
> HADOOP-10335 introduced WhitelistBasedResolver, which solves the same 
> problem. However, we found it difficult to fit into our environment for 
> several reasons. In this JIRA, on top of the pluggable SASL resolver, *we 
> propose a different approach of running RPC on two ports on the NameNode, 
> where the two ports enforce encrypted and unencrypted connections 
> respectively, and the subsequent DataNode access simply follows the same 
> encryption/unencryption behaviour*. Then by blocking the unencrypted port on 
> the datacenter firewall, we can completely block unencrypted external access.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13541) NameNode Port based selective encryption

2019-08-26 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13541:
--
Attachment: HDFS-13541-branch-2.002.patch

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-2.001.patch, 
> HDFS-13541-branch-2.002.patch, HDFS-13541-branch-3.1.001.patch, 
> HDFS-13541-branch-3.1.002.patch, HDFS-13541-branch-3.2.001.patch, 
> HDFS-13541-branch-3.2.002.patch, NameNode Port based selective 
> encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the location of the client relative to the 
> cluster. Specifically, for clients from outside of the data center, regulation 
> requires that all traffic be encrypted. But for clients within the same data 
> center, unencrypted connections are preferred to avoid the high encryption 
> overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, on top of which HADOOP-10335 
> introduced WhitelistBasedResolver, which solves the same problem. However, we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach: 
> running RPC on two ports on the NameNode, where the two ports enforce 
> encrypted and unencrypted connections respectively, and subsequent DataNode 
> access simply follows the same encrypted/unencrypted behaviour*. Then, by 
> blocking the unencrypted port on the data center firewall, we can completely 
> block unencrypted external access.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=301551=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301551
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 26/Aug/19 22:06
Start Date: 26/Aug/19 22:06
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525051986
 
 
   Hi @xiaoyuyao 
   
   I checked the cause of failure for this integration test 
`TestSecureContainerServer.testClientServerRatisGrpc` and it is not related to 
this change. It fails due to block token verification failure and it fails for 
the same reason in trunk.
   
   ```Caused by: org.apache.ratis.protocol.StateMachineException: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Block token verification failed. Fail to find any token (empty or null.)```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301551)
Time Spent: 2.5h  (was: 2h 20m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.
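
As a standalone sketch of the collision-avoidance idea described above (not the actual Ozone CertificateClient code; the directory layout shown is hypothetical), the key is deriving a per-service security directory under the shared metadata dir so OM and SCM keys/certs never land in the same place:

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

public class PerServiceSecurityDir {
  // e.g. /data/ozone-metadata/om/keys vs /data/ozone-metadata/scm/keys
  static Path securityDirFor(String sharedMetadataDir, String serviceName) {
    return Paths.get(sharedMetadataDir, serviceName, "keys");
  }

  public static void main(String[] args) {
    System.out.println(securityDirFor("/data/ozone-metadata", "om"));
    System.out.println(securityDirFor("/data/ozone-metadata", "scm"));
  }
}
{code}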



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916166#comment-16916166
 ] 

Hadoop QA commented on HDFS-14497:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14497 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978526/HDFS-14497-addendum.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0bc2f7d0a3bc 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d1aa859 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27683/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27683/testReport/ |
| Max. process+thread count | 2749 (vs. ulimit of 5500) |
| 

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=301541=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301541
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 26/Aug/19 21:48
Start Date: 26/Aug/19 21:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1353: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1353#issuecomment-525046724
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ ozone-0.4.1 Compile Tests _ |
   | 0 | mvndep | 96 | Maven dependency ordering for branch |
   | +1 | mvninstall | 843 | ozone-0.4.1 passed |
   | +1 | compile | 363 | ozone-0.4.1 passed |
   | +1 | checkstyle | 78 | ozone-0.4.1 passed |
   | +1 | mvnsite | 0 | ozone-0.4.1 passed |
   | +1 | shadedclient | 1055 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | ozone-0.4.1 passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 637 | ozone-0.4.1 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 652 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 341 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2300 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8744 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1353 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 770902329df7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4.1 / ab7605b |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/testReport/ |
   | Max. process+thread count | 4619 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301541)
Time Spent: 9h 50m  (was: 9h 40m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed 

[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=301535=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301535
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 26/Aug/19 21:41
Start Date: 26/Aug/19 21:41
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525044434
 
 
   The test failure in TestSecureContainerServer seems related. Can you take a 
look, @vivekratnavel?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301535)
Time Spent: 2h 20m  (was: 2h 10m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2019-08-26 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916156#comment-16916156
 ] 

Chen Liang commented on HDFS-13977:
---

Thanks for checking [~xkrogen]. I've committed the branch-2 patch.

> NameNode can kill itself if it tries to send too many txns to a QJM 
> simultaneously
> --
>
> Key: HDFS-13977
> URL: https://issues.apache.org/jira/browse/HDFS-13977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.7
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-13977-branch-2.003.patch, HDFS-13977.000.patch, 
> HDFS-13977.001.patch, HDFS-13977.002.patch, HDFS-13977.003.patch
>
>
> h3. Problem & Logs
> We recently encountered an issue on a large cluster (running 2.7.4) in which 
> the NameNode killed itself because it was unable to communicate with the JNs 
> via QJM. We discovered that it was the result of the NameNode trying to send 
> a huge batch of over 1 million transactions to the JNs in a single RPC:
> {code:title=NameNode Logs}
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote 
> journal X.X.X.X: failed to
>  write txns 1000-11153636. Will try to write to this JN again after the 
> next log roll.
> ...
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1098ms 
> to send a batch of 1153637 edits (335886611 bytes) to remote journal 
> X.X.X.X:
> {code}
> {code:title=JournalNode Logs}
> INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8485: 
> readAndProcess from client X.X.X.X threw exception [java.io.IOException: 
> Requested data length 335886776 is longer than maximum configured RPC length 
> 67108864.  RPC came from X.X.X.X]
> java.io.IOException: Requested data length 335886776 is longer than maximum 
> configured RPC length 67108864.  RPC came from X.X.X.X
> at 
> org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
> at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
> at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:897)
> at 
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:753)
> at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
> {code}
> The JournalNodes rejected the RPC because it had a size well over the 64MB 
> default {{ipc.maximum.data.length}}.
> This was triggered by a huge number of files all hitting a hard lease timeout 
> simultaneously, causing the NN to force-close them all at once. This can be a 
> particularly nasty bug as the NN will attempt to re-send this same huge RPC 
> on restart, as it loads an fsimage which still has all of these open files 
> that need to be force-closed.
> h3. Proposed Solution
> To solve this we propose to modify {{EditsDoubleBuffer}} to add a "hard 
> limit" based on the value of {{ipc.maximum.data.length}}. When {{writeOp()}} 
> or {{writeRaw()}} is called, first check the size of {{bufCurrent}}. If it 
> exceeds the hard limit, block the writer until the buffer is flipped and 
> {{bufCurrent}} becomes {{bufReady}}. This gives some self-throttling to 
> prevent the NameNode from killing itself in this way.
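
A standalone sketch of that self-throttling idea follows (simplified, not the actual EditsDoubleBuffer code; the buffer names and byte-array API are placeholders, and in the real patch the hard limit would be derived from ipc.maximum.data.length rather than passed in directly). The writer blocks while the current buffer is over the hard limit and resumes after a flip:

{code}
import java.io.ByteArrayOutputStream;

public class ThrottlingDoubleBuffer {
  private final int hardLimitBytes;
  private ByteArrayOutputStream bufCurrent = new ByteArrayOutputStream();
  private ByteArrayOutputStream bufReady = new ByteArrayOutputStream();

  public ThrottlingDoubleBuffer(int hardLimitBytes) {
    this.hardLimitBytes = hardLimitBytes;
  }

  // Analogous to writeOp()/writeRaw(): block while bufCurrent exceeds the hard
  // limit so a single flush can never grow past what one RPC may carry.
  public synchronized void write(byte[] op) throws InterruptedException {
    while (bufCurrent.size() >= hardLimitBytes) {
      wait();
    }
    bufCurrent.write(op, 0, op.length);
  }

  // Analogous to the flush path flipping buffers: swap bufCurrent and bufReady,
  // drain the ready side, and wake any writer blocked on the hard limit.
  public synchronized byte[] flipAndDrain() {
    ByteArrayOutputStream full = bufCurrent;
    bufCurrent = bufReady;
    bufReady = full;
    byte[] readyBytes = bufReady.toByteArray();
    bufReady.reset();
    notifyAll();
    return readyBytes;
  }
}
{code}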



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2019-08-26 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13977:
--
Fix Version/s: 3.1.4
   2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> NameNode can kill itself if it tries to send too many txns to a QJM 
> simultaneously
> --
>
> Key: HDFS-13977
> URL: https://issues.apache.org/jira/browse/HDFS-13977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.7
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-13977-branch-2.003.patch, HDFS-13977.000.patch, 
> HDFS-13977.001.patch, HDFS-13977.002.patch, HDFS-13977.003.patch
>
>
> h3. Problem & Logs
> We recently encountered an issue on a large cluster (running 2.7.4) in which 
> the NameNode killed itself because it was unable to communicate with the JNs 
> via QJM. We discovered that it was the result of the NameNode trying to send 
> a huge batch of over 1 million transactions to the JNs in a single RPC:
> {code:title=NameNode Logs}
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote 
> journal X.X.X.X: failed to
>  write txns 1000-11153636. Will try to write to this JN again after the 
> next log roll.
> ...
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1098ms 
> to send a batch of 1153637 edits (335886611 bytes) to remote journal 
> X.X.X.X:
> {code}
> {code:title=JournalNode Logs}
> INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8485: 
> readAndProcess from client X.X.X.X threw exception [java.io.IOException: 
> Requested data length 335886776 is longer than maximum configured RPC length 
> 67108864.  RPC came from X.X.X.X]
> java.io.IOException: Requested data length 335886776 is longer than maximum 
> configured RPC length 67108864.  RPC came from X.X.X.X
> at 
> org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
> at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
> at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:897)
> at 
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:753)
> at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
> {code}
> The JournalNodes rejected the RPC because it had a size well over the 64MB 
> default {{ipc.maximum.data.length}}.
> This was triggered by a huge number of files all hitting a hard lease timeout 
> simultaneously, causing the NN to force-close them all at once. This can be a 
> particularly nasty bug as the NN will attempt to re-send this same huge RPC 
> on restart, as it loads an fsimage which still has all of these open files 
> that need to be force-closed.
> h3. Proposed Solution
> To solve this we propose to modify {{EditsDoubleBuffer}} to add a "hard 
> limit" based on the value of {{ipc.maximum.data.length}}. When {{writeOp()}} 
> or {{writeRaw()}} is called, first check the size of {{bufCurrent}}. If it 
> exceeds the hard limit, block the writer until the buffer is flipped and 
> {{bufCurrent}} becomes {{bufReady}}. This gives some self-throttling to 
> prevent the NameNode from killing itself in this way.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complet

2019-08-26 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916155#comment-16916155
 ] 

Siddharth Wagle commented on HDDS-1868:
---

This needs some change in the Ratis contract as well, right? I don't see a clear 
way of asking a RaftPeer whether there is a leader in XceiverServerRatis.

> Ozone pipelines should be marked as ready only after the leader election is 
> complet
> ---
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.5.0
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into 
> the open state after all of the pipeline's datanodes have reported. However, 
> this can potentially lead to an issue where the pipeline is still not ready 
> to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.
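
A tiny standalone sketch of the proposed readiness condition (illustrative only; the real check would live in SCM/XceiverServerRatis and query Ratis for the elected leader): a pipeline is only treated as open once every member has reported and a leader exists.

{code}
public class PipelineReadinessSketch {
  // Today only the first condition is effectively checked; the proposal adds
  // the leader-election condition before the pipeline is marked open.
  static boolean isReadyForIo(int reportedMembers, int expectedMembers,
                              boolean leaderElected) {
    return reportedMembers >= expectedMembers && leaderElected;
  }

  public static void main(String[] args) {
    System.out.println(isReadyForIo(3, 3, false)); // false: all reported, no leader yet
    System.out.println(isReadyForIo(3, 3, true));  // true: safe to accept incoming IO
  }
}
{code}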



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2013) Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2013:

Resolution: Won't Do
Status: Resolved  (was: Patch Available)

> Add flag gdprEnabled for BucketInfo in OzoneManager proto
> -
>
> Key: HDDS-2013
> URL: https://issues.apache.org/jira/browse/HDDS-2013
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2039) Some ozone unit test takes too long to finish.

2019-08-26 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916141#comment-16916141
 ] 

Xiaoyu Yao commented on HDDS-2039:
--

Adding another one that takes 10+ minutes to finish.

{code}

[ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 625.286 
s <<< FAILURE! - in 
org.apache.hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures

{code}

> Some ozone unit test takes too long to finish.
> --
>
> Key: HDDS-2039
> URL: https://issues.apache.org/jira/browse/HDDS-2039
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Priority: Major
>
> Here are a few: {code}
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerHA
> [INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 436.08 s - in org.apache.hadoop.ozone.om.TestOzoneManagerHA
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManager
> [INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 259.566 s - in org.apache.hadoop.ozone.om.TestOzoneManager
> [INFO] Running org.apache.hadoop.ozone.om.TestScmSafeMode
> [INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 129.653 s - in org.apache.hadoop.ozone.om.TestScmSafeMode
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerRestart
> [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 843.129 s - in org.apache.hadoop.ozone.om.TestOzoneManagerRestart
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2039) Some ozone unit test takes too long to finish.

2019-08-26 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2039:
-
Target Version/s: 0.5.0

> Some ozone unit test takes too long to finish.
> --
>
> Key: HDDS-2039
> URL: https://issues.apache.org/jira/browse/HDDS-2039
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Priority: Major
>
> Here are a few: {code}
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerHA
> [INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 436.08 s - in org.apache.hadoop.ozone.om.TestOzoneManagerHA
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManager
> [INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 259.566 s - in org.apache.hadoop.ozone.om.TestOzoneManager
> [INFO] Running org.apache.hadoop.ozone.om.TestScmSafeMode
> [INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 129.653 s - in org.apache.hadoop.ozone.om.TestScmSafeMode
> [INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerRestart
> [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 843.129 s - in org.apache.hadoop.ozone.om.TestOzoneManagerRestart
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2039) Some ozone unit test takes too long to finish.

2019-08-26 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2039:


 Summary: Some ozone unit test takes too long to finish.
 Key: HDDS-2039
 URL: https://issues.apache.org/jira/browse/HDDS-2039
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Xiaoyu Yao


Here are a few: {code}

[INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerHA
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 436.08 
s - in org.apache.hadoop.ozone.om.TestOzoneManagerHA
[INFO] Running org.apache.hadoop.ozone.om.TestOzoneManager
[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 259.566 
s - in org.apache.hadoop.ozone.om.TestOzoneManager
[INFO] Running org.apache.hadoop.ozone.om.TestScmSafeMode
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.653 
s - in org.apache.hadoop.ozone.om.TestScmSafeMode
[INFO] Running org.apache.hadoop.ozone.om.TestOzoneManagerRestart
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 843.129 
s - in org.apache.hadoop.ozone.om.TestOzoneManagerRestart

{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-26 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916137#comment-16916137
 ] 

Xiaoyu Yao commented on HDDS-1927:
--

Just pushed a PR for the cherry-pick. Will merge once I get a clean Jenkins run 
result.

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> This Jira was created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, since we 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.
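
A minimal standalone sketch of the proposed utility shape (placeholder String entries stand in for the real OzoneAcl/OzoneAclInfo classes; the class and method names mirror the comment above but are illustrative):

{code}
import java.util.ArrayList;
import java.util.List;

public final class AclUtilSketch {
  private AclUtilSketch() { }

  // Returns true if the ACL was added, false if an identical entry already exists.
  public static boolean addAcl(List<String> existingAcls, String newAcl) {
    if (existingAcls.contains(newAcl)) {
      return false;
    }
    return existingAcls.add(newAcl);
  }

  // Returns true if a matching ACL was found and removed.
  public static boolean removeAcl(List<String> existingAcls, String acl) {
    return existingAcls.remove(acl);
  }

  public static void main(String[] args) {
    List<String> acls = new ArrayList<>();
    System.out.println(addAcl(acls, "user:hadoop:READ"));    // true
    System.out.println(addAcl(acls, "user:hadoop:READ"));    // false (duplicate)
    System.out.println(removeAcl(acls, "user:hadoop:READ")); // true
  }
}
{code}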



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2037) Fix hadoop version in pom.ozone.xml

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2037?focusedWorklogId=301495=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301495
 ]

ASF GitHub Bot logged work on HDDS-2037:


Author: ASF GitHub Bot
Created on: 26/Aug/19 20:51
Start Date: 26/Aug/19 20:51
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1351: HDDS-2037. Fix 
hadoop version in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-525027240
 
 
   @anuengineer Until now, all the testing has been done with hadoop 3.2.0, and the 
previous ozone release (0.4.0-alpha) also used hadoop 3.2.0 as a dependency.
   Is there any reason to go to 3.1.0 now?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301495)
Time Spent: 1h 10m  (was: 1h)

> Fix hadoop version in pom.ozone.xml
> ---
>
> Key: HDDS-2037
> URL: https://issues.apache.org/jira/browse/HDDS-2037
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop version in pom.ozone.xml is pointing to a SNAPSHOT version; this has 
> to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-738) Removing REST protocol support from OzoneClient

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-738?focusedWorklogId=301466=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301466
 ]

ASF GitHub Bot logged work on HDDS-738:
---

Author: ASF GitHub Bot
Created on: 26/Aug/19 20:36
Start Date: 26/Aug/19 20:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1329: HDDS-738. 
Removing REST protocol support from OzoneClient
URL: https://github.com/apache/hadoop/pull/1329#issuecomment-525021740
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 5 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 48 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 629 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 442 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 647 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 550 | the patch passed |
   | +1 | compile | 379 | the patch passed |
   | +1 | javac | 102 | hadoop-hdds in the patch passed. |
   | +1 | javac | 277 | hadoop-ozone generated 0 new + 5 unchanged - 3 fixed = 
5 total (was 8) |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 6 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 1 new + 26 unchanged - 1 fixed 
= 27 total (was 27) |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 341 | hadoop-hdds in the patch passed. |
   | -1 | unit | 57 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6315 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOmUtils |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1329 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs |
   | uname | Linux f9b0b83cb1bc 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d1aa859 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/testReport/ |
   | Max. process+thread count | 436 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/datanode hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/ozonefs 
hadoop-ozone/s3gateway hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301466)
Time Spent: 1h 20m  (was: 1h 10m)

> Removing REST protocol support from OzoneClient
> 

[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916119#comment-16916119
 ] 

Hadoop QA commented on HDFS-13541:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 59s{color} | {color:orange} root: The patch generated 12 new + 1655 
unchanged - 9 fixed = 1667 total (was 1664) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 32s{color} 
| {color:red} 

[jira] [Comment Edited] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-08-26 Thread Shixiong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916118#comment-16916118
 ] 

Shixiong Zhu edited comment on HDFS-14762 at 8/26/19 8:22 PM:
--

[~hemanthboyina] I'm not concerned about the behavior of this Path constructor. 
What I hope can be fixed are the following two places:

https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270

Right now they throw an exception when hitting a file name that contains ":".

For example, when I try to write a file called "2019-08-26 00:00:00", it will 
throw an exception when creating its checksum file.
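
To make that failure mode concrete, here is a minimal sketch of the constructor behaviour being described (the parent/child Path constructor rejecting a child name that contains ":"); the paths used are made up:

{code}
import org.apache.hadoop.fs.Path;

public class ColonInChildNameSketch {
  public static void main(String[] args) {
    Path parent = new Path("/tmp/data");
    try {
      // The child is parsed as a URI, so a bare ':' in the file name is rejected.
      Path p = new Path(parent, "2019-08-26 00:00:00");
      System.out.println(p);
    } catch (IllegalArgumentException e) {
      // Wraps a java.net.URISyntaxException, as shown in the issue description below.
      System.out.println(e.getMessage());
    }
  }
}
{code}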


was (Author: zsxwing):
[~hemanthboyina] I'm not concerned about the behavior of this Path constructor. 
What I hope can be fixed are the following two places:

https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270

Right now they throw an exception when hitting a file name that contains ":".

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-08-26 Thread Shixiong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916118#comment-16916118
 ] 

Shixiong Zhu commented on HDFS-14762:
-

[~hemanthboyina] I'm not concerned about the behavior of this Path constructor. 
What I hope can be fixed are the following two places:

https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270

Right now they throw an exception when hitting a file name that contains ":".

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2038) Add Auditlog for ACL operations

2019-08-26 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2038:

Component/s: Ozone Manager

> Add Auditlog for ACL operations
> ---
>
> Key: HDDS-2038
> URL: https://issues.apache.org/jira/browse/HDDS-2038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
>
> This is to add audit log support for Acl operations in HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2038) Add Auditlog for ACL operations

2019-08-26 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2038:

Labels: audit log4j2  (was: )

> Add Auditlog for ACL operations
> ---
>
> Key: HDDS-2038
> URL: https://issues.apache.org/jira/browse/HDDS-2038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
>
> This is to add audit log support for Acl operations in HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13220) Change lastCheckpointTime to use fsimage mostRecentCheckpointTime

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916108#comment-16916108
 ] 

Hadoop QA commented on HDFS-13220:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFileContextAcl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978612/HDFS-13220.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d79b3ea51346 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6d7f01c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27682/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27682/testReport/ |
| Max. process+thread count | 4366 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-2026) Overlapping chunk region cannot be read concurrently

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916109#comment-16916109
 ] 

Hadoop QA commented on HDDS-2026:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  7m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdds: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
21s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 48s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.container.server.TestSecureContainerServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2769/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-2026 |
| JIRA Patch URL | 

[jira] [Created] (HDDS-2038) Add Auditlog for ACL operations

2019-08-26 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2038:


 Summary: Add Auditlog for ACL operations
 Key: HDDS-2038
 URL: https://issues.apache.org/jira/browse/HDDS-2038
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This is to add audit log support for Acl operations in HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2038) Add Auditlog for ACL operations

2019-08-26 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2038:
---

Assignee: Dinesh Chitlangia

> Add Auditlog for ACL operations
> ---
>
> Key: HDDS-2038
> URL: https://issues.apache.org/jira/browse/HDDS-2038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This is to add audit log support for Acl operations in HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2013) Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2013?focusedWorklogId=301447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301447
 ]

ASF GitHub Bot logged work on HDDS-2013:


Author: ASF GitHub Bot
Created on: 26/Aug/19 19:45
Start Date: 26/Aug/19 19:45
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1345: HDDS-2013. 
Add flag gdprEnabled for BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-525003014
 
 
   > Cool. But why do we add the gdprEnabled flag as a dedicated field in the 
protocol instead of adding a [gdprEnabled]=true metadata?
   @elek Thanks for your thoughts. It makes sense to use metadata hashmap for 
all the properties. So, we can ignore this PR. I will update the jira to 
reflect the same in the summary.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301447)
Time Spent: 1h 20m  (was: 1h 10m)

> Add flag gdprEnabled for BucketInfo in OzoneManager proto
> -
>
> Key: HDDS-2013
> URL: https://issues.apache.org/jira/browse/HDDS-2013
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-26 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916101#comment-16916101
 ] 

CR Hota commented on HDFS-14609:


[~zhangchen] Thanks for the ping and clarifications. It makes sense why the hdfs 
changes are needed; let's still do the hdfs changes in a separate Jira and then 
fix these tests afterwards.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch
>
>
> We worked on router-based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests 
> to fail.
> Changes are needed appropriately in RBF, mainly fixing the broken tests.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2013) Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916098#comment-16916098
 ] 

Dinesh Chitlangia commented on HDDS-2013:
-

Based on comments from [~elek] and discussion with [~anu] and [~bharatviswa], it 
makes more sense to add this flag to the bucket's metadata hashmap instead of a 
dedicated proto field.
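
To make the design choice concrete, below is a minimal Java sketch (not the Ozone client or OzoneManager API; all names are illustrative) of how a plain string-to-string metadata map can carry the gdprEnabled flag without a dedicated protobuf field. One advantage noted in the review thread is that further per-bucket settings can be added later without another protocol change.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a plain metadata map standing in for the bucket
// metadata carried by OzoneManager. The key name "gdprEnabled" follows the
// discussion above; the class and method names here are hypothetical.
public class GdprFlagViaMetadataSketch {

  static final String GDPR_FLAG = "gdprEnabled";

  static Map<String, String> newBucketMetadata(boolean gdprEnabled) {
    Map<String, String> metadata = new HashMap<>();
    if (gdprEnabled) {
      // Stored as an ordinary key/value pair, so no new proto field is needed
      // and future per-bucket settings can be added the same way.
      metadata.put(GDPR_FLAG, "true");
    }
    return metadata;
  }

  static boolean isGdprEnabled(Map<String, String> metadata) {
    return Boolean.parseBoolean(metadata.getOrDefault(GDPR_FLAG, "false"));
  }

  public static void main(String[] args) {
    Map<String, String> metadata = newBucketMetadata(true);
    System.out.println("gdprEnabled = " + isGdprEnabled(metadata)); // prints true
  }
}
{code}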

> Add flag gdprEnabled for BucketInfo in OzoneManager proto
> -
>
> Key: HDDS-2013
> URL: https://issues.apache.org/jira/browse/HDDS-2013
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2013) Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2013?focusedWorklogId=301442=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301442
 ]

ASF GitHub Bot logged work on HDDS-2013:


Author: ASF GitHub Bot
Created on: 26/Aug/19 19:36
Start Date: 26/Aug/19 19:36
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1345: HDDS-2013. Add flag 
gdprEnabled for BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524999802
 
 
   Cool. But why do we add the gdprEnabled flag as a dedicated field in the 
protocol instead of adding a [gdprEnabled]=true metadata? 
   
   (I am not against it, just trying to understand the motivation...)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301442)
Time Spent: 1h 10m  (was: 1h)

> Add flag gdprEnabled for BucketInfo in OzoneManager proto
> -
>
> Key: HDDS-2013
> URL: https://issues.apache.org/jira/browse/HDDS-2013
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14567) If kms-acls is failed to load, and it will never be reload

2019-08-26 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916084#comment-16916084
 ] 

hemanthboyina commented on HDFS-14567:
--

please review the updated test code [~jojochuang], thanks

>  If kms-acls is failed to load, and it will never be reload
> ---
>
> Key: HDFS-14567
> URL: https://issues.apache.org/jira/browse/HDFS-14567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14567.001.patch, HDFS-14567.002.patch, 
> HDFS-14567.patch
>
>
> Scenario: we generate kms-acls through an automation tool. Even though the 
> generation of kms-acls is not yet complete, the system detects a modification 
> of kms-acls and tries to load it.
> Before getting the configuration we modify the last reload time; the code is 
> shown below
> {code:java}
> private Configuration loadACLsFromFile() {
>   LOG.debug("Loading ACLs file");
>   lastReload = System.currentTimeMillis();
>   Configuration conf = KMSConfiguration.getACLsConf();
>   // triggering the resource loading.
>   conf.get(Type.CREATE.getAclConfigKey());
>   return conf;
> }{code}
> If the kms-acls file is written within the next 100ms, the changes will not be 
> loaded, as the condition "newer = f.lastModified() - time > 100" is never met 
> because we modified the last reload time before getting the configuration
> {code:java}
> public static boolean isACLsFileNewer(long time) {
>   boolean newer = false;
>   String confDir = System.getProperty(KMS_CONFIG_DIR);
>   if (confDir != null) {
>     Path confPath = new Path(confDir);
>     if (!confPath.isUriPathAbsolute()) {
>       throw new RuntimeException("System property '" + KMS_CONFIG_DIR +
>           "' must be an absolute path: " + confDir);
>     }
>     File f = new File(confDir, KMS_ACLS_XML);
>     LOG.trace("Checking file {}, modification time is {}, last reload time is"
>         + " {}", f.getPath(), f.lastModified(), time);
>     // at least 100ms newer than time, we do this to ensure the file
>     // has been properly closed/flushed
>     newer = f.lastModified() - time > 100;
>   }
>   return newer;
> } {code}
>  
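
To illustrate the timing problem described above, here is a small self-contained sketch with made-up timestamps (it is not the KMS code); it shows why a kms-acls.xml write that completes shortly after lastReload is recorded never clears the 100ms threshold checked by isACLsFileNewer, so the completed file is never reloaded.

{code:java}
// Made-up timestamps; only the 100ms threshold mirrors the code quoted above.
public class KmsAclReloadRaceSketch {
  public static void main(String[] args) {
    long lastReload = 1_000_000L;           // set at the start of loadACLsFromFile()
    long aclFileModified = lastReload + 80; // kms-acls.xml finishes being written 80ms later

    // isACLsFileNewer(time) requires the file to be at least 100ms newer than 'time'.
    boolean newer = aclFileModified - lastReload > 100;
    System.out.println("reload triggered? " + newer); // false: the change is never picked up
  }
}
{code}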



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=301437=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301437
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 26/Aug/19 19:22
Start Date: 26/Aug/19 19:22
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1353: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1353
 
 
   This is to verify cherry-pick HDDS-1927 from trunk to ozone-0.4.1
   
   …buted by Xiaoyu Yao.
   
   Signed-off-by: Anu Engineer 
   (cherry picked from commit d58eba867234eaac0e229feb990e9dab3912e063)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301437)
Time Spent: 9h 40m  (was: 9.5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor and also address the above comment by moving 
> the common logic to AclUtils.
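
For illustration, a rough sketch of the proposed utility is shown below. It uses a generic element type instead of the real OzoneAcl and treats entries as whole-value add/remove, ignoring the permission-bit merging the real implementation would need, so it only demonstrates the method shape.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch only: a generic stand-in for the proposed OzoneAclUtil. The real
// OzoneAcl type and the bit-level merge of permissions are intentionally
// left out; this shows the proposed method shape, not the final logic.
public final class AclUtilSketch {

  private AclUtilSketch() {
  }

  /** Returns true if the ACL was added, false if an equal entry already exists. */
  public static <T> boolean addAcl(List<T> existingAcls, T newAcl) {
    if (newAcl == null || existingAcls.contains(newAcl)) {
      return false;
    }
    return existingAcls.add(newAcl);
  }

  /** Returns true if an equal ACL entry was found and removed. */
  public static <T> boolean removeAcl(List<T> existingAcls, T acl) {
    return acl != null && existingAcls.remove(acl);
  }

  public static void main(String[] args) {
    List<String> acls = new ArrayList<>();
    System.out.println(addAcl(acls, "user:hadoop:READ"));    // true
    System.out.println(addAcl(acls, "user:hadoop:READ"));    // false, already present
    System.out.println(removeAcl(acls, "user:hadoop:READ")); // true
  }
}
{code}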



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-08-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14497:
---
Status: Patch Available  (was: Reopened)

> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497-addendum.001.patch, HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try to 
> acquire the global read/write lock and have to wait until metasave releases it.
> I propose changing the write lock to a read lock so that read requests can be 
> processed normally. I do not think this changes the information metasave tries 
> to collect if we allow read requests while it runs.
> Actually, we need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially if both streams 
> hold the same file handle or share the same output stream.
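
The idea can be sketched as follows (this is not the FSNamesystem code; lock and method names are simplified): take the read lock so other readers keep making progress, and serialize metaSave itself with a guard flag so concurrent runs cannot share output streams.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch: read lock instead of write lock, plus a guard so that
// only one metaSave runs at a time.
public class MetaSaveSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final AtomicBoolean metaSaveRunning = new AtomicBoolean(false);

  public void metaSave(String filename) {
    if (!metaSaveRunning.compareAndSet(false, true)) {
      return; // another metaSave is already writing a report
    }
    fsLock.readLock().lock();
    try {
      // Dump namespace/block/datanode summaries to 'filename' here; readers
      // holding the read lock are not blocked, writers wait as usual.
      System.out.println("metasave written to " + filename);
    } finally {
      fsLock.readLock().unlock();
      metaSaveRunning.set(false);
    }
  }

  public static void main(String[] args) {
    new MetaSaveSketch().metaSave("metasave.out");
  }
}
{code}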



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14775) Add Timestamp for longest FSN write/read lock held log

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916081#comment-16916081
 ] 

Hadoop QA commented on HDFS-14775:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 
52s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978602/HDFS-14775.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b83c036fb85c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 689d2e6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27680/testReport/ |
| Max. process+thread count | 2709 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27680/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add Timestamp for longest FSN write/read lock held log
> --
>
> Key: HDFS-14775
> URL: 

[jira] [Work logged] (HDDS-2037) Fix hadoop version in pom.ozone.xml

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2037?focusedWorklogId=301433=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301433
 ]

ASF GitHub Bot logged work on HDDS-2037:


Author: ASF GitHub Bot
Created on: 26/Aug/19 19:06
Start Date: 26/Aug/19 19:06
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1351: HDDS-2037. Fix 
hadoop version in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-524988938
 
 
   @elek  are there any known issues if we depend on 3.1.0? like the OzoneFS ? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301433)
Time Spent: 1h  (was: 50m)

> Fix hadoop version in pom.ozone.xml
> ---
>
> Key: HDDS-2037
> URL: https://issues.apache.org/jira/browse/HDDS-2037
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The hadoop version in pom.ozone.xml is pointing to a SNAPSHOT version; this has 
> to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2037) Fix hadoop version in pom.ozone.xml

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2037?focusedWorklogId=301432=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301432
 ]

ASF GitHub Bot logged work on HDDS-2037:


Author: ASF GitHub Bot
Created on: 26/Aug/19 19:05
Start Date: 26/Aug/19 19:05
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1351: HDDS-2037. Fix 
hadoop version in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-524988662
 
 
   Nanda, it might be a good idea to depend on version 3.1.0, @arp7, @jnp can 
we do that in this pull request ? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301432)
Time Spent: 50m  (was: 40m)

> Fix hadoop version in pom.ozone.xml
> ---
>
> Key: HDDS-2037
> URL: https://issues.apache.org/jira/browse/HDDS-2037
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The hadoop version in pom.ozone.xml is pointing to a SNAPSHOT version; this has 
> to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2026) Overlapping chunk region cannot be read concurrently

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2026?focusedWorklogId=301408=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301408
 ]

ASF GitHub Bot logged work on HDDS-2026:


Author: ASF GitHub Bot
Created on: 26/Aug/19 18:28
Start Date: 26/Aug/19 18:28
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1349: HDDS-2026. 
Overlapping chunk region cannot be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524973708
 
 
   LGTM, I will test it to make sure it works as expected. I will also wait for 
others to comment and then commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301408)
Time Spent: 1h  (was: 50m)

> Overlapping chunk region cannot be read concurrently
> 
>
> Key: HDDS-2026
> URL: https://issues.apache.org/jira/browse/HDDS-2026
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HDDS-2026-repro.patch, changes.diff, 
> first-cut-proposed.diff
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Concurrent requests to datanode for the same chunk may result in the 
> following exception in datanode:
> {code}
> java.nio.channels.OverlappingFileLockException
>at java.base/sun.nio.ch.FileLockTable.checkList(FileLockTable.java:229)
>at java.base/sun.nio.ch.FileLockTable.add(FileLockTable.java:123)
>at 
> java.base/sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)
>at 
> java.base/sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)
>at 
> java.base/sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)
>at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:175)
>at 
> org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:213)
>at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:574)
>at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:195)
>at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:271)
>at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> {code}
> It seems this is covered by retry logic, as key read is eventually successful 
> at client side.
> The problem is that:
> bq. File locks are held on behalf of the entire Java virtual machine. They 
> are not suitable for controlling access to a file by multiple threads within 
> the same virtual machine. 
> ([source|https://docs.oracle.com/javase/8/docs/api/java/nio/channels/FileLock.html])
> code ref: 
> [{{ChunkUtils.readData}}|https://github.com/apache/hadoop/blob/c92de8209d1c7da9e7ce607abeecb777c4a52c6a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java#L175]
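
The JVM-wide behavior of FileLock is easy to reproduce outside the datanode. The standalone demo below (not the ChunkUtils code) takes a shared lock on part of a temporary file and then requests a second, overlapping lock from the same JVM; the second request fails with OverlappingFileLockException even though both locks are shared, which is the same symptom seen in the stack trace above.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Standalone demo of the FileLock limitation quoted above: locks are tracked
// per JVM, so an overlapping region cannot be locked twice from the same
// process, regardless of which thread asks.
public class OverlappingLockDemo {
  public static void main(String[] args) throws IOException {
    Path chunk = Files.createTempFile("chunk", ".data");
    Files.write(chunk, new byte[1024]);

    try (FileChannel channel = FileChannel.open(chunk, StandardOpenOption.READ);
         FileLock first = channel.lock(0, 512, true)) {   // shared lock on bytes 0..511
      try {
        channel.lock(256, 512, true);                     // overlaps bytes 256..511
      } catch (OverlappingFileLockException e) {
        System.out.println("second overlapping lock rejected: " + e);
      }
    } finally {
      Files.deleteIfExists(chunk);
    }
  }
}
{code}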



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-08-26 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916030#comment-16916030
 ] 

hemanthboyina commented on HDFS-14778:
--

{code:java}
if (storage == null) {
  storage = storedBlock.findStorageInfo(node);
}
if (storage == null) {
  blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
      blk, dn);
  return;
}
markBlockAsCorrupt(new BlockToMarkCorrupt(reportedBlock, storedBlock,
    blk.getGenerationStamp(), reason, Reason.CORRUPTION_REPORTED),
    storage, node); {code}
Before marking the block as corrupt, we should check whether the storage state 
is failed.
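
A minimal sketch of that guard is below. It uses simplified stand-in types rather than the real DatanodeStorageInfo and BlockManager classes, so treat it only as an illustration of where a check could sit before markBlockAsCorrupt is invoked.

{code:java}
// Simplified stand-in types; illustrates only the proposed "skip when the
// reporting storage has FAILED" check, not the real BlockManager logic.
public class MarkCorruptGuardSketch {

  enum StorageState { NORMAL, READ_ONLY, FAILED }

  static class StorageInfo {
    final StorageState state;

    StorageInfo(StorageState state) {
      this.state = state;
    }
  }

  /** Returns true only when it is still appropriate to mark the block corrupt. */
  static boolean shouldMarkCorrupt(StorageInfo storage) {
    // The proposal above: if the reporting storage is already FAILED, return
    // without adding the block to the corrupt replicas map.
    return storage != null && storage.state != StorageState.FAILED;
  }

  public static void main(String[] args) {
    System.out.println(shouldMarkCorrupt(new StorageInfo(StorageState.NORMAL))); // true
    System.out.println(shouldMarkCorrupt(new StorageInfo(StorageState.FAILED))); // false
  }
}
{code}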

> BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage 
> state is failed
> ---
>
> Key: HDFS-14778
> URL: https://issues.apache.org/jira/browse/HDFS-14778
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
>
> Should not mark the block as corrupt if the storage state is failed



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2037) Fix hadoop version in pom.ozone.xml

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2037?focusedWorklogId=301402=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301402
 ]

ASF GitHub Bot logged work on HDDS-2037:


Author: ASF GitHub Bot
Created on: 26/Aug/19 18:11
Start Date: 26/Aug/19 18:11
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1351: HDDS-2037. 
Fix hadoop version in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#discussion_r317728393
 
 

 ##
 File path: hadoop-hdds/server-scm/pom.xml
 ##
 @@ -100,10 +100,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <groupId>org.bouncycastle</groupId>
       <artifactId>bcprov-jdk15on</artifactId>
     </dependency>
-    <dependency>
-      <groupId>io.dropwizard.metrics</groupId>
-      <artifactId>metrics-core</artifactId>
-    </dependency>
 Review comment:
   Thanks @nandakumar131 for fixing these Maven warnings, too.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301402)
Time Spent: 40m  (was: 0.5h)

> Fix hadoop version in pom.ozone.xml
> ---
>
> Key: HDDS-2037
> URL: https://issues.apache.org/jira/browse/HDDS-2037
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The hadoop version in pom.ozone.xml is pointing to a SNAPSHOT version; this has 
> to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-08-26 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-14778:


 Summary: BlockManager findAndMarkBlockAsCorrupt adds block to the 
map if the Storage state is failed
 Key: HDFS-14778
 URL: https://issues.apache.org/jira/browse/HDFS-14778
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina
Assignee: hemanthboyina


Should not mark the block as corrupt if the storage state is failed



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916020#comment-16916020
 ] 

Hudson commented on HDDS-1975:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17187 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17187/])
HDDS-1975. Implement default acls for bucket/volume/key for OM HA code. 
(github: rev d1aa8596e0e5929ecf0865f4bb008cc1769a3546)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java


> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1975:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1975?focusedWorklogId=301396=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301396
 ]

ASF GitHub Bot logged work on HDDS-1975:


Author: ASF GitHub Bot
Created on: 26/Aug/19 18:05
Start Date: 26/Aug/19 18:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1315: HDDS-1975. 
Implement default acls for bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-524964926
 
 
   Thank You @xiaoyuyao for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301396)
Time Spent: 3.5h  (was: 3h 20m)

> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1975?focusedWorklogId=301397=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301397
 ]

ASF GitHub Bot logged work on HDDS-1975:


Author: ASF GitHub Bot
Created on: 26/Aug/19 18:05
Start Date: 26/Aug/19 18:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1315: 
HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301397)
Time Spent: 3h 40m  (was: 3.5h)

> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1574:
-

Assignee: Siddharth Wagle

> Ensure same datanodes are not a part of multiple pipelines
> --
>
> Key: HDDS-1574
> URL: https://issues.apache.org/jira/browse/HDDS-1574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>
> Details in design doc.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1570) Refactor heartbeat reports to report all the pipelines that are open

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1570:
-

Assignee: Siddharth Wagle

> Refactor heartbeat reports to report all the pipelines that are open
> 
>
> Key: HDDS-1570
> URL: https://issues.apache.org/jira/browse/HDDS-1570
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>
> Presently the pipeline report only reports a single pipeline id.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1570) Refactor heartbeat reports to report all the pipelines that are open

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1570:
-

Assignee: (was: Siddharth Wagle)

> Refactor heartbeat reports to report all the pipelines that are open
> 
>
> Key: HDDS-1570
> URL: https://issues.apache.org/jira/browse/HDDS-1570
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Siddharth Wagle
>Priority: Major
>
> Presently the pipeline report only reports a single pipeline id.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1574:
--
Summary: Ensure same datanodes are not a part of multiple pipelines  (was: 
ensure same datanodes are not a part of multiple pipelines)

> Ensure same datanodes are not a part of multiple pipelines
> --
>
> Key: HDDS-1574
> URL: https://issues.apache.org/jira/browse/HDDS-1574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> Details in design doc.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1574) ensure same datanodes are not a part of multiple pipelines

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1574:
-

Assignee: (was: Siddharth Wagle)

> ensure same datanodes are not a part of multiple pipelines
> --
>
> Key: HDDS-1574
> URL: https://issues.apache.org/jira/browse/HDDS-1574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> Details in design doc.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1572) Implement a Pipeline scrubber to maintain healthy number of pipelines in a cluster

2019-08-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1572:
-

Assignee: (was: Siddharth Wagle)

> Implement a Pipeline scrubber to maintain healthy number of pipelines in a 
> cluster
> --
>
> Key: HDDS-1572
> URL: https://issues.apache.org/jira/browse/HDDS-1572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> The design document talks about initial requirements for the pipeline 
> scrubber.
> - Maintain a datastructure for datanodes violating the pipeline membership 
> soft upper bound.
> - Scan the pipelines that the nodes are a part of to select candidates for 
> teardown.
> - Scan pipelines that do not have open containers currently in use and 
> whose datanodes are in violation.
> - Schedule tear down operation if a candidate pipeline is found.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2021) Upgrade Guava library to v26 in hdds project

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2021?focusedWorklogId=301385=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301385
 ]

ASF GitHub Bot logged work on HDDS-2021:


Author: ASF GitHub Bot
Created on: 26/Aug/19 17:47
Start Date: 26/Aug/19 17:47
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1340: HDDS-2021. 
Upgrade Guava library to v26 in hdds project
URL: https://github.com/apache/hadoop/pull/1340#issuecomment-524956616
 
 
   > @dineshchitlangia I think we have a build issue, can you please check?
   
   @anuengineer After rebasing to trunk, I see the problem is bigger than I 
originally presumed. The dependency convergence issue is now affecting multiple 
modules in hadoop, yarn, and mr.
   I will spend some more time on this to see what is the best way to fix it. 
Right now, a no-brainer approach is to update the version across all modules; 
however, I am sure it will also lead to a lot of related code changes where 
methods may have been changed or removed in the new version.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301385)
Time Spent: 1h  (was: 50m)

> Upgrade Guava library to v26 in hdds project
> 
>
> Key: HDDS-2021
> URL: https://issues.apache.org/jira/browse/HDDS-2021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Upgrade Guava library to v26 in hdds project



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2021) Upgrade Guava library to v26 in hdds project

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2021?focusedWorklogId=301381=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301381
 ]

ASF GitHub Bot logged work on HDDS-2021:


Author: ASF GitHub Bot
Created on: 26/Aug/19 17:44
Start Date: 26/Aug/19 17:44
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1340: HDDS-2021. 
Upgrade Guava library to v26 in hdds project
URL: https://github.com/apache/hadoop/pull/1340#issuecomment-524956616
 
 
   > @dineshchitlangia I think we have a build issue, can you please check?
   
   @anuengineer After rebasing to trunk, I see the problem is bigger than I 
originally presumed. The dependency convergence issue is now affecting multiple 
modules in hadoop, yarn, and mr.
   I will spend some more time on this to see what is the best way to fix it. 
Right now, a no-brainer approach is to update the version across all modules; 
however, I am sure it will also lead to a lot of related code changes where 
methods may have changed in the new version.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301381)
Time Spent: 50m  (was: 40m)

> Upgrade Guava library to v26 in hdds project
> 
>
> Key: HDDS-2021
> URL: https://issues.apache.org/jira/browse/HDDS-2021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Upgrade Guava library to v26 in hdds project



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13270) RBF: Router audit logger

2019-08-26 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915994#comment-16915994
 ] 

hemanthboyina commented on HDFS-13270:
--

[~maobaolong] [~elgoiri] [~jojochuang] [~surendrasingh]  please check the 
recent patch

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-13270.001.patch, HDFS-13270.002.patch, 
> HDFS-13270.003.patch
>
>
> We can use a router audit logger to log the client info and cmd, because the 
> FSNamesystem#Auditlogger log thinks the clients are all from the router.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2013) Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2013?focusedWorklogId=301377=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301377
 ]

ASF GitHub Bot logged work on HDDS-2013:


Author: ASF GitHub Bot
Created on: 26/Aug/19 17:37
Start Date: 26/Aug/19 17:37
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1345: HDDS-2013. 
Add flag gdprEnabled for BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524953767
 
 
   > We can also use the existing metadata hashmap to store these kind of data. 
It would make it more easy to add more features without changing the protocol 
later...
   
   @elek That is absolutely correct. We intend to use the same for storing the 
symmetric encryption key info. Glad that we are on the same page even before it 
started!! 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301377)
Time Spent: 1h  (was: 50m)

> Add flag gdprEnabled for BucketInfo in OzoneManager proto
> -
>
> Key: HDDS-2013
> URL: https://issues.apache.org/jira/browse/HDDS-2013
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


