[jira] [Commented] (HDFS-14303) check block directory logic not correct when there is only a meta file, prints meaningless warn log

2019-06-04 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855839#comment-16855839
 ] 

He Xiaoqiao commented on HDFS-14303:


[~iamgd67] Thanks for the ping. Some minor comments about 
#testScanDirectoryStructureWarn:
a. it is not necessary to check the rootLogger;
b. please check the report from [~hadoopqa] and fix the checkstyle issues;
c. is there any other way to check the result rather than reading the log?
FYI, thanks.
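
For (a), a rough sketch of one possible approach (GenericTestUtils.LogCapturer is 
assumed to be available in the test scope, and the trigger call is illustrative, 
not the actual test code): capture only the DirectoryScanner logger instead of 
attaching an appender to the root logger.
{code:java}
// Rough sketch only: capture the DirectoryScanner log instead of the root logger.
// Assumes org.apache.hadoop.test.GenericTestUtils.LogCapturer and that
// scanner.reconcile() is what the test uses to trigger the directory check.
GenericTestUtils.LogCapturer logs = GenericTestUtils.LogCapturer
    .captureLogs(LogFactory.getLog(DirectoryScanner.class));
try {
  scanner.reconcile();
  assertFalse("meta-only block should not trigger a layout-upgrade warning",
      logs.getOutput().contains("has to be upgraded to block ID-based layout"));
} finally {
  logs.stopCapturing();
}
{code}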

> check block directory logic not correct when there is only a meta file, prints 
> meaningless warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> check block directory logic is not correct when there is only a meta file; it 
> prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14527) Stop all DataNodes may result in NN terminate

2019-06-04 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855822#comment-16855822
 ] 

He Xiaoqiao commented on HDFS-14527:


Thanks [~elgoiri] for your detailed reviews. I uploaded [^HDFS-14527.003.patch] to 
address the comments. Pending Jenkins. More reviews are welcome. Thanks again.

> Stop all DataNodes may result in NN terminate
> -
>
> Key: HDFS-14527
> URL: https://issues.apache.org/jira/browse/HDFS-14527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14527.001.patch, HDFS-14527.002.patch, 
> HDFS-14527.003.patch
>
>
> If we stop all DataNodes of the cluster, BlockPlacementPolicyDefault#chooseTarget 
> may hit an ArithmeticException when calling #getMaxNodesPerRack; the runtime 
> exception propagates to BlockManager's ReplicationMonitor thread and then 
> terminates the NN.
> The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold the 
> global lock, so if all DataNodes die between 
> {{clusterMap.getNumOfLeaves()}} and {{getMaxNodesPerRack}}, it hits an 
> {{ArithmeticException}} while invoking {{getMaxNodesPerRack}}.
> {code:java}
>   private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
>       Node writer,
>       List<DatanodeStorageInfo> chosenStorage,
>       boolean returnChosenNodes,
>       Set<Node> excludedNodes,
>       long blocksize,
>       final BlockStoragePolicy storagePolicy,
>       EnumSet<AddBlockFlag> addBlockFlags,
>       EnumMap<StorageType, Integer> sTypes) {
>     if (numOfReplicas == 0 || clusterMap.getNumOfLeaves() == 0) {
>       return DatanodeStorageInfo.EMPTY_ARRAY;
>     }
>     ..
>     int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
>     ..
>   }
> {code}
> Some detailed logs are shown below.
> {code:java}
> 2019-05-31 12:29:21,803 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
> at java.lang.Thread.run(Thread.java:745)
> 2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> To be honest, this is not a serious bug and is not easy to reproduce, since if we 
> stop all DataNodes and keep only the NameNode alive, HDFS cannot offer service 
> normally anyway and we can only browse the directory tree. It may be just a corner case.
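
A minimal sketch of the kind of guard implied above (illustrative only, not the 
attached patch; it reuses the names from the snippet earlier in the description):
{code:java}
// Illustrative sketch only, not HDFS-14527.00x.patch: re-check the rack count
// inside chooseTarget(), because no global lock protects the window between
// clusterMap.getNumOfLeaves() and getMaxNodesPerRack().
if (numOfReplicas == 0 || clusterMap.getNumOfLeaves() == 0
    || clusterMap.getNumOfRacks() == 0) {
  // All DataNodes (and therefore all racks) may have died in the meantime;
  // returning early avoids the divide-by-zero inside getMaxNodesPerRack().
  return DatanodeStorageInfo.EMPTY_ARRAY;
}
int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
{code}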



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14527) Stop all DataNodes may result in NN terminate

2019-06-04 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14527:
---
Attachment: HDFS-14527.003.patch

> Stop all DataNodes may result in NN terminate
> -
>
> Key: HDFS-14527
> URL: https://issues.apache.org/jira/browse/HDFS-14527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14527.001.patch, HDFS-14527.002.patch, 
> HDFS-14527.003.patch
>
>
> If we stop all DataNodes of the cluster, BlockPlacementPolicyDefault#chooseTarget 
> may hit an ArithmeticException when calling #getMaxNodesPerRack; the runtime 
> exception propagates to BlockManager's ReplicationMonitor thread and then 
> terminates the NN.
> The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold the 
> global lock, so if all DataNodes die between 
> {{clusterMap.getNumOfLeaves()}} and {{getMaxNodesPerRack}}, it hits an 
> {{ArithmeticException}} while invoking {{getMaxNodesPerRack}}.
> {code:java}
>   private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
>       Node writer,
>       List<DatanodeStorageInfo> chosenStorage,
>       boolean returnChosenNodes,
>       Set<Node> excludedNodes,
>       long blocksize,
>       final BlockStoragePolicy storagePolicy,
>       EnumSet<AddBlockFlag> addBlockFlags,
>       EnumMap<StorageType, Integer> sTypes) {
>     if (numOfReplicas == 0 || clusterMap.getNumOfLeaves() == 0) {
>       return DatanodeStorageInfo.EMPTY_ARRAY;
>     }
>     ..
>     int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
>     ..
>   }
> {code}
> Some detailed logs are shown below.
> {code:java}
> 2019-05-31 12:29:21,803 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
> at java.lang.Thread.run(Thread.java:745)
> 2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> To be honest, this is not a serious bug and is not easy to reproduce, since if we 
> stop all DataNodes and keep only the NameNode alive, HDFS cannot offer service 
> normally anyway and we can only browse the directory tree. It may be just a corner case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1637) Fix random test failure TestSCMContainerPlacementRackAware

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1637?focusedWorklogId=253790=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253790
 ]

ASF GitHub Bot logged work on HDDS-1637:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:57
Start Date: 04/Jun/19 14:57
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #904: HDDS-1637. 
Fix random test failure TestSCMContainerPlacementRackAware.
URL: https://github.com/apache/hadoop/pull/904#discussion_r290342177
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
 ##
 @@ -82,7 +82,7 @@ public void setup() {
 when(nodeManager.getNodeStat(anyObject()))
 .thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 0L, 100L));
 when(nodeManager.getNodeStat(datanodes.get(2)))
-.thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 90L, 10L));
+.thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 90L, 20L));
 
 Review comment:
   Thanks @ChenSammi for the patch. I had thought about a similar test-only fix 
yesterday. 
   
   But this may hide a code bug when we deal with mixed nodes where some have 
enough space while others don't. If chooseNodes() keeps choosing nodes without 
enough capacity 3 times (the default), we end up with failures even though 
other nodes have enough space. Here are two proposals:
   
   1. Filter nodes without enough capacity out of the candidate list, so that the 
placement algorithm does not need to deal with them.
   
   2. Handle capacity in the placement algorithm by adding any node detected 
without enough capacity to the exclude list, so that the algorithm won't 
choose it in the next round. 
   
   I prefer 1 because it seems cleaner and more efficient than 2. I'm OK with 2 
as well if you feel it is easier to adjust the placement algorithm. 
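   
   As a rough sketch of option 1 (the helper name and fields here are 
illustrative assumptions, not the actual SCMContainerPlacementRackAware code), 
the candidate list could be pruned before the placement loop runs:
{code:java}
// Rough sketch of proposal 1 (hypothetical helper, not the real SCM code):
// drop candidates that cannot hold the requested size before the rack-aware
// placement loop, so retries never land on a node that is known to be full.
private List<DatanodeDetails> filterByCapacity(List<DatanodeDetails> candidates,
    long requiredSizeBytes) {
  List<DatanodeDetails> fit = new ArrayList<>();
  for (DatanodeDetails dn : candidates) {
    SCMNodeMetric metric = nodeManager.getNodeStat(dn);
    if (metric != null
        && metric.get().getRemaining().get() >= requiredSizeBytes) {
      fit.add(dn);
    }
  }
  return fit;
}
{code}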
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253790)
Time Spent: 0.5h  (was: 20m)

> Fix random test failure TestSCMContainerPlacementRackAware
> --
>
> Key: HDDS-1637
> URL: https://issues.apache.org/jira/browse/HDDS-1637
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This has been seen randomly in latest trunk CI, e.g., 
> [https://ci.anzix.net/job/ozone/16980/testReport/org.apache.hadoop.hdds.scm.container.placement.algorithms/TestSCMContainerPlacementRackAware/testFallback/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?focusedWorklogId=253787=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253787
 ]

ASF GitHub Bot logged work on HDDS-1641:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:56
Start Date: 04/Jun/19 14:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #906: HDDS-1641. Csi 
server fails because transitive Netty dependencies
URL: https://github.com/apache/hadoop/pull/906#issuecomment-498708079
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 523 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1532 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 487 | the patch passed |
   | +1 | compile | 307 | the patch passed |
   | +1 | javac | 307 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 236 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1700 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 5587 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-906/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/906 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 9b0172daebe3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-906/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-906/1/testReport/ |
   | Max. process+thread count | 4576 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/csi U: hadoop-ozone/csi |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-906/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253787)
Time Spent: 0.5h  (was: 20m)

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The CSI server can't be started because of a ClassNotFoundException.
> It turned out that with the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.

[jira] [Work logged] (HDDS-1635) Maintain docker entrypoint and envtoconf inside ozone project

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1635?focusedWorklogId=253788=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253788
 ]

ASF GitHub Bot logged work on HDDS-1635:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:56
Start Date: 04/Jun/19 14:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #894: HDDS-1635. 
Maintain docker entrypoint and envtoconf inside ozone project
URL: https://github.com/apache/hadoop/pull/894#issuecomment-498708093
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 328 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 800 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 203 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 331 | the patch passed |
   | +1 | javac | 331 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 2 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 3 | There were no new pylint issues. |
   | +1 | shellcheck | 2 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 201 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 245 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1081 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 74 | The patch does not generate ASF License warnings. |
   | | | 5347 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/894 |
   | Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit 
compile javac javadoc mvninstall shadedclient pylint |
   | uname | Linux 53b193ccd49c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/2/artifact/out/patch-pylint-stderr.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/2/testReport/ |
   | Max. process+thread count | 4667 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253788)
Time Spent: 1h 50m  (was: 1h 40m)

> Maintain docker entrypoint and envtoconf inside ozone project
> -
>
> Key: HDDS-1635
> URL: https://issues.apache.org/jira/browse/HDDS-1635
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> During 

[jira] [Work logged] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?focusedWorklogId=253778=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253778
 ]

ASF GitHub Bot logged work on HDDS-1508:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:44
Start Date: 04/Jun/19 14:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #905: HDDS-1508. 
Provide example k8s deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905#issuecomment-498702880
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 520 | trunk passed |
   | +1 | compile | 317 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 755 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 196 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 308 | the patch passed |
   | +1 | javac | 308 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 629 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 221 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1191 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 5122 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/905 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux 8cf0f89bb600 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/2/testReport/ |
   | Max. process+thread count | 5232 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253778)
Time Spent: 0.5h  (was: 20m)

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was 

[jira] [Created] (HDFS-14543) RBF: RouterRpcClient shouldn't retry when the remoteException is RetryException

2019-06-04 Thread xuzq (JIRA)
xuzq created HDFS-14543:
---

 Summary: RBF: RouterRpcClient shouldn't retry when the 
remoteException is RetryException
 Key: HDFS-14543
 URL: https://issues.apache.org/jira/browse/HDFS-14543
 Project: Hadoop HDFS
  Issue Type: Task
  Components: rbf
Reporter: xuzq
Assignee: xuzq


Currently RouterRpcClient retries when the remote exception is a 
RetryException.

It will occupy one handler to retry 15 times if the remote exception is 
always a RetryException.

Although there is no sleep between retries, it may still cost other 
resources.

So RouterRpcClient should return the RetryException to the client if the NameNode 
returns a RetryException, and the sleep should happen on the client side.

After sleeping for some time, the client can request the same Router again.
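
A hypothetical sketch of the proposed behaviour (the method names and the use of 
RetriableException here are illustrative assumptions, not the actual 
RouterRpcClient code):
{code:java}
// Hypothetical sketch, not the actual RouterRpcClient code: when the remote
// exception is the NameNode's retry signal, rethrow it to the client instead
// of retrying locally in the Router handler thread.
try {
  return invokeMethod(ugi, namenode, method, params);
} catch (RemoteException re) {
  if (re.unwrapRemoteException() instanceof RetriableException) {
    throw re;  // client sleeps/backs off and may retry via another Router
  }
  throw handleFailover(re);  // hypothetical placeholder for the existing retry path
}
{code}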

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855761#comment-16855761
 ] 

Hudson commented on HDDS-1641:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16663 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16663/])
HDDS-1641. Csi server fails because transitive Netty dependencies (#906) 
(aengineer: rev 50909a7aa0ac44eff94cf4c966920b9fb86f6974)
* (edit) hadoop-ozone/csi/pom.xml


> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The CSI server can't be started because of a ClassNotFoundException.
> It turned out that with the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1637) Fix random test failure TestSCMContainerPlacementRackAware

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1637?focusedWorklogId=253774=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253774
 ]

ASF GitHub Bot logged work on HDDS-1637:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:30
Start Date: 04/Jun/19 14:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #904: HDDS-1637. Fix 
random test failure TestSCMContainerPlacementRackAware.
URL: https://github.com/apache/hadoop/pull/904#issuecomment-498696767
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 286 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 339 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 539 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 480 | the patch passed |
   | +1 | compile | 285 | the patch passed |
   | +1 | javac | 285 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 738 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 542 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 232 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1098 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6473 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-904/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/904 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 183736aaf9b3 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-904/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-904/1/testReport/ |
   | Max. process+thread count | 4198 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-904/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253774)
Time Spent: 20m  (was: 10m)

> Fix random test failure TestSCMContainerPlacementRackAware
> --
>
> Key: HDDS-1637
> URL: https://issues.apache.org/jira/browse/HDDS-1637
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This has been seen randomly in latest trunk CI, e.g., 
> [https://ci.anzix.net/job/ozone/16980/testReport/org.apache.hadoop.hdds.scm.container.placement.algorithms/TestSCMContainerPlacementRackAware/testFallback/]
>  



--
This message was sent by Atlassian 

[jira] [Commented] (HDFS-14513) FSImage which is saving should be clean while NameNode shutdown

2019-06-04 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855748#comment-16855748
 ] 

He Xiaoqiao commented on HDFS-14513:


Sorry, I am not familiar with how JUnit interacts with ShutdownHooks. For the unit 
test in [^HDFS-14513.004.patch], the full log expected before the patch is shown 
below. So does the Assert.fail in the ShutdownHook run after the unit test has 
finished, and therefore cannot fail the unit test?
{code:java}
2019-06-04 22:16:56,160 [Thread-10] WARN  util.ShutdownHookManager 
(ShutdownHookManager.java:executeShutdown(131)) - ShutdownHook 
'TestSaveNamespace$$Lambda$31/754186396' failed, 
java.util.concurrent.ExecutionException: java.lang.AssertionError: FSImageSaver 
checkpoint not clean by ShutdownHook.
java.util.concurrent.ExecutionException: java.lang.AssertionError: FSImageSaver 
checkpoint not clean by ShutdownHook.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
Caused by: java.lang.AssertionError: FSImageSaver checkpoint not clean by 
ShutdownHook.
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace.lambda$testCleanCheckpointWhenShutdown$0(TestSaveNamespace.java:879)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Disconnected from the target VM, address: '127.0.0.1:59691', transport: 'socket'
{code}

> FSImage which is saving should be clean while NameNode shutdown
> ---
>
> Key: HDFS-14513
> URL: https://issues.apache.org/jira/browse/HDFS-14513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14513.001.patch, HDFS-14513.002.patch, 
> HDFS-14513.003.patch, HDFS-14513.004.patch
>
>
> Checkpointer/FSImageSaver are regular tasks that dump the NameNode metadata to 
> disk, at most once per hour by default. If they receive some command (e.g. transition to 
> active in HA mode) they cancel the checkpoint and delete the tmp files using 
> {{FSImage#deleteCancelledCheckpoint}}. However, if the NameNode shuts down during a 
> checkpoint, the tmp files are never cleaned up. 
> Consider a namespace with 500m inodes+blocks: one checkpoint can take 5~10 minutes to 
> finish, and if we shut down the NameNode during checkpointing, the fsimage checkpoint 
> file will never be cleaned; after a long time there could be many useless 
> checkpoint files. So I propose that we add a shutdown hook to clean them up.
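
A minimal sketch of the proposed hook (illustrative only, not the attached patch; 
the state check and priority constant are hypothetical):
{code:java}
// Illustrative sketch only: register a shutdown hook so that an in-progress
// checkpoint is cleaned up if the NameNode exits while FSImageSaver is writing.
ShutdownHookManager.get().addShutdownHook(() -> {
  if (saveNamespaceInProgress()) {             // hypothetical state check
    // Reuse the existing cleanup path mentioned above.
    deleteCancelledCheckpoint(checkpointTxId); // FSImage#deleteCancelledCheckpoint
  }
}, SHUTDOWN_HOOK_PRIORITY);                    // hypothetical priority constant
{code}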



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?focusedWorklogId=253762=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253762
 ]

ASF GitHub Bot logged work on HDDS-1508:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:20
Start Date: 04/Jun/19 14:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #905: HDDS-1508. 
Provide example k8s deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905#issuecomment-498692587
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 534 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 718 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 480 | the patch passed |
   | +1 | compile | 298 | the patch passed |
   | +1 | javac | 298 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 649 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 144 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 3990 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/905 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux b9c62d2b3631 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/1/testReport/ |
   | Max. process+thread count | 1295 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-905/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253762)
Time Spent: 20m  (was: 10m)

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1641:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for fixing this issue. Committed to the trunk

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The CSI server can't be started because of a ClassNotFoundException.
> It turned out that with the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?focusedWorklogId=253761=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253761
 ]

ASF GitHub Bot logged work on HDDS-1641:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:18
Start Date: 04/Jun/19 14:18
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #906: HDDS-1641. 
Csi server fails because transitive Netty dependencies
URL: https://github.com/apache/hadoop/pull/906
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253761)
Time Spent: 20m  (was: 10m)

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The CSI server can't be started because of a ClassNotFoundException.
> It turned out that with the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-06-04 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855738#comment-16855738
 ] 

He Xiaoqiao commented on HDFS-12733:


Thanks [~brahmareddy] for your suggestion.
{quote}The problem might be when Shared storage crashed we can't do 
"INITIALIZESHAREDEDITS" incase user dn't want to use this( as might not care 
about performance,he might care reliability). By considering I have given the 
patch.{quote}
 [^HDFS-12733-003.patch] is a good solution and it is compatible with the current 
implementation via a single config option.
For the case you mentioned above, IIRC, the NameNode only reads edits from the shared 
storage and replays them in HA mode, even if the shared storage crashed, unless we 
update the logic around {{FSEditLog#initSharedJournalsForRead}} in HA mode. Anyway, I am 
confused about configuring only local storage in HA mode. Please correct me if 
something is wrong. Thanks again.

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch
>
>
> As of now, edits are written to both local and shared locations, which is 
> redundant, and the local edits are never used in an HA setup.
> Disabling local edits gives a small performance improvement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1490) Support configurable container placement policy through "ozone.scm.container.placement.classname"

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?focusedWorklogId=253751=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253751
 ]

ASF GitHub Bot logged work on HDDS-1490:


Author: ASF GitHub Bot
Created on: 04/Jun/19 14:00
Start Date: 04/Jun/19 14:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #903: HDDS-1490. 
Support configurable container placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903#issuecomment-498684061
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 279 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 527 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 505 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-hdds: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 9 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 625 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 71 | hadoop-hdds generated 9 new + 14 unchanged - 0 fixed = 
23 total (was 14) |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 240 | hadoop-hdds in the patch passed. |
   | -1 | unit | 133 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5253 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.bucket.TestOMBucketDeleteRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerStateMachine |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/903 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux f128fa432a0c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253751)
 

[jira] [Work logged] (HDDS-1635) Maintain docker entrypoint and envtoconf inside ozone project

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1635?focusedWorklogId=253740=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253740
 ]

ASF GitHub Bot logged work on HDDS-1635:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:33
Start Date: 04/Jun/19 13:33
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #894: HDDS-1635. 
Maintain docker entrypoint and envtoconf inside ozone project
URL: https://github.com/apache/hadoop/pull/894#discussion_r290297327
 
 

 ##
 File path: hadoop-ozone/dist/src/main/dockerbin/entrypoint.sh
 ##
 @@ -0,0 +1,138 @@
+#!/usr/bin/env bash
+##
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+set -e
+
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+
+if [ -n "$SLEEP_SECONDS" ]; then
+   echo "Sleeping for $SLEEP_SECONDS seconds"
+   sleep $SLEEP_SECONDS
 
 Review comment:
   Thanks for the comment @eyanghwx.
   
   Usually I prefer to avoid significant changes during a move, because they cannot 
be separated from the copy/move:
* It can be harder to follow what has been changed
* And it can be harder to revert in case of any problem
   
   I created an issue to track this problem:
   
   https://issues.apache.org/jira/browse/HDDS-1642
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253740)
Time Spent: 1h 40m  (was: 1.5h)

> Maintain docker entrypoint and envtoconf inside ozone project
> -
>
> Key: HDDS-1635
> URL: https://issues.apache.org/jira/browse/HDDS-1635
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> During an offline discussion with [~eyang] and [~arp], Eric suggested maintaining 
> the source of the docker-specific start scripts inside the main ozone 
> branch (trunk) instead of the branch of the docker image.
> With this approach the ozone-runner image can be a very lightweight image and 
> the entrypoint logic can be versioned together with ozone itself.
> Another use case is a container creation script. Recently we 
> [documented|https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Docker+images]
>  that hadoop-runner/ozone-runner/ozone images are not for production (for 
> example because they contain development tools).
> We can create a helper tool (similar to what Spark provides) to create Ozone 
> container images from any production-ready base image. But this tool requires 
> the existence of the scripts inside the distribution.
> (ps: I think sooner or later the functionality of envtoconf.py can be added 
> to the OzoneConfiguration java class and we can parse the configuration 
> values directly from environment variables.)
> In this patch I copied the required scripts to the ozone source tree, and the 
> new ozone-runner image (HDDS-1634) is designed to use them from this specific 
> location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1642) Avoid shell references relative to the current script path

2019-06-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1642:
--

 Summary: Avoid shell references relative to the current script path
 Key: HDDS-1642
 URL: https://issues.apache.org/jira/browse/HDDS-1642
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Eric Yang


This is based on the review comment from [~eyang]:

bq. You might need pwd -P to resolve symlinks. I don't recommend using the script 
location to decide where the binaries are supposed to be, because someone else can 
make a newbie mistake and refactor your script in a way that invalidates the 
original coding intent. See this blog to explain the right way to get the 
directory of a bash script. This is the reason that I used OZONE_HOME as the base 
reference frequently.
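
For illustration, a minimal sketch of the pattern being referenced (the variable name is only an example, not code from the repository):

{code:bash}
#!/usr/bin/env bash
# Resolve the directory that contains this script. "pwd -P" prints the
# physical path, i.e. with symlinked directory components resolved.
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd -P )"
echo "Script directory: ${SCRIPT_DIR}"
{code}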



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1635) Maintain docker entrypoint and envtoconf inside ozone project

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1635?focusedWorklogId=253737=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253737
 ]

ASF GitHub Bot logged work on HDDS-1635:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:28
Start Date: 04/Jun/19 13:28
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #894: HDDS-1635. Maintain 
docker entrypoint and envtoconf inside ozone project
URL: https://github.com/apache/hadoop/pull/894#issuecomment-498671598
 
 
   > ShellChecks?
   
   Violations are fixed in the last commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253737)
Time Spent: 1.5h  (was: 1h 20m)

> Maintain docker entrypoint and envtoconf inside ozone project
> -
>
> Key: HDDS-1635
> URL: https://issues.apache.org/jira/browse/HDDS-1635
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> During an offline discussion with [~eyang] and [~arp], Eric suggested 
> maintaining the source of the docker-specific start scripts inside the main 
> ozone branch (trunk) instead of in the branch of the docker image.
> With this approach the ozone-runner image can be a very lightweight image and 
> the entrypoint logic can be versioned together with Ozone itself.
> Another use case is a container creation script. Recently we 
> [documented|https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Docker+images]
>  that hadoop-runner/ozone-runner/ozone images are not for production (for 
> example because they contain development tools).
> We can create a helper tool (similar to what Spark provides) to create Ozone 
> container images from any production-ready base image. But this tool requires 
> the existence of the scripts inside the distribution.
> (ps: I think sooner or later the functionality of envtoconf.py can be added 
> to the OzoneConfiguration java class and we can parse the configuration 
> values directly from environment variables.)
> In this patch I copied the required scripts to the ozone source tree, and the 
> new ozone-runner image (HDDS-1634) is designed to use them from this specific 
> location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?focusedWorklogId=253735=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253735
 ]

ASF GitHub Bot logged work on HDDS-1641:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:22
Start Date: 04/Jun/19 13:22
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #906: HDDS-1641. Csi 
server fails because transitive Netty dependencies
URL: https://github.com/apache/hadoop/pull/906
 
 
   CSI server can't be started because of a ClassNotFoundException.
   
   It turned out that by using the new configuration API we got old Netty jar 
files as transitive dependencies (hdds-configuration depends on hadoop-common, 
and hadoop-common depends on the world).
   
   We should exclude all the old Netty versions from the classpath of the CSI 
server.
   
   See: https://issues.apache.org/jira/browse/HDDS-1641
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253735)
Time Spent: 10m
Remaining Estimate: 0h

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CSI server can't be started because of a ClassNotFoundException.
> It turned out that by using the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1641:
-
Labels: pull-request-available  (was: )

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> CSI server can't be started because of a ClassNotFoundException.
> It turned out that by using the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1641:
---
Status: Patch Available  (was: Open)

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> CSI server can't be started because of a ClassNotFoundException.
> It turned out that by using the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1641) Csi server fails because transitive Netty dependencies

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1641:
---
Summary: Csi server fails because transitive Netty dependencies  (was: Csi 
server fails because transitive Netty dependency)

> Csi server fails because transitive Netty dependencies
> --
>
> Key: HDDS-1641
> URL: https://issues.apache.org/jira/browse/HDDS-1641
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> CSI server can't be started because of a ClassNotFoundException.
> It turned out that by using the new configuration API we got old Netty jar 
> files as transitive dependencies (hdds-configuration depends on 
> hadoop-common, and hadoop-common depends on the world).
> We should exclude all the old Netty versions from the classpath of the CSI 
> server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1641) Csi server fails because transitive Netty dependency

2019-06-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1641:
--

 Summary: Csi server fails because transitive Netty dependency
 Key: HDDS-1641
 URL: https://issues.apache.org/jira/browse/HDDS-1641
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


CSI server can't be started because of a ClassNotFoundException.

It turned out that by using the new configuration API we got old Netty jar 
files as transitive dependencies (hdds-configuration depends on hadoop-common, 
and hadoop-common depends on the world).

We should exclude all the old Netty versions from the classpath of the CSI 
server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1508:
---
Target Version/s: 0.4.1  (was: 0.5.0)

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1508:
---
Status: Patch Available  (was: Open)

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=253733=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253733
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:17
Start Date: 04/Jun/19 13:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #806: HDDS-1224. 
Restructure code to validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#issuecomment-498667387
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 524 | trunk passed |
   | +1 | compile | 304 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 896 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 325 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 516 | trunk passed |
   | -0 | patch | 395 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 484 | the patch passed |
   | +1 | compile | 287 | the patch passed |
   | +1 | javac | 287 | the patch passed |
   | -0 | checkstyle | 48 | hadoop-hdds: The patch generated 14 new + 0 
unchanged - 0 fixed = 14 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 702 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 71 | hadoop-hdds generated 5 new + 14 unchanged - 0 fixed = 
19 total (was 14) |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 230 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1032 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 69 | The patch does not generate ASF License warnings. |
   | | | 6377 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/806 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 598d581e8274 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/testReport/ |
   | Max. process+thread count | 4504 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253733)
Time Spent: 5.5h  (was: 5h 20m)

> Restructure code to validate the response from server in the Read path
> --
>
> 

[jira] [Work logged] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?focusedWorklogId=253729=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253729
 ]

ASF GitHub Bot logged work on HDDS-1508:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:12
Start Date: 04/Jun/19 13:12
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #905: HDDS-1508. Provide 
example k8s deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905
 
 
   Issue HDDS-1382 introduced a new internal CSI server. We should provide 
example deployment files to make it easy to deploy it to any kubernetes cluster.
   
   See: https://issues.apache.org/jira/browse/HDDS-1508
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253729)
Time Spent: 10m
Remaining Estimate: 0h

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1508:
-
Labels: pull-request-available  (was: )

> Provide example k8s deployment files for the new CSI server
> ---
>
> Key: HDDS-1508
> URL: https://issues.apache.org/jira/browse/HDDS-1508
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> Issue HDDS-1382 introduced a new internal CSI server. We should provide 
> example deployment files to make it easy to deploy it to any kubernetes 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?focusedWorklogId=253728=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253728
 ]

ASF GitHub Bot logged work on HDDS-1628:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:11
Start Date: 04/Jun/19 13:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #902: HDDS-1628. Fix 
the execution and return code of smoketest executor shell script
URL: https://github.com/apache/hadoop/pull/902#issuecomment-498665024
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1390 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 135 | Maven dependency ordering for branch |
   | +1 | mvninstall | 727 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1115 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 617 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | shellcheck | 0 | The patch generated 1 new + 1 unchanged - 0 fixed = 
2 total (was 1) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 852 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 121 | hadoop-hdds in the patch passed. |
   | +1 | unit | 254 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 70 | The patch does not generate ASF License warnings. |
   | | | 5588 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-902/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/902 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux fdec2b2a02dd 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-902/1/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-902/1/testReport/ |
   | Max. process+thread count | 294 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-902/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253728)
Time Spent: 0.5h  (was: 20m)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates 

[jira] [Work logged] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?focusedWorklogId=253727=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253727
 ]

ASF GitHub Bot logged work on HDDS-1628:


Author: ASF GitHub Bot
Created on: 04/Jun/19 13:11
Start Date: 04/Jun/19 13:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #902: HDDS-1628. 
Fix the execution and return code of smoketest executor shell script
URL: https://github.com/apache/hadoop/pull/902#discussion_r290287400
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/acceptance.sh
 ##
 @@ -13,6 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export HADOOP_VERSION=3
-hadoop-ozone/dist/target/ozone-*-SNAPSHOT/smoketest/test.sh
+$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh
 
 Review comment:
   shellcheck:1: note: Double quote to prevent globbing and word splitting. 
[SC2086]
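   
   For reference, a rough sketch of the quoted form the warning points at, assuming the version glob should still expand (so only the variable expansion is quoted):
   
{code:bash}
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export HADOOP_VERSION=3
# Quote the variable, but leave the ozone-*-SNAPSHOT glob unquoted so it can
# still expand to the built distribution directory.
"$DIR"/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh
{code}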
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253727)
Time Spent: 20m  (was: 10m)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855670#comment-16855670
 ] 

Ayush Saxena commented on HDFS-14542:
-

If quota by storage type is not set, the method just returns without looping. If 
we remove this check, then even when quota by storage type is not set it will 
loop over all storage types and then return: instead of one check it will 
perform as many checks as there are storage types. Removing the check saves one 
call only when a quota is set, which is negligible, and multiplies the checks n 
times when no quota is set.
I think the present code is correct.

> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop below performs the same check for each storage type.
> {code:java}
> private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
>  throws QuotaByStorageTypeExceededException {
>   if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
> return;
>   }
>   for (StorageType t: StorageType.getTypesSupportingQuota()) {
> if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
>   continue;
> }
> if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
> typeDelta.get(t))) {
>   throw new QuotaByStorageTypeExceededException(
>   quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1637) Fix random test failure TestSCMContainerPlacementRackAware

2019-06-04 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855654#comment-16855654
 ] 

Sammi Chen commented on HDDS-1637:
--

[~xyao], thanks for reporting this. I uploaded PR 
https://github.com/apache/hadoop/pull/904 for it.

> Fix random test failure TestSCMContainerPlacementRackAware
> --
>
> Key: HDDS-1637
> URL: https://issues.apache.org/jira/browse/HDDS-1637
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This has been seen randomly in latest trunk CI, e.g., 
> [https://ci.anzix.net/job/ozone/16980/testReport/org.apache.hadoop.hdds.scm.container.placement.algorithms/TestSCMContainerPlacementRackAware/testFallback/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1637) Fix random test failure TestSCMContainerPlacementRackAware

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1637?focusedWorklogId=253709=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253709
 ]

ASF GitHub Bot logged work on HDDS-1637:


Author: ASF GitHub Bot
Created on: 04/Jun/19 12:41
Start Date: 04/Jun/19 12:41
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #904: HDDS-1637. 
Fix random test failure TestSCMContainerPlacementRackAware.
URL: https://github.com/apache/hadoop/pull/904
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253709)
Time Spent: 10m
Remaining Estimate: 0h

> Fix random test failure TestSCMContainerPlacementRackAware
> --
>
> Key: HDDS-1637
> URL: https://issues.apache.org/jira/browse/HDDS-1637
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This has been seen randomly in latest trunk CI, e.g., 
> [https://ci.anzix.net/job/ozone/16980/testReport/org.apache.hadoop.hdds.scm.container.placement.algorithms/TestSCMContainerPlacementRackAware/testFallback/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1637) Fix random test failure TestSCMContainerPlacementRackAware

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1637:
-
Labels: pull-request-available  (was: )

> Fix random test failure TestSCMContainerPlacementRackAware
> --
>
> Key: HDDS-1637
> URL: https://issues.apache.org/jira/browse/HDDS-1637
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> This has been seen randomly in latest trunk CI, e.g., 
> [https://ci.anzix.net/job/ozone/16980/testReport/org.apache.hadoop.hdds.scm.container.placement.algorithms/TestSCMContainerPlacementRackAware/testFallback/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1490) Support configurable container placement policy through "ozone.scm.container.placement.classname"

2019-06-04 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855645#comment-16855645
 ] 

Sammi Chen commented on HDDS-1490:
--

04.patch is available as PR https://github.com/apache/hadoop/pull/903

> Support configurable container placement policy through 
> "ozone.scm.container.placement.classname"
> -
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1490.01.patch, HDDS-1490.02.patch, 
> HDDS-1490.03.patch, HDDS-1490.04.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Support system property "ozone.scm.container.placement.classname" in 
> ozone-site.xml. User can specify the implementation class name as the value 
> of the property.  Here is an example, 
>  
> ozone.scm.container.placement.classname
> 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  
> If this property is not set, then default 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  will be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1490) Support configurable container placement policy through "ozone.scm.container.placement.classname"

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?focusedWorklogId=253705=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253705
 ]

ASF GitHub Bot logged work on HDDS-1490:


Author: ASF GitHub Bot
Created on: 04/Jun/19 12:31
Start Date: 04/Jun/19 12:31
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #903: HDDS-1490. 
Support configurable container placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903
 
 
   …zone.scm.container.placement.classname'
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253705)
Time Spent: 50m  (was: 40m)

> Support configurable container placement policy through 
> "ozone.scm.container.placement.classname"
> -
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1490.01.patch, HDDS-1490.02.patch, 
> HDDS-1490.03.patch, HDDS-1490.04.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Support system property "ozone.scm.container.placement.classname" in 
> ozone-site.xml. User can specify the implementation class name as the value 
> of the property.  Here is an example, 
>  
> ozone.scm.container.placement.classname
> 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  
> If this property is not set, then default 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  will be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855641#comment-16855641
 ] 

Jinglun commented on HDFS-14542:


Uploaded a patch to trigger Jenkins.

> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop below performs the same check for each storage type.
> {code:java}
> private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
>  throws QuotaByStorageTypeExceededException {
>   if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
> return;
>   }
>   for (StorageType t: StorageType.getTypesSupportingQuota()) {
> if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
>   continue;
> }
> if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
> typeDelta.get(t))) {
>   throw new QuotaByStorageTypeExceededException(
>   quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun reassigned HDFS-14542:
--

Assignee: Jinglun

> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop below performs the same check for each storage type.
> {code:java}
> private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
>  throws QuotaByStorageTypeExceededException {
>   if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
> return;
>   }
>   for (StorageType t: StorageType.getTypesSupportingQuota()) {
> if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
>   continue;
> }
> if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
> typeDelta.get(t))) {
>   throw new QuotaByStorageTypeExceededException(
>   quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14542:
---
Description: 
DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of verifying 
quota. It's redundant to call isQuotaByStorageTypeSet() because the for-each 
loop below performs the same check for each storage type.
{code:java}
private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
 throws QuotaByStorageTypeExceededException {
  if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
return;
  }
  for (StorageType t: StorageType.getTypesSupportingQuota()) {
if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
  continue;
}
if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
typeDelta.get(t))) {
  throw new QuotaByStorageTypeExceededException(
  quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
}
  }
}
{code}

  was:
DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of verifying 
quota. It's redundant to call isQuotaByStorageTypeSet() because the for-each 
loop below performs the same check for each storage type.
{code:java}
if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
  return;
}
for (StorageType t: StorageType.getTypesSupportingQuota()) {
  if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
continue;
  }
  if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
  typeDelta.get(t))) {
throw new QuotaByStorageTypeExceededException(
quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
  }
}
{code}


> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop below performs the same check for each storage type.
> {code:java}
> private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
>  throws QuotaByStorageTypeExceededException {
>   if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
> return;
>   }
>   for (StorageType t: StorageType.getTypesSupportingQuota()) {
> if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
>   continue;
> }
> if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
> typeDelta.get(t))) {
>   throw new QuotaByStorageTypeExceededException(
>   quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1490) Support configurable container placement policy through "ozone.scm.container.placement.classname"

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?focusedWorklogId=253702=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253702
 ]

ASF GitHub Bot logged work on HDDS-1490:


Author: ASF GitHub Bot
Created on: 04/Jun/19 12:24
Start Date: 04/Jun/19 12:24
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #898: HDDS-1490. Support 
configurable container placement policy 
URL: https://github.com/apache/hadoop/pull/898#issuecomment-498648316
 
 
   Failed UTs are not relevant. All failed UTs pass locally after the patch was 
rebased against today's trunk. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253702)
Time Spent: 40m  (was: 0.5h)

> Support configurable container placement policy through 
> "ozone.scm.container.placement.classname"
> -
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1490.01.patch, HDDS-1490.02.patch, 
> HDDS-1490.03.patch, HDDS-1490.04.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Support system property "ozone.scm.container.placement.classname" in 
> ozone-site.xml. User can specify the implementation class name as the value 
> of the property.  Here is an example, 
>  
> ozone.scm.container.placement.classname
> 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  
> If this property is not set, then default 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  will be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14542:
---
Attachment: HDFS-14542.patch

> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop below performs the same check for each storage type.
> {code:java}
> if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
>   return;
> }
> for (StorageType t: StorageType.getTypesSupportingQuota()) {
>   if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
> continue;
>   }
>   if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
>   typeDelta.get(t))) {
> throw new QuotaByStorageTypeExceededException(
> quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1490) Support configurable container placement policy through "ozone.scm.container.placement.classname"

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?focusedWorklogId=253701=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253701
 ]

ASF GitHub Bot logged work on HDDS-1490:


Author: ASF GitHub Bot
Created on: 04/Jun/19 12:22
Start Date: 04/Jun/19 12:22
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #898: HDDS-1490. Support 
configurable container placement policy 
URL: https://github.com/apache/hadoop/pull/898#issuecomment-498648316
 
 
   Failed UTs are not relevant. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253701)
Time Spent: 0.5h  (was: 20m)

> Support configurable container placement policy through 
> "ozone.scm.container.placement.classname"
> -
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1490.01.patch, HDDS-1490.02.patch, 
> HDDS-1490.03.patch, HDDS-1490.04.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Support system property "ozone.scm.container.placement.classname" in 
> ozone-site.xml. User can specify the implementation class name as the value 
> of the property.  Here is an example, 
>  
> ozone.scm.container.placement.classname
> 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  
> If this property is not set, then default 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware
>  will be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14542) Remove redundant code when verify quota

2019-06-04 Thread Jinglun (JIRA)
Jinglun created HDFS-14542:
--

 Summary: Remove redundant code when verify quota
 Key: HDFS-14542
 URL: https://issues.apache.org/jira/browse/HDFS-14542
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.1
Reporter: Jinglun


DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of verifying 
quota. It's redundant to call isQuotaByStorageTypeSet() because the for-each 
loop below performs the same check for each storage type.
{code:java}
if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
  return;
}
for (StorageType t: StorageType.getTypesSupportingQuota()) {
  if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
continue;
  }
  if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
  typeDelta.get(t))) {
throw new QuotaByStorageTypeExceededException(
quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1621) flushStateMachineData should ensure the write chunks are flushed to disk

2019-06-04 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855621#comment-16855621
 ] 

Supratim Deka commented on HDDS-1621:
-

Keeping the FileChannel around after writeData and passing it back to the State 
Machine is not really required.
dfs.container.chunk.write.sync determines whether the chunk is persisted as 
soon as the data is written.
If this parameter is set to false, it implies the possibility of data loss. 
This tradeoff is provided to enable higher throughput.

In keeping with this understanding, we will limit the change to
1. invoking a force+close on the channel inside writeData if the sync option is 
set.
2. changing AsynchronousFileChannel to FileChannel (as explained in the previous 
comment).
 

> flushStateMachineData should ensure the write chunks are flushed to disk
> 
>
> Key: HDDS-1621
> URL: https://issues.apache.org/jira/browse/HDDS-1621
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Supratim Deka
>Priority: Major
>
> Currently, chunk writes are not synced to disk by default. When 
> flushStateMachineData gets invoked from Ratis, it should also ensure that all 
> pending chunk writes are flushed to disk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1628:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reopened HDDS-1628:


> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855600#comment-16855600
 ] 

Elek, Marton commented on HDDS-1628:


Sure, somehow it was missing. I created a new one.

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1628:
---
Status: Patch Available  (was: Reopened)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?focusedWorklogId=253690=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253690
 ]

ASF GitHub Bot logged work on HDDS-1628:


Author: ASF GitHub Bot
Created on: 04/Jun/19 11:36
Start Date: 04/Jun/19 11:36
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #902: HDDS-1628. Fix the 
execution and return code of smoketest executor shell script
URL: https://github.com/apache/hadoop/pull/902
 
 
   Problem: Some of the smoketest executions were reported as green even if 
they contained failed tests.
   
   Root cause: the legacy test executor 
(hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't handle 
the return code well (the failure of the smoketests should be signalled by the 
bash return code)
   
   This patch:
* Fixes the error code handling in smoketest/test.sh
* Fixes the test execution in compose/test-all.sh (should work from any 
other directory)
* Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
test-all.sh executor instead of the old one.
   
   See: https://issues.apache.org/jira/browse/HDDS-1628
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253690)
Time Spent: 10m
Remaining Estimate: 0h

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1628:
---
Status: Patch Available  (was: Open)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1628:
-
Labels: pull-request-available  (was: )

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=253689=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253689
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 04/Jun/19 11:35
Start Date: 04/Jun/19 11:35
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #846: HDDS-1555. Disable 
install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-498634373
 
 
   Thanks @swagle for updating the patch. The patch looks good to me. I am +1 on 
this. Please take care of the checkstyle issues reported.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253689)
Time Spent: 1h 40m  (was: 1.5h)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In case a follower lags behind the leader by a large number of log entries, the 
> leader tries to send a snapshot to the follower. For ContainerStateMachine, the 
> information in the snapshot is not the entire state machine data. 
> InstallSnapshot for ContainerStateMachine should therefore be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1636?focusedWorklogId=253679=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253679
 ]

ASF GitHub Bot logged work on HDDS-1636:


Author: ASF GitHub Bot
Created on: 04/Jun/19 11:05
Start Date: 04/Jun/19 11:05
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #895: HDDS-1636. Tracing 
id is not propagated via async datanode grpc call
URL: https://github.com/apache/hadoop/pull/895#discussion_r290242450
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/TracingUtil.java
 ##
 @@ -99,7 +103,16 @@ public static Scope importAndCreateScope(String name, 
String encodedParent) {
 if (encodedParent != null && encodedParent.length() > 0) {
   StringBuilder builder = new StringBuilder();
   builder.append(encodedParent);
-  parentSpan = tracer.extract(StringCodec.FORMAT, builder);
+  try {
+parentSpan = tracer.extract(StringCodec.FORMAT, builder);
+  } catch (Exception ex) {
+if (LOG.isDebugEnabled()) {
+  LOG.debug("Can't extract tracing from the message.", ex);
+} else {
+  LOG.warn(
 
 Review comment:
   My log was full of stack traces whenever there was a problem. I removed the stack 
trace from the WARN and only e.getMessage() is printed out. I think that should be 
enough to notice the problem.
   
   But I have no strong opinion. What do you prefer? WARN + the full exception all 
the time?
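
   For reference, here is a minimal, self-contained sketch of the two logging alternatives being discussed (assuming an SLF4J `Logger` and an OpenTracing `Tracer`; the class and method names are illustrative, not the actual TracingUtil code):
   
   ```java
   import io.opentracing.SpanContext;
   import io.opentracing.Tracer;
   import io.opentracing.propagation.Format;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public final class TraceExtractSketch {
     private static final Logger LOG =
         LoggerFactory.getLogger(TraceExtractSketch.class);
   
     private TraceExtractSketch() { }
   
     // Option A (as in the patch): full stack trace only at DEBUG level,
     // otherwise a short single-line WARN with just the message.
     static <C> SpanContext extractQuietly(Tracer tracer, Format<C> format,
         C carrier) {
       try {
         return tracer.extract(format, carrier);
       } catch (Exception ex) {
         if (LOG.isDebugEnabled()) {
           LOG.debug("Can't extract tracing from the message.", ex);
         } else {
           LOG.warn("Can't extract tracing from the message: {}",
               ex.getMessage());
         }
         return null;
       }
     }
   
     // Option B (the alternative raised above): always WARN with the full
     // exception, at the cost of noisier logs when many messages fail.
     static <C> SpanContext extractVerbose(Tracer tracer, Format<C> format,
         C carrier) {
       try {
         return tracer.extract(format, carrier);
       } catch (Exception ex) {
         LOG.warn("Can't extract tracing from the message.", ex);
         return null;
       }
     }
   }
   ```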
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253679)
Time Spent: 40m  (was: 0.5h)

> Tracing id is not propagated via async datanode grpc call
> -
>
> Key: HDDS-1636
> URL: https://issues.apache.org/jira/browse/HDDS-1636
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Recently a new exception became visible in the datanode logs, using the standard 
> freon (STANDALONE)
> {code}
> datanode_2  | 2019-06-03 12:18:21 WARN  
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> datanode_2  | 
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> datanode_2  | at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> datanode_2  | at 
> io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:102)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> 

[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-04 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1687#comment-1687
 ] 

Elek, Marton commented on HDDS-1554:


bq. Robot test framework based test cases doesn't converge toward using Apache 
Infra's Jenkins server

Based on my experience it can be executed on Jenkins without any problem. I 
think it works very well without the Robot test plugin. It's enough to run the 
robot tests in a dind image, isn't it?

bq.  For example, how to introspect that authentication is verified on the 
server side instead of Java client. Robots framework can not tap into JVM to 
give us the answer that we seek, but a junit test can

Can you please explain how the junit test will do that if the backend runs in a 
separate container?

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855543#comment-16855543
 ] 

Takanobu Asanuma commented on HDFS-14526:
-

[~ayushtkn] Thanks for your review.

[~elgoiri] [~crh] Could you take a look?

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch, federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1552) RpcClient works with both Hadoop-3 and Hadoop-2

2019-06-04 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-1552:
-
Summary: RpcClient works with both Hadoop-3 and Hadoop-2  (was: Provide 
RpcClient for Hadoop 2)

> RpcClient works with both Hadoop-3 and Hadoop-2
> ---
>
> Key: HDDS-1552
> URL: https://issues.apache.org/jira/browse/HDDS-1552
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>
> Provide an RpcClient for Hadoop 2. The current RpcClient depends on the class 
> KeyProviderTokenIssuer, which is not available in Hadoop 2. 
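
One possible direction, sketched here purely as an illustration (this is not the actual patch; the fully-qualified class name below is assumed from the class mentioned above, and the helper itself is hypothetical): detect at runtime whether the Hadoop-3-only class is on the classpath and enable the token-issuer code path only when it is present.

{code}
// Hedged sketch: runtime detection of a Hadoop-3-only class so the same
// client code can run on both Hadoop 2 and Hadoop 3 classpaths.
final class HadoopCompat {
  // Assumed fully-qualified name of the Hadoop 3 interface named in the issue.
  private static final boolean HAS_KEY_PROVIDER_TOKEN_ISSUER =
      hasClass("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer");

  private HadoopCompat() { }

  static boolean hasKeyProviderTokenIssuer() {
    return HAS_KEY_PROVIDER_TOKEN_ISSUER;
  }

  private static boolean hasClass(String name) {
    try {
      // Do not initialize the class; we only care whether it is present.
      Class.forName(name, false, HadoopCompat.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException | LinkageError e) {
      return false;
    }
  }
}
{code}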



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?focusedWorklogId=253656=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253656
 ]

ASF GitHub Bot logged work on HDDS-1510:


Author: ASF GitHub Bot
Created on: 04/Jun/19 09:59
Start Date: 04/Jun/19 09:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #900: HDDS-1510. 
Classpath files are deployed to the maven repository as pom/jar files
URL: https://github.com/apache/hadoop/pull/900#issuecomment-498606198
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 593 | trunk passed |
   | +1 | compile | 320 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1798 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 484 | the patch passed |
   | +1 | compile | 288 | the patch passed |
   | +1 | javac | 288 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 750 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 271 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1406 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 5674 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-900/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/900 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 3ddbc142a27d 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7991159 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-900/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-900/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-900/1/testReport/ |
   | Max. process+thread count | 4840 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-ozone hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-900/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253656)
Time Spent: 0.5h  (was: 20m)

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> 1. 

[jira] [Updated] (HDFS-14541) ShortCircuitReplica#unref cost about 6% cpu and 6% heap allocation because of the frequent thrown NoSuchElementException in our HBase benchmark

2019-06-04 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HDFS-14541:

Description: 
Our XiaoMi HBase team is evaluating the performance improvement of 
HBASE-21879, and we have collected a few CPU and heap flame graphs by using 
async-profiler, and found that there are some performance issues in DFSClient.


From the two attached flame graphs, we can conclude that the try/catch block in 
ShortCircuitCache#trimEvictionMaps has a serious perf problem; we should 
remove the try/catch from DFSClient. 

{code}
  /**
   * Trim the eviction lists.
   */
  private void trimEvictionMaps() {
long now = Time.monotonicNow();
demoteOldEvictableMmaped(now);

while (true) {
  long evictableSize = evictable.size();
  long evictableMmappedSize = evictableMmapped.size();
  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
return;
  }
  ShortCircuitReplica replica;
  try {
if (evictableSize == 0) {
  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
  .firstKey());
} else {
  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
}
  } catch (NoSuchElementException e) {
break;
  }
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": trimEvictionMaps is purging " + replica +
StringUtils.getStackTrace(Thread.currentThread()));
  }
  purge(replica);
}
  }
{code}

Our Xiaomi HDFS Team member [~leosun08] will prepare patch for this issue.  

  was:
Our XiaoMi HBase team are evaluating the performence improvement of 
HBASE-21879,  and we have few CPU flame graph  & heap flame graph by using 
async-profiler,  and found that there're some performence issues in DFSClient 
now . 


See the attached two flame graph, we can conclude that the try catch block in 
ShortCircuitCache#trimEvictionMaps  has some serious perf problem now, we 
should remove this from DFSClient. 

{code}
  /**
   * Trim the eviction lists.
   */
  private void trimEvictionMaps() {
long now = Time.monotonicNow();
demoteOldEvictableMmaped(now);

while (true) {
  long evictableSize = evictable.size();
  long evictableMmappedSize = evictableMmapped.size();
  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
return;
  }
  ShortCircuitReplica replica;
  try {
if (evictableSize == 0) {
  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
  .firstKey());
} else {
  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
}
  } catch (NoSuchElementException e) {
break;
  }
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": trimEvictionMaps is purging " + replica +
StringUtils.getStackTrace(Thread.currentThread()));
  }
  purge(replica);
}
  }
{code}

Our Xiaomi HDFS Team member [~leosun08] will prepare patch for this issue.  


> ShortCircuitReplica#unref cost about 6% cpu and 6% heap allocation because of 
> the frequent thrown NoSuchElementException  in our HBase benchmark
> 
>
> Key: HDFS-14541
> URL: https://issues.apache.org/jira/browse/HDFS-14541
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zheng Hu
>Priority: Major
> Attachments: async-prof-pid-94152-alloc-2.svg, 
> async-prof-pid-94152-cpu-1.svg
>
>
> Our XiaoMi HBase team is evaluating the performance improvement of 
> HBASE-21879, and we have collected a few CPU and heap flame graphs by using 
> async-profiler, and found that there are some performance issues in 
> DFSClient. 
> From the two attached flame graphs, we can conclude that the try/catch block 
> in ShortCircuitCache#trimEvictionMaps has a serious perf problem; we 
> should remove the try/catch from DFSClient. 
> {code}
>   /**
>* Trim the eviction lists.
>*/
>   private void trimEvictionMaps() {
> long now = Time.monotonicNow();
> demoteOldEvictableMmaped(now);
> while (true) {
>   long evictableSize = evictable.size();
>   long evictableMmappedSize = evictableMmapped.size();
>   if (evictableSize + evictableMmappedSize <= maxTotalSize) {
> return;
>   }
>   ShortCircuitReplica replica;
>   try {
> if (evictableSize == 0) {
>   replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
>   .firstKey());
> } else {
>   replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
> }
>   } catch (NoSuchElementException e) {
> break;
>   }
>   if (LOG.isTraceEnabled()) {
> 

[jira] [Updated] (HDFS-14541) ShortCircuitReplica#unref cost about 6% cpu and 6% heap allocation because of the frequent thrown NoSuchElementException in our HBase benchmark

2019-06-04 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HDFS-14541:

Description: 
Our XiaoMi HBase team is evaluating the performance improvement of 
HBASE-21879, and we have collected a few CPU and heap flame graphs by using 
async-profiler, and found that there are some performance issues in DFSClient 
now.


From the two attached flame graphs, we can conclude that the try/catch block in 
ShortCircuitCache#trimEvictionMaps has a serious perf problem now; we 
should remove it from DFSClient. 

{code}
  /**
   * Trim the eviction lists.
   */
  private void trimEvictionMaps() {
long now = Time.monotonicNow();
demoteOldEvictableMmaped(now);

while (true) {
  long evictableSize = evictable.size();
  long evictableMmappedSize = evictableMmapped.size();
  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
return;
  }
  ShortCircuitReplica replica;
  try {
if (evictableSize == 0) {
  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
  .firstKey());
} else {
  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
}
  } catch (NoSuchElementException e) {
break;
  }
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": trimEvictionMaps is purging " + replica +
StringUtils.getStackTrace(Thread.currentThread()));
  }
  purge(replica);
}
  }
{code}

Our Xiaomi HDFS Team member [~leosun08] will prepare patch for this issue.  

  was:
Our XiaoMi HBase team are evaluating the performence improvement of 
HBASE-21879,  and we have few CPU flame graph  & heap flame graph by using 
async-profiler,  and found that there're some performence issues in DFSClient 
now . 


See the attached two flame graph, we can conclude that the try catch block in 
ShortCircuitCache#trimEvictionMaps  has some serious perf problem now, we 
should remove this from DFSClient. 

{code}
  /**
   * Trim the eviction lists.
   */
  private void trimEvictionMaps() {
long now = Time.monotonicNow();
demoteOldEvictableMmaped(now);

while (true) {
  long evictableSize = evictable.size();
  long evictableMmappedSize = evictableMmapped.size();
  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
return;
  }
  ShortCircuitReplica replica;
  try {
if (evictableSize == 0) {
  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
  .firstKey());
} else {
  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
}
  } catch (NoSuchElementException e) {
break;
  }
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": trimEvictionMaps is purging " + replica +
StringUtils.getStackTrace(Thread.currentThread()));
  }
  purge(replica);
}
  }
{code}


> ShortCircuitReplica#unref cost about 6% cpu and 6% heap allocation because of 
> the frequent thrown NoSuchElementException  in our HBase benchmark
> 
>
> Key: HDFS-14541
> URL: https://issues.apache.org/jira/browse/HDFS-14541
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zheng Hu
>Priority: Major
> Attachments: async-prof-pid-94152-alloc-2.svg, 
> async-prof-pid-94152-cpu-1.svg
>
>
> Our XiaoMi HBase team is evaluating the performance improvement of 
> HBASE-21879, and we have collected a few CPU and heap flame graphs by using 
> async-profiler, and found that there are some performance issues in DFSClient 
> now. 
> From the two attached flame graphs, we can conclude that the try/catch block in 
> ShortCircuitCache#trimEvictionMaps has a serious perf problem now; we 
> should remove it from DFSClient. 
> {code}
>   /**
>* Trim the eviction lists.
>*/
>   private void trimEvictionMaps() {
> long now = Time.monotonicNow();
> demoteOldEvictableMmaped(now);
> while (true) {
>   long evictableSize = evictable.size();
>   long evictableMmappedSize = evictableMmapped.size();
>   if (evictableSize + evictableMmappedSize <= maxTotalSize) {
> return;
>   }
>   ShortCircuitReplica replica;
>   try {
> if (evictableSize == 0) {
>   replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
>   .firstKey());
> } else {
>   replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
> }
>   } catch (NoSuchElementException e) {
> break;
>   }
>   if (LOG.isTraceEnabled()) {
> LOG.trace(this + ": trimEvictionMaps is purging " + replica +
> 

[jira] [Created] (HDFS-14541) ShortCircuitReplica#unref cost about 6% cpu and 6% heap allocation because of the frequent thrown NoSuchElementException in our HBase benchmark

2019-06-04 Thread Zheng Hu (JIRA)
Zheng Hu created HDFS-14541:
---

 Summary: ShortCircuitReplica#unref cost about 6% cpu and 6% heap 
allocation because of the frequent thrown NoSuchElementException  in our HBase 
benchmark
 Key: HDFS-14541
 URL: https://issues.apache.org/jira/browse/HDFS-14541
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zheng Hu
 Attachments: async-prof-pid-94152-alloc-2.svg, 
async-prof-pid-94152-cpu-1.svg

Our XiaoMi HBase team is evaluating the performance improvement of 
HBASE-21879, and we have collected a few CPU and heap flame graphs by using 
async-profiler, and found that there are some performance issues in DFSClient 
now.


From the two attached flame graphs, we can conclude that the try/catch block in 
ShortCircuitCache#trimEvictionMaps has a serious perf problem now; we 
should remove it from DFSClient. 

{code}
  /**
   * Trim the eviction lists.
   */
  private void trimEvictionMaps() {
long now = Time.monotonicNow();
demoteOldEvictableMmaped(now);

while (true) {
  long evictableSize = evictable.size();
  long evictableMmappedSize = evictableMmapped.size();
  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
return;
  }
  ShortCircuitReplica replica;
  try {
if (evictableSize == 0) {
  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
  .firstKey());
} else {
  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
}
  } catch (NoSuchElementException e) {
break;
  }
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": trimEvictionMaps is purging " + replica +
StringUtils.getStackTrace(Thread.currentThread()));
  }
  purge(replica);
}
  }
{code}
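
A minimal sketch of how the loop can avoid relying on the thrown NoSuchElementException by checking emptiness before reading the first key (this is not the actual patch; plain java.util maps and a stand-in Replica type are used instead of the real cache internals):

{code}
import java.util.TreeMap;

class EvictionSketch {
  // Illustrative stand-ins for the real eviction maps and replica type.
  static final class Replica { }

  private final TreeMap<Long, Replica> evictable = new TreeMap<>();
  private final TreeMap<Long, Replica> evictableMmapped = new TreeMap<>();
  private final long maxTotalSize = 256;

  void trimEvictionMaps() {
    while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
      // Pick the map to purge from without using an exception for control
      // flow: take from the mmapped map only when the plain map is empty.
      TreeMap<Long, Replica> source =
          evictable.isEmpty() ? evictableMmapped : evictable;
      if (source.isEmpty()) {
        return; // nothing left to purge
      }
      purge(source.firstEntry().getValue());
    }
  }

  private void purge(Replica replica) {
    // Placeholder: the real implementation closes the replica and removes it
    // from whichever map currently holds it.
    evictable.values().remove(replica);
    evictableMmapped.values().remove(replica);
  }
}
{code}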



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855507#comment-16855507
 ] 

Ayush Saxena commented on HDFS-14526:
-

It should be correct then, thanks for the information :)

The patch looks fair enough to me.

Let's wait and see if others have any comments.

 

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch, federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1640) Reduce the size of the recon jar file

2019-06-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1640:
--

 Summary: Reduce the size of the recon jar file
 Key: HDDS-1640
 URL: https://issues.apache.org/jira/browse/HDDS-1640
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Recon
Reporter: Elek, Marton


hadoop-ozone-recon-0.5.0-SNAPSHOT.jar is 73 MB, mainly because the node_modules 
are included (full typescript source, eslint, babel, etc.):

{code}
unzip -l hadoop-ozone-recon-0.5.0-SNAPSHOT.jar | grep node_modules | wc
{code}

Correct me if I am wrong, but I think node_modules is not required in the 
distribution, as the dependencies are already included in the compiled 
JavaScript files.

I propose to remove the node_modules from the jar file.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1639) Restructure documentation pages for better understanding

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1639:
---
Status: Patch Available  (was: Open)

> Restructure documentation pages for better understanding
> 
>
> Key: HDDS-1639
> URL: https://issues.apache.org/jira/browse/HDDS-1639
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Documentation page should be updated according to the recent changes:
> In the uploaded PR I modified the following:
>  #  Pages are restructured to use a structure similar to what is introduced on the 
> wiki by [~anu]. (Getting started guides are separated for different 
> environments)
>  # The width of the menu is increased (to make it more readable)
>  # The logo is moved to the main page from the menu (to get more space for 
> the menu items)
>  # 'Requirements' section is added to each 'Getting started' page
>  # Test tools / docker image / kubernetes pages are imported from the wiki. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1639) Restructure documentation pages for better understanding

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1639?focusedWorklogId=253622=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253622
 ]

ASF GitHub Bot logged work on HDDS-1639:


Author: ASF GitHub Bot
Created on: 04/Jun/19 08:46
Start Date: 04/Jun/19 08:46
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #901: HDDS-1639. 
Restructure documentation pages for better understanding
URL: https://github.com/apache/hadoop/pull/901
 
 
   Documentation page should be updated according to the recent changes:
   
   In the uploaded PR I modified the following:
   
#  Pages are restructured to use a structure similar to what is introduced on 
the wiki by [~anu]. (Getting started guides are separated for different 
environments)
# The width of the menu is increased (to make it more readable)
# The logo is moved to the main page from the menu (to get more space for 
the menu items)
# 'Requirements' section is added to each 'Getting started' page
# Test tools / docker image / kubernetes pages are imported from the wiki. 
   
   See: https://issues.apache.org/jira/browse/HDDS-1639
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253622)
Time Spent: 10m
Remaining Estimate: 0h

> Restructure documentation pages for better understanding
> 
>
> Key: HDDS-1639
> URL: https://issues.apache.org/jira/browse/HDDS-1639
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Documentation page should be updated according to the recent changes:
> In the uploaded PR I modified the following:
>  #  Pages are restructured to use a structure similar to what is introduced on the 
> wiki by [~anu]. (Getting started guides are separated for different 
> environments)
>  # The width of the menu is increased (to make it more readable)
>  # The logo is moved to the main page from the menu (to get more space for 
> the menu items)
>  # 'Requirements' section is added to each 'Getting started' page
>  # Test tools / docker image / kubernetes pages are imported from the wiki. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1639) Restructure documentation pages for better understanding

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1639:
-
Labels: pull-request-available  (was: )

> Restructure documentation pages for better understanding
> 
>
> Key: HDDS-1639
> URL: https://issues.apache.org/jira/browse/HDDS-1639
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> Documentation page should be updated according to the recent changes:
> In the uploaded PR I modified the following:
>  #  Pages are restructured to use a structure similar to what is introduced on the 
> wiki by [~anu]. (Getting started guides are separated for different 
> environments)
>  # The width of the menu is increased (to make it more readable)
>  # The logo is moved to the main page from the menu (to get more space for 
> the menu items)
>  # 'Requirements' section is added to each 'Getting started' page
>  # Test tools / docker image / kubernetes pages are imported from the wiki. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1639) Restructure documentation pages for better understanding

2019-06-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1639:
--

 Summary: Restructure documentation pages for better understanding
 Key: HDDS-1639
 URL: https://issues.apache.org/jira/browse/HDDS-1639
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


Documentation page should be updated according to the recent changes:

In the uploaded PR I modified the following:

 #  Pages are restructured to use a structure similar to what is introduced on the 
wiki by [~anu]. (Getting started guides are separated for different 
environments)
 # The width of the menu is increased (to make it more readable)
 # The logo is moved to the main page from the menu (to get more space for 
the menu items)
 # 'Requirements' section is added to each 'Getting started' page
 # Test tools / docker image / kubernetes pages are imported from the wiki. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1510:
---
Status: Patch Available  (was: Open)

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is blocker as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?focusedWorklogId=253618=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253618
 ]

ASF GitHub Bot logged work on HDDS-1510:


Author: ASF GitHub Bot
Created on: 04/Jun/19 08:30
Start Date: 04/Jun/19 08:30
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #900: HDDS-1510. Classpath 
files are deployed to the maven repository as pom/jar files
URL: https://github.com/apache/hadoop/pull/900#issuecomment-498575508
 
 
   Every file can be an "artifact" in the maven world if it is marked as an 
artifact. We use custom classpath descriptors to generate the exact classpath, 
and they are installed to the local maven repo + deployed to the maven 
repository during release.
   
   This is a problem in the `build-classpath` goal of the 
`maven-dependency-plugin`, which is used to generate the classpath file. By 
setting the corresponding configuration flag to `true` we ask the plugin to mark the 
file as an uploadable artifact.
   
   Unfortunately the plugin doesn't set the _type_ of the artifact and the 
default is a jar. As a result the classpath file (text) is uploaded as a jar 
file, and the nexus validation fails because of these fake jars.
   
   See the output of a normal `mvn install`
   
   ```
   [INFO] --- maven-install-plugin:2.5.1:install (default-install) @ 
hadoop-hdds-common ---
   ...
   [INFO] Installing 
/home/elek/projects/hadoop-review/hadoop-hdds/common/target/classpath to 
/home/elek/.m2/repository/org/apache/hadoop/hadoop-hdds-common/0.5.0-SNAPSHOT/hadoop-hdds-common-0.5.0-SNAPSHOT-classpath.jar
   ```
   But the classpath file is a text file:
   
   ```
   head  /home/elek/projects/hadoop-review/hadoop-hdds/common/target/classpath 
to 
/home/elek/.m2/repository/org/apache/hadoop/hadoop-hdds-common/0.5.0-SNAPSHOT/hadoop-hdds-common-0.5.0-SNAPSHOT-classpath.jar
   
classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:$HDDS_LIB_JARS_DIR/bcpkix-jdk15on-1.60.jar
   ...
   ```

   One easy workaround is to mark the classpath file as an uploadable artifact 
with a separate plugin which has more flexibility (the build-helper-maven-plugin).
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253618)
Time Spent: 20m  (was: 10m)

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is blocker as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855453#comment-16855453
 ] 

Takanobu Asanuma commented on HDFS-14526:
-

I thought it was the genitive of a plural noun ending in -s.
 [https://www.grammar.cl/rules/genitive-case.htm]

I'm not a native English speaker and may make some mistakes or have written 
unnatural sentences. If anything comes up, please let me know. :)

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch, federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?focusedWorklogId=253614=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253614
 ]

ASF GitHub Bot logged work on HDDS-1510:


Author: ASF GitHub Bot
Created on: 04/Jun/19 08:24
Start Date: 04/Jun/19 08:24
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #900: HDDS-1510. 
Classpath files are deployed to the maven repository as pom/jar files
URL: https://github.com/apache/hadoop/pull/900
 
 
   1. Classpath files are plain text files which are generated for each ozone 
project. Classpath files are used to define the classpath of a module (om, 
scm, etc) based on the maven classpath.
   
   Example classpath file:
   
   {code}
   
classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
 
   {code}
   
   Classpath files are maven artifacts and copied to share/ozone/classpath in 
the distribution
   
   2. 0.4.0 was the first release when we deployed the artifacts to the apache 
nexus. [~ajayydv] reported the problem that the staging repository can't be 
closed: INFRA-18344
   
   It turned out that the classpath files are uploaded with jar extension to 
the repository. We deleted all the classpath files manually and the repository 
became closable.
   
   To avoid similar issues we need to fix this problem and make sure that the 
classpath files are not uploaded to the repository during a 'mvn deploy' or 
uploaded but with a good extension.
   
   ps: I don't know the exact solution yet, but I can imagine that bumping the 
version of maven deploy plugin can help. Seems to be a bug in the plugin.
   
   ps2: This is blocker as we need to fix it before the next release
   
   See: https://issues.apache.org/jira/browse/HDDS-1510
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253614)
Time Spent: 10m
Remaining Estimate: 0h

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is blocker as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1510:
-
Labels: pull-request-available  (was: )

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc.) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is a blocker, as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855436#comment-16855436
 ] 

Ayush Saxena commented on HDFS-14526:
-

{code:java}
sub-clusters'
{code}
Is this *'* misplaced, or does it have some meaning?

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch, federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855424#comment-16855424
 ] 

Elek, Marton commented on HDDS-1510:


I support any of the improvements if the current features (which are missing 
from the hadoop scripts) can be supported, especially the isolated maven-based 
classpath for each daemon. But please open a separate issue. This issue is 
about a small technical problem of the current behaviour.

Thanks.

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc.) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is a blocker, as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1510:
---
Target Version/s: 0.4.1  (was: 0.5.0)

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc.) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is a blocker, as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-06-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1510:
--

Assignee: Elek, Marton

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> 1. Classpath files are plain text files which are generated for each ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc.) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and copied to share/ozone/classpath in 
> the distribution
> 2. 0.4.0 was the first release when we deployed the artifacts to the apache 
> nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with jar extension to the 
> repository. We deleted all the classpath files manually and the repository 
> became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are not uploaded to the repository during a 'mvn deploy' or 
> uploaded but with a good extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of maven deploy plugin can help. Seems to be a bug in the plugin.
> ps2: This is a blocker, as we need to fix it before the next release



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1631) Fix auditparser smoketests

2019-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855393#comment-16855393
 ] 

Hudson commented on HDDS-1631:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16661 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16661/])
HDDS-1631. Fix auditparser smoketests (#892) (aengineer: rev 
5d5081eff8e898b5f16481dd87891c11763a0ec8)
* (edit) hadoop-ozone/dist/src/main/compose/ozone/test.sh
* (edit) hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot


> Fix auditparser smoketests
> --
>
> Key: HDDS-1631
> URL: https://issues.apache.org/jira/browse/HDDS-1631
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In HDDS-1518 we modified the location of the var and config files inside the 
> container.
> There are three problems with the current auditparser smoketest:
>  1. The default audit log4j files are not part of the new config directory 
> (fixed with HDDS-1630)
>  2. The smoketest is executed in scm container instead of om
>  3. The log directory is hard coded
> Items 2 and 3 will be fixed in this patch.
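
A rough sketch of the direction for items 2 and 3, assuming a docker-compose 
environment with an om service and robot available inside the container; the 
service name, variable name and paths are illustrative assumptions only:

{code}
# Run the auditparser suite inside the om container instead of scm, and pass
# the log directory in as a variable rather than hard-coding it.
docker-compose exec om robot \
  --variable AUDIT_LOG_DIR:/var/log/hadoop \
  smoketest/auditparser/auditparser.robot
{code}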



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-04 Thread Amithsha (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855391#comment-16855391
 ] 

Amithsha commented on HDFS-14220:
-

[~elgoiri] node groups? I am not sure about this; can you provide more info?

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch, HDFS-14220-02.patch
>
>
> By default, replica placement per rack is taken care of by 
> BlockPlacementPolicyDefault.java,
> with 2 if conditions: 
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and the placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs a maximum of 1 replica per rack, then a 
> BlockPlacementPolicyDefault.java modification is needed; instead, we can add a 
> property to specify the placement policy and the replica value per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1628:
---
Status: Open  (was: Patch Available)

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.
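
A minimal sketch of the kind of return-code handling described, assuming the 
legacy wrapper simply delegates to the new executor (relative paths are 
illustrative, not the exact layout in the repository):

{code}
#!/usr/bin/env bash
# Legacy wrapper: delegate to the new executor and propagate its exit code so
# that failed smoketests are signalled to CI via the bash return code.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

"$SCRIPT_DIR/../compose/test-all.sh"
exit $?
{code}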



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1631) Fix auditparser smoketests

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1631:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for the refinement in the Robot script. I have committed this 
patch to the trunk.

> Fix auditparser smoketests
> --
>
> Key: HDDS-1631
> URL: https://issues.apache.org/jira/browse/HDDS-1631
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In HDDS-1518 we modified the location of the var and config files inside the 
> container.
> There are three problems with the current auditparser smoketest:
>  1. The default audit log4j files are not part of the new config directory 
> (fixed with HDDS-1630)
>  2. The smoketest is executed in scm container instead of om
>  3. The log directory is hard coded
> Items 2 and 3 will be fixed in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1631) Fix auditparser smoketests

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1631?focusedWorklogId=253568=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253568
 ]

ASF GitHub Bot logged work on HDDS-1631:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:30
Start Date: 04/Jun/19 06:30
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #892: HDDS-1631. 
Fix auditparser smoketests
URL: https://github.com/apache/hadoop/pull/892
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253568)
Time Spent: 40m  (was: 0.5h)

> Fix auditparser smoketests
> --
>
> Key: HDDS-1631
> URL: https://issues.apache.org/jira/browse/HDDS-1631
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In HDDS-1518 we modified the location of the var and config files inside the 
> container.
> There are three problems with the current auditparser smoketest:
>  1. The default audit log4j files are not part of the new config directory 
> (fixed with HDDS-1630)
>  2. The smoketest is executed in scm container instead of om
>  3. The log directory is hard coded
> Items 2 and 3 will be fixed in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14526:

Attachment: federationmetrics_v1.png

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch, federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855382#comment-16855382
 ] 

Takanobu Asanuma commented on HDFS-14526:
-

Uploaded the 1st patch.

* The patch doesn't include {{RouterMBean}} metrics since only statistical 
information is documented in {{Metrics.html}}.
* Add dfs context like the other metrics. (See: HDFS-12883)

Kindly help to review it.

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1635) Maintain docker entrypoint and envtoconf inside ozone project

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1635?focusedWorklogId=253566=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253566
 ]

ASF GitHub Bot logged work on HDDS-1635:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:25
Start Date: 04/Jun/19 06:25
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #894: HDDS-1635. 
Maintain docker entrypoint and envtoconf inside ozone project
URL: https://github.com/apache/hadoop/pull/894#issuecomment-498538514
 
 
   ShellChecks? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253566)
Time Spent: 1h 20m  (was: 1h 10m)

> Maintain docker entrypoint and envtoconf inside ozone project
> -
>
> Key: HDDS-1635
> URL: https://issues.apache.org/jira/browse/HDDS-1635
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> During an offline discussion with [~eyang] and [~arp], Eric suggested 
> maintaining the source of the docker-specific start images inside the main ozone 
> branch (trunk) instead of the branch of the docker image.
> With this approach the ozone-runner image can be a very lightweight image and 
> the entrypoint logic can be versioned together with ozone itself.
> Another use case is a container creation script. Recently we 
> [documented|https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Docker+images]
>  that hadoop-runner/ozone-runner/ozone images are not for production (for 
> example because they contain development tools).
> We can create a helper tool (similar to what Spark provides) to create Ozone 
> container images from any production-ready base image. But this tool requires 
> the existence of the scripts inside the distribution.
> (ps: I think sooner or later the functionality of envtoconf.py can be added 
> to the OzoneConfiguration java class and we can parse the configuration 
> values directly from environment variables.)
> In this patch I copied the required scripts to the ozone source tree and the 
> new ozone-runner image (HDDS-1634) is designed to use them from this specific 
> location.
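
For reference, a hedged illustration of the env-to-config pattern that 
envtoconf.py implements in these images: each FILENAME_key=value environment 
variable becomes an entry of the named configuration file at container start. 
The variable values, image name and command below are assumptions for the 
example only:

{code}
# Start an OM with configuration injected purely through environment variables.
docker run --rm \
  -e OZONE-SITE.XML_ozone.om.address=om \
  -e OZONE-SITE.XML_ozone.scm.names=scm \
  apache/hadoop-runner /opt/hadoop/bin/ozone om
{code}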



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-06-04 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14526:

Attachment: HDFS-14526-HDFS-13891.1.patch

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14526-HDFS-13891.1.patch
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855379#comment-16855379
 ] 

Anu Engineer commented on HDDS-1628:


Where is the pull request, [~elek]? Can you please check?

> Fix the execution and return code of smoketest executor shell script
> 
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh) didn't 
> handle the return code well (the failure of the smoketests should be 
> signalled by the bash return code)
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (should work from any 
> other directories)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1633) Update rat from 0.12 to 0.13 in hadoop-runner build script

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1633:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elek] Thanks for the version update, I appreciate it. I have committed this 
to the trunk.

> Update rat from 0.12 to 0.13 in hadoop-runner build script
> --
>
> Key: HDDS-1633
> URL: https://issues.apache.org/jira/browse/HDDS-1633
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have a new rat release; the old one is no longer available. The URL should be updated.
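
Assuming the build script fetches the RAT binary tarball, the change would look 
roughly like the sketch below; the Apache archive download path is my assumption, 
not taken from the actual hadoop-runner build script:

{code}
# Bump the downloaded Apache RAT release from 0.12 to 0.13.
RAT_VERSION=0.13
curl -LSs -o apache-rat-bin.tar.gz \
  "https://archive.apache.org/dist/creadur/apache-rat-${RAT_VERSION}/apache-rat-${RAT_VERSION}-bin.tar.gz"
{code}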



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1633) Update rat from 0.12 to 0.13 in hadoop-runner build script

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1633?focusedWorklogId=253564=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253564
 ]

ASF GitHub Bot logged work on HDDS-1633:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:22
Start Date: 04/Jun/19 06:22
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #891: HDDS-1633. 
Update rat from 0.12 to 0.13 in hadoop-runner build script
URL: https://github.com/apache/hadoop/pull/891
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253564)
Time Spent: 20m  (was: 10m)

> Update rat from 0.12 to 0.13 in hadoop-runner build script
> --
>
> Key: HDDS-1633
> URL: https://issues.apache.org/jira/browse/HDDS-1633
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have a new rat release; the old one is no longer available. The URL should be updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855376#comment-16855376
 ] 

Hudson commented on HDDS-1629:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16660 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16660/])
HDDS-1629. Tar file creation can be optional for non-dist builds. (aengineer: 
rev e140a450465c903217c73942f1d9200ea7f27570)
* (edit) hadoop-ozone/dist/pom.xml


> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test ozone as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT which is enough to run 
> docker-compose pseudo clusters, smoketests. 
> If it's required, the tar file creation can be requested by the dist profile.
>  
> On my machine (SSD-based) it gives a 5-10% time improvement, as the tar 
> size is ~500MB and it requires a lot of IO.
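
A usage sketch of the proposed behaviour; the goals and flags are standard maven 
ones and the -Pdist profile name comes from the description, but treat the exact 
invocations as illustrative:

{code}
# Developer build: produces the exploded ozone-0.5.0-SNAPSHOT directory under
# hadoop-ozone/dist/target, enough for docker-compose pseudo clusters and
# smoketests; no tar file is assembled.
mvn clean package -DskipTests

# Release-style build: the dist profile additionally assembles the tar file.
mvn clean package -DskipTests -Pdist
{code}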



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1629:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for this optimization. I have committed this patch to the trunk.

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test ozone as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT which is enough to run 
> docker-compose pseudo clusters, smoketests. 
> If it's required, the tar file creation can be requested by the dist profile.
>  
> On my machine (SSD-based) it gives a 5-10% time improvement, as the tar 
> size is ~500MB and it requires a lot of IO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?focusedWorklogId=253563=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253563
 ]

ASF GitHub Bot logged work on HDDS-1629:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:20
Start Date: 04/Jun/19 06:20
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #887: HDDS-1629. 
Tar file creation can be optional for non-dist builds
URL: https://github.com/apache/hadoop/pull/887
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253563)
Time Spent: 0.5h  (was: 20m)

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test ozone as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT which is enough to run 
> docker-compose pseudo clusters, smoketests. 
> If it's required, the tar file creation can be requested by the dist profile.
>  
> On my machine (SSD-based) it gives a 5-10% time improvement, as the tar 
> size is ~500MB and it requires a lot of IO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1607) Create smoketest for non-secure mapreduce example

2019-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855372#comment-16855372
 ] 

Hudson commented on HDDS-1607:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16659 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16659/])
HDDS-1607. Create smoketest for non-secure mapreduce example (#869) (aengineer: 
rev 1fc359fc101b3ff90c95d22a3f4cfa78b65ae47d)
* (add) hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
* (add) hadoop-ozone/dist/src/main/smoketest/createmrenv.robot
* (add) hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
* (add) hadoop-ozone/dist/src/main/compose/ozone-mr/.env
* (add) hadoop-ozone/dist/src/main/compose/ozone-mr/docker-config
* (add) hadoop-ozone/dist/src/main/smoketest/mapreduce.robot


> Create smoketest for non-secure mapreduce example
> -
>
> Key: HDDS-1607
> URL: https://issues.apache.org/jira/browse/HDDS-1607
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We had multiple problems earlier with the classpath separation and the 
> internal ozonefs classloader. Before fixing all the issues I propose to 
> create a smoketest to detect if the classpath separation is broken again.
> As a first step I created a smoketest/ozone-mr environment (based on the 
> work of [~xyao], which is secure) and a smoketest.
> Possible follow-up works:
>  * Adapt the test.sh for the ozonesecure-mr
>  * Include test runs with older hadoop versions 
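
For illustration, running the new environment locally might look like the 
following, assuming a dist build has already been produced; the exact directory 
layout under dist/target is an assumption:

{code}
# Execute the non-secure MapReduce smoketest environment added by this patch.
cd hadoop-ozone/dist/target/ozone-*/compose/ozone-mr
./test.sh
{code}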



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1607) Create smoketest for non-secure mapreduce example

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1607?focusedWorklogId=253562=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253562
 ]

ASF GitHub Bot logged work on HDDS-1607:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:18
Start Date: 04/Jun/19 06:18
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #869: HDDS-1607. 
Create smoketest for non-secure mapreduce example
URL: https://github.com/apache/hadoop/pull/869
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253562)
Time Spent: 2h  (was: 1h 50m)

> Create smoketest for non-secure mapreduce example
> -
>
> Key: HDDS-1607
> URL: https://issues.apache.org/jira/browse/HDDS-1607
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We had multiple problems earlier with the classpath separation and the 
> internal ozonefs classloader. Before fixing all the issues I propose to 
> create a smoketest to detect if the classpath separation is broken again.
> As a first step I created a smoketest/ozone-mr environment (based on the 
> work of [~xyao], which is secure) and a smoketest.
> Possible follow-up works:
>  * Adapt the test.sh for the ozonesecure-mr
>  * Include test runs with older hadoop versions 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1607) Create smoketest for non-secure mapreduce example

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1607:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for the review. [~elek] Thanks for the patch. I have committed 
this to the trunk.

> Create smoketest for non-secure mapreduce example
> -
>
> Key: HDDS-1607
> URL: https://issues.apache.org/jira/browse/HDDS-1607
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We had multiple problems earlier with the classpath separation and the 
> internal ozonefs classloader. Before fixing all the issues I propose to 
> create a smoketest to detect if the classpath separation is broken again.
> As a first step I created a smoketest/ozone-mr environment (based on the 
> work of [~xyao], which is secure) and a smoketest.
> Possible follow-up works:
>  * Adapt the test.sh for the ozonesecure-mr
>  * Include test runs with older hadoop versions 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1632) Make the hadoop home word readable and avoid sudo in hadoop-runner

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1632?focusedWorklogId=253560=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253560
 ]

ASF GitHub Bot logged work on HDDS-1632:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:15
Start Date: 04/Jun/19 06:15
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #893: HDDS-1632. 
Make the hadoop home word readable and avoid sudo in hadoo…
URL: https://github.com/apache/hadoop/pull/893
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253560)
Time Spent: 40m  (was: 0.5h)

> Make the hadoop home word readable and avoid sudo in hadoop-runner
> --
>
> Key: HDDS-1632
> URL: https://issues.apache.org/jira/browse/HDDS-1632
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [~eyang] reported in HDDS-1609 that the hadoop-runner image can be started 
> *without* mounting a real hadoop (usually, it's mounted) AND using a different 
> uid:
> {code}
> docker run -it  -u $(id -u):$(id -g) apache/hadoop-runner bash
> docker: Error response from daemon: OCI runtime create failed: 
> container_linux.go:345: starting container process caused "chdir to cwd 
> (\"/opt/hadoop\") set in config.json failed: permission denied": unknown.
> {code}
> There are two blocking problems here:
>  * the /opt/hadoop directory (which is the CWD inside the container) is 700 
> instead of 755
>  * The usage of sudo in started scripts (sudo is not possible if the real 
> user is not added to the /etc/passwd)
> Both of them are addressed by this patch.
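
A hedged sketch of the two changes described; the directory names and the 
guarded privileged step are assumptions for illustration, not the literal patch:

{code}
# In the image build: make the workdir world readable/traversable so that an
# arbitrary uid can chdir into /opt/hadoop.
chmod 755 /opt/hadoop

# In the start scripts: run privileged setup only when it is actually possible,
# instead of calling sudo unconditionally.
if [ "$(id -u)" = "0" ]; then
  chown hadoop:users /data
fi
{code}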



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1632) Make the hadoop home word readable and avoid sudo in hadoop-runner

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1632?focusedWorklogId=253561=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-253561
 ]

ASF GitHub Bot logged work on HDDS-1632:


Author: ASF GitHub Bot
Created on: 04/Jun/19 06:16
Start Date: 04/Jun/19 06:16
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #893: HDDS-1632. Make 
the hadoop home word readable and avoid sudo in hadoo…
URL: https://github.com/apache/hadoop/pull/893#issuecomment-498536395
 
 
   +1. LGTM
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 253561)
Time Spent: 50m  (was: 40m)

> Make the hadoop home word readable and avoid sudo in hadoop-runner
> --
>
> Key: HDDS-1632
> URL: https://issues.apache.org/jira/browse/HDDS-1632
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [~eyang] reported in HDDS-1609 that the hadoop-runner image can be started 
> *without* mounting a real hadoop (usually, it's mounted) AND using a different 
> uid:
> {code}
> docker run -it  -u $(id -u):$(id -g) apache/hadoop-runner bash
> docker: Error response from daemon: OCI runtime create failed: 
> container_linux.go:345: starting container process caused "chdir to cwd 
> (\"/opt/hadoop\") set in config.json failed: permission denied": unknown.
> {code}
> There are two blocking problems here:
>  * the /opt/hadoop directory (which is the CWD inside the container) is 700 
> instead of 755
>  * The usage of sudo in started scripts (sudo is not possible if the real 
> user is not added to the /etc/passwd)
> Both of them are addressed by this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1632) Make the hadoop home word readable and avoid sudo in hadoop-runner

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1632:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

[~eyang] Thanks for the review, [~elek] Thanks for the patch, I have committed 
this to the trunk.

> Make the hadoop home word readable and avoid sudo in hadoop-runner
> --
>
> Key: HDDS-1632
> URL: https://issues.apache.org/jira/browse/HDDS-1632
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [~eyang] reported in HDDS-1609 that the hadoop-runner image can be started 
> *without* mounting a real hadoop (usually, it's mounted) AND using a different 
> uid:
> {code}
> docker run -it  -u $(id -u):$(id -g) apache/hadoop-runner bash
> docker: Error response from daemon: OCI runtime create failed: 
> container_linux.go:345: starting container process caused "chdir to cwd 
> (\"/opt/hadoop\") set in config.json failed: permission denied": unknown.
> {code}
> There are two blocking problems here:
>  * the /opt/hadoop directory (which is the CWD inside the container) is 700 
> instead of 755
>  * The usage of sudo in started scripts (sudo is not possible if the real 
> user is not added to the /etc/passwd)
> Both of them are addressed by this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1541) Implement addAcl,removeAcl,setAcl,getAcl for Key

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1541:
---
Target Version/s: 0.4.1

> Implement addAcl,removeAcl,setAcl,getAcl  for Key
> -
>
> Key: HDDS-1541
> URL: https://issues.apache.org/jira/browse/HDDS-1541
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 77h 40m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Key



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1630) Copy default configuration files to the writeable directory

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1630:
---
Target Version/s: 0.4.1

> Copy default configuration files to the writeable directory
> ---
>
> Key: HDDS-1630
> URL: https://issues.apache.org/jira/browse/HDDS-1630
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1518 separates the read-only directories (/opt/ozone, /opt/hadoop) from 
> the read-write directories (/etc/hadoop, /var/log/hadoop). 
> The configuration directory and log directory should be writeable, and to make 
> it easier to run the docker-compose based pseudo clusters with a *different* 
> host uid we started to use a different config dir.
> But we need all the defaults in the configuration dir. In this patch I add a 
> small fragment to the hadoop-runner image to copy the default files (if 
> available).
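
A minimal sketch of such a fragment, assuming the defaults are shipped read-only 
under /opt/hadoop/etc/hadoop and the writeable config dir is given by 
HADOOP_CONF_DIR; both locations are assumptions for the example:

{code}
# Copy bundled default configuration files into the writeable config dir,
# without overwriting anything the user already mounted there.
if [ -d /opt/hadoop/etc/hadoop ]; then
  cp -rn /opt/hadoop/etc/hadoop/. "${HADOOP_CONF_DIR:-/etc/hadoop}/" || true
fi
{code}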



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1632) Make the hadoop home word readable and avoid sudo in hadoop-runner

2019-06-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1632:
---
Target Version/s: 0.4.1

> Make the hadoop home word readable and avoid sudo in hadoop-runner
> --
>
> Key: HDDS-1632
> URL: https://issues.apache.org/jira/browse/HDDS-1632
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> [~eyang] reported in HDDS-1609 that the hadoop-runner image can be started 
> *without* mounting a real hadoop (usually, it's mounted) AND using a different 
> uid:
> {code}
> docker run -it  -u $(id -u):$(id -g) apache/hadoop-runner bash
> docker: Error response from daemon: OCI runtime create failed: 
> container_linux.go:345: starting container process caused "chdir to cwd 
> (\"/opt/hadoop\") set in config.json failed: permission denied": unknown.
> {code}
> There are two blocking problems here:
>  * the /opt/hadoop directory (which is the CWD inside the container) is 700 
> instead of 755
>  * The usage of sudo in started scripts (sudo is not possible if the real 
> user is not added to the /etc/passwd)
> Both of them are addressed by this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


