[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1169: HDDS-3930. Fix OMKeyDeletesRequest.

2020-07-07 Thread GitBox


bharatviswa504 commented on a change in pull request #1169:
URL: https://github.com/apache/hadoop-ozone/pull/1169#discussion_r451283029



##
File path: hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
##
@@ -867,10 +867,10 @@ message DeletedKeys {
 }
 
 message DeleteKeysResponse {
-repeated KeyInfo deletedKeys = 1;
-repeated KeyInfo unDeletedKeys = 2;

Review comment:
   I don't see any usage of it, nor any real use of this result in the 
client. 
   Also, this is a new API added on current trunk that will be released in the 
next Ozone release.
   
   One more reason: since we fail the whole batch, I don't see a real use case 
for it with the current behavior.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1169: HDDS-3930. Fix OMKeyDeletesRequest.

2020-07-07 Thread GitBox


smengcl commented on a change in pull request #1169:
URL: https://github.com/apache/hadoop-ozone/pull/1169#discussion_r451240368



##
File path: hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
##
@@ -867,10 +867,10 @@ message DeletedKeys {
 }
 
 message DeleteKeysResponse {
-repeated KeyInfo deletedKeys = 1;
-repeated KeyInfo unDeletedKeys = 2;

Review comment:
   `unDeletedKeys` seems to have been added on purpose in this 
[comment](https://github.com/apache/hadoop-ozone/pull/814/files#r429342829).
   
   Will removing those two fields cause any compatibility issues?
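   For context on the compatibility question: protobuf's own guidance for 
deleting fields is to reserve the freed field numbers (and optionally names) so 
a later change cannot reuse them in a wire-incompatible way. A hypothetical 
sketch, not the actual patch:

```proto
message DeleteKeysResponse {
  // Hypothetical: reserving the removed numbers/names prevents their
  // accidental reuse by a future, wire-incompatible field definition.
  reserved 1, 2;
  reserved "deletedKeys", "unDeletedKeys";
}
```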

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeysDeleteRequest.java
##
@@ -116,89 +111,112 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
 getOmRequest());
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+// As right now, only client exposed API is for a single volume and
+// bucket. So, all entries will have same volume name and bucket name.
+// So, we can validate once.
+if (deleteKeyArgsList.size() > 0) {
+  volumeName = deleteKeyArgsList.get(0).getVolumeName();
+  bucketName = deleteKeyArgsList.get(0).getBucketName();
+}
+
+boolean acquiredLock =
+omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK, volumeName,
+bucketName);
+
+int indexFailed = 0;
 try {
-  for (KeyArgs deleteKeyArgs : deleteKeyArgsList) {
+
+  // Validate bucket and volume exists or not.
+  if (deleteKeyArgsList.size() > 0) {
+validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
+  }
+
+
+  // Check if any of the key in the batch cannot be deleted. If exists the
+  // batch delete will be failed.
+
+  for (indexFailed = 0; indexFailed < deleteKeyArgsList.size();
+   indexFailed++) {
+KeyArgs deleteKeyArgs = deleteKeyArgsList.get(0);

Review comment:
   Why do we always get the first element here? Is it a typo?
   
   Also, I didn't find existing tests for `OMKeysDeleteRequest` or 
`OMKeysDeleteResponse`. It'd be a good idea to add some for sanity checks.
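   To illustrate why the reviewer flags `deleteKeyArgsList.get(0)`: each 
iteration must read element `i`, or only the first key in the batch is ever 
validated. A minimal sketch of the intended loop, with hypothetical names 
(`firstMissingIndex`, plain `String` keys) standing in for the real request 
classes:

```java
import java.util.Arrays;
import java.util.List;

public class BatchDeleteLoopSketch {
    // Sketch of the loop under review: read element i, not element 0.
    // Returns the index of the first key that cannot be deleted (so the
    // whole batch can be failed there), or -1 if all keys exist.
    static int firstMissingIndex(List<String> keysToDelete, List<String> keyTable) {
        for (int i = 0; i < keysToDelete.size(); i++) {
            String key = keysToDelete.get(i);  // the diff uses get(0) -- the suspected typo
            if (!keyTable.contains(key)) {
                return i;  // fail the whole batch at this index
            }
        }
        return -1;  // every key exists; the batch can proceed
    }

    public static void main(String[] args) {
        List<String> keyTable = Arrays.asList("key1", "key2");
        System.out.println(firstMissingIndex(Arrays.asList("key1", "key2"), keyTable));   // -1
        System.out.println(firstMissingIndex(Arrays.asList("key1", "missing"), keyTable)); // 1
    }
}
```

   With `get(0)`, the second call above would wrongly return -1, since only 
"key1" would ever be checked.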





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cku328 commented on pull request #1171: HDDS-3932. Hide jOOQ logo message from the log output on compile

2020-07-07 Thread GitBox


cku328 commented on pull request #1171:
URL: https://github.com/apache/hadoop-ozone/pull/1171#issuecomment-655265688


   @avijayanhwx 
   Okay, thanks for the review. I'll retrigger the CI check.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Attachment: jstack.txt

> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: jstack.txt, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM occurred.
> !screenshot-1.png! 
>  !screenshot-2.png! 
>  !screenshot-3.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Description: 
When creating the 22345th Datanode State Machine Thread, an OOM occurred.
!screenshot-1.png! 
 !screenshot-2.png! 
 !screenshot-3.png! 

  was:
When create 22345th  Datanode State Machine Thread, OOM happened.
!screenshot-1.png! 
 !screenshot-2.png! 


> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM occurred.
> !screenshot-1.png! 
>  !screenshot-2.png! 
>  !screenshot-3.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Attachment: screenshot-3.png

> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM occurred.
> !screenshot-1.png! 
>  !screenshot-2.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1151: HDDS-3191: switch from SCMPipelineManager to PipelineManagerV2Impl

2020-07-07 Thread GitBox


GlenGeng commented on a change in pull request #1151:
URL: https://github.com/apache/hadoop-ozone/pull/1151#discussion_r451254135



##
File path: hadoop-ozone/recon/pom.xml
##
@@ -108,6 +108,7 @@
 
 
   pnpm config set store-dir ~/.pnpm-store
+  
false

Review comment:
   Thanks for pointing that out! Will drop this change.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ha/SCMHAConfiguration.java
##
@@ -78,7 +78,7 @@
   description = "The size of the raft segment used by Apache Ratis on" +
   " SCM. (16 KB by default)"
   )
-  private long raftSegmentSize = 16L * 1024L;
+  private double raftSegmentSize = 16L * 1024L;

Review comment:
   You will see that `ConfigType.SIZE` is reflected as a `StorageUnit`, 
which needs to be a `double`.
   
   Refer to `ConfigurationReflectionUtil`: 
   ```
 case SIZE:
   forcedFieldSet(field, configuration,
   from.getStorageSize(key, "0B", configAnnotation.sizeUnit()));
   break;
   ```
   
   and `ConfigurationSource`:
   ```
  default double getStorageSize(String name, String defaultValue,
  StorageUnit targetUnit)
   ```
   
   This code was not reached before, which is why it was merged as part of 
HDDS-2823 without breaking CI.
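   The underlying reason the field must be a `double`: unit conversion of a 
byte count can produce a fractional result, which a `long` field could not 
hold. A minimal illustration (the helper `storageSizeInMB` is hypothetical, 
standing in for what `getStorageSize` computes):

```java
public class SizeConfigSketch {
    // A size in bytes converted to a larger storage unit can be fractional,
    // which is why a SIZE-typed config field is injected as a double.
    static double storageSizeInMB(long bytes) {
        return bytes / (1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        long raftSegmentSize = 16L * 1024L;  // 16 KB, the default in the diff
        System.out.println(storageSizeInMB(raftSegmentSize));  // 0.015625
    }
}
```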





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3922) Display the pipeline info on scm web page

2020-07-07 Thread lihanran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lihanran reassigned HDDS-3922:
--

Assignee: lihanran

> Display the pipeline info on scm web page
> -
>
> Key: HDDS-3922
> URL: https://issues.apache.org/jira/browse/HDDS-3922
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.7.0
>Reporter: maobaolong
>Assignee: lihanran
>Priority: Major
>  Labels: SCM, UI, webpage
> Attachments: image-2020-07-06-10-17-08-324.png, 
> image-2020-07-06-10-18-58-151.png
>
>
> !image-2020-07-06-10-18-58-151.png!
>  
> !image-2020-07-06-10-17-08-324.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] iamabug commented on a change in pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-07-07 Thread GitBox


iamabug commented on a change in pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175#discussion_r451238736



##
File path: hadoop-hdds/docs/content/security/SecuringDatanodes.zh.md
##
@@ -0,0 +1,53 @@
+---
+title: "Securing Datanodes"
+date: "2019-April-03"
+weight: 2
+summary: Explains the different modes of securing datanodes, including Kerberos and the manual and automatic issuance of certificates.
+icon: th
+---
+
+
+
+In the past, datanodes in Hadoop were secured by creating keytab files on the 
nodes. Ozone switches to datanode certificates instead: in a secure Ozone 
cluster, datanodes no longer need Kerberos.
+
+However, we also support traditional Kerberos-based authentication for the 
convenience of existing users; they only need to set the following parameters 
in hdfs-site.xml:
+
+Parameter|Description
+|--
+dfs.datanode.kerberos.principal| The datanode service principal, e.g. dn/_h...@realm.com
+dfs.datanode.keytab.file| The keytab file used by the datanode process
+hdds.datanode.http.kerberos.principal| The service principal of the datanode HTTP server
+hdds.datanode.http.kerberos.keytab| The keytab file used by the datanode HTTP server
+
+
+## How to secure a datanode
+
+In Ozone, the first thing a datanode does after starting up and discovering 
the SCM address is to create a private key and send a certificate request to 
the SCM.
+
+Certificate issuance via Kerberos (current model)
+The SCM has a built-in CA that approves certificate requests; if the datanode 
already has a Kerberos keytab, the SCM trusts it and issues a certificate 
automatically.
+
+
+Manual issuance (in development)
+If the datanode is brand new and has no keytab, the certificate request must 
wait for an administrator's approval. In other words, the chain of trust is 
established by the cluster administrator.

Review comment:
   Maybe a typo in the original doc: `band new` should be `brand new`? If so, 
should I fix it in this PR or open a new one?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] iamabug commented on a change in pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-07-07 Thread GitBox


iamabug commented on a change in pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175#discussion_r451237935



##
File path: hadoop-hdds/docs/content/security/SecuringDatanodes.zh.md
##
@@ -0,0 +1,53 @@
+---
+title: "Securing Datanodes"
+date: "2019-April-03"
+weight: 2
+summary: Explains the different modes of securing datanodes, including Kerberos and the manual and automatic issuance of certificates.
+icon: th
+---
+
+
+
+In the past, datanodes in Hadoop were secured by creating keytab files on the 
nodes. Ozone switches to datanode certificates instead: in a secure Ozone 
cluster, datanodes no longer need Kerberos.
+
+However, we also support traditional Kerberos-based authentication for the 
convenience of existing users; they only need to set the following parameters 
in hdfs-site.xml:
+
+Parameter|Description
+|--
+dfs.datanode.kerberos.principal| The datanode service principal, e.g. dn/_h...@realm.com
+dfs.datanode.keytab.file| The keytab file used by the datanode process
+hdds.datanode.http.kerberos.principal| The service principal of the datanode HTTP server
+hdds.datanode.http.kerberos.keytab| The keytab file used by the datanode HTTP server
+
+
+## How to secure a datanode
+
+In Ozone, the first thing a datanode does after starting up and discovering 
the SCM address is to create a private key and send a certificate request to 
the SCM.
+
+Certificate issuance via Kerberos (current model)
+The SCM has a built-in CA that approves certificate requests; if the datanode 
already has a Kerberos keytab, the SCM trusts it and issues a certificate 
automatically.
+
+
+Manual issuance (in development)
+If the datanode is brand new and has no keytab, the certificate request must 
wait for an administrator's approval. In other words, the chain of trust is 
established by the cluster administrator.

Review comment:
   The original doc says `the web of trust`, but to my understanding this is 
called the chain of trust in PKI.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] iamabug commented on a change in pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-07-07 Thread GitBox


iamabug commented on a change in pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175#discussion_r451237218



##
File path: hadoop-hdds/docs/content/security/SecuringDatanodes.zh.md
##
@@ -0,0 +1,53 @@
+---
+title: "Securing Datanodes"
+date: "2019-April-03"
+weight: 2
+summary: Explains the different modes of securing datanodes, including Kerberos and the manual and automatic issuance of certificates.

Review comment:
   A question about the original doc: which is more precise, `datanode` or 
`data node`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2766) security/SecuringDataNodes.md

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2766:
-
Labels: pull-request-available  (was: )

> security/SecuringDataNodes.md
> -
>
> Key: HDDS-2766
> URL: https://issues.apache.org/jira/browse/HDDS-2766
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiang Zhang
>Assignee: Xiang Zhang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] iamabug opened a new pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-07-07 Thread GitBox


iamabug opened a new pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175


   ## What changes were proposed in this pull request?
   
   Translation of the doc 
https://hadoop.apache.org/ozone/docs/0.5.0-beta/security/securingdatanodes.html
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2766
   
   ## How was this patch tested?
   
   hugo server
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-3918) ConcurrentModificationException in ContainerReportHandler.onMessage

2020-07-07 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153148#comment-17153148
 ] 

Xiaoyu Yao edited comment on HDDS-3918 at 7/8/20, 1:16 AM:
---

We have a race condition here on the container set of the 
NodeStateMap#nodeToContainer map. The ICR (Incremental Container Report) and 
CR (Container Report) are processed in separate executor threads. 

ICR simply add()s to the container set.
CR get()s and set()s the container set. 

HDDS-3110 has the correct root-cause analysis of the race condition but does 
not choose a thread-safe version of the HashSet, so the race still exists, as 
shown in the SCM logs. 

A simple unit test, TestCME.java, is attached to verify this, and the fix has 
been posted in the PR. 
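A minimal sketch of this class of race, under simplifying assumptions: the 
mutation happens in the same thread (standing in for the concurrent ICR 
add()) while the set is iterated (as the CR path does when copying it). A 
plain HashSet throws ConcurrentModificationException; a 
ConcurrentHashMap-backed key set, whose iterator is weakly consistent, does 
not. The class and method names here are hypothetical, not the actual fix.

```java
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CmeSketch {
    // Iterate the set while mutating it once; report whether iteration
    // survives. HashSet's fail-fast iterator throws CME on the next
    // next() call after a structural modification.
    static boolean survivesMutationDuringCopy(Set<Integer> containers) {
        containers.addAll(Arrays.asList(1, 2, 3));
        try {
            boolean mutated = false;
            for (Integer id : containers) {
                if (!mutated) {
                    containers.add(99);  // stands in for the concurrent ICR add()
                    mutated = true;
                }
            }
            return true;
        } catch (ConcurrentModificationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(survivesMutationDuringCopy(new HashSet<>()));               // false
        System.out.println(survivesMutationDuringCopy(ConcurrentHashMap.newKeySet())); // true
    }
}
```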


was (Author: xyao):
We have a race condition here on the container set of the 
NodeStateMap#nodeToContainer map. The ICR (Incrementation container report) and 
CR (container report) and processed in separate executors threads. 

ICR simply add() to the container set.
CR get() and set() to the container set. 

HDDS-3110 has the correct root cause analysis of the race condition but does 
not choose the thread safe version of the HashSet. So the race still exist as 
shown in SCM logs. 

I have written a simple unit test to verify this and the will post the fix 
shortly. 


> ConcurrentModificationException in ContainerReportHandler.onMessage
> ---
>
> Key: HDDS-3918
> URL: https://issues.apache.org/jira/browse/HDDS-3918
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: TestCME.java
>
>
> 2020-07-03 14:51:45,489 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@8f6e7cb
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-07-03 14:51:45,648 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@49d2b84b
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3918) ConcurrentModificationException in ContainerReportHandler.onMessage

2020-07-07 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-3918:
-
Attachment: TestCME.java

> ConcurrentModificationException in ContainerReportHandler.onMessage
> ---
>
> Key: HDDS-3918
> URL: https://issues.apache.org/jira/browse/HDDS-3918
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: TestCME.java
>
>
> 2020-07-03 14:51:45,489 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@8f6e7cb
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-07-03 14:51:45,648 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@49d2b84b
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3918) ConcurrentModificationException in ContainerReportHandler.onMessage

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3918:
-
Labels: pull-request-available  (was: )

> ConcurrentModificationException in ContainerReportHandler.onMessage
> ---
>
> Key: HDDS-3918
> URL: https://issues.apache.org/jira/browse/HDDS-3918
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>
> 2020-07-03 14:51:45,489 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@8f6e7cb
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-07-03 14:51:45,648 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@49d2b84b
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao opened a new pull request #1174: HDDS-3918. ConcurrentModificationException in ContainerReportHandler.…

2020-07-07 Thread GitBox


xiaoyuyao opened a new pull request #1174:
URL: https://github.com/apache/hadoop-ozone/pull/1174


   …onMessage.
   
   ## What changes were proposed in this pull request?
   
   Use thread safe HashSet for NodeStateMap#nodeToContainer map
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3918
   
   ## How was this patch tested?
   
   Unit test (TestCME.java attached to the linked JIRA.)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3918) ConcurrentModificationException in ContainerReportHandler.onMessage

2020-07-07 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153148#comment-17153148
 ] 

Xiaoyu Yao commented on HDDS-3918:
--

We have a race condition here on the container set of the 
NodeStateMap#nodeToContainer map. The ICR (Incremental Container Report) and 
CR (Container Report) are processed in separate executor threads. 

ICR simply add()s to the container set.
CR get()s and set()s the container set. 

HDDS-3110 has the correct root-cause analysis of the race condition but does 
not choose a thread-safe version of the HashSet, so the race still exists, as 
shown in the SCM logs. 

I have written a simple unit test to verify this and will post the fix 
shortly. 


> ConcurrentModificationException in ContainerReportHandler.onMessage
> ---
>
> Key: HDDS-3918
> URL: https://issues.apache.org/jira/browse/HDDS-3918
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>
> 2020-07-03 14:51:45,489 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@8f6e7cb
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-07-03 14:51:45,648 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@49d2b84b
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3853) Container marked as missing on datanode while container directory do exist

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3853:


Assignee: runzhiwang  (was: Shashikant Banerjee)

> Container marked as missing on datanode while container directory do exist
> --
>
> Key: HDDS-3853
> URL: https://issues.apache.org/jira/browse/HDDS-3853
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Sammi Chen
>Assignee: runzhiwang
>Priority: Major
>
> {code}
> INFO org.apache.hadoop.ozone.container.common.impl.HddsDispatcher: Operation: 
> PutBlock , Trace ID: 487c959563e884b9:509a3386ba37abc6:487c959563e884b9:0 , 
> Message: ContainerID 1744 has been lost and and cannot be recreated on this 
> DataNode , Result: CONTAINER_MISSING , StorageContainerException Occurred.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 1744 has been lost and and cannot be recreated on this DataNode
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:238)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:166)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:395)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:405)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$6(ContainerStateMachine.java:749)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>  ERROR 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine:
>  gid group-1376E41FD581 : ApplyTransaction failed. cmd PutBlock logIndex 
> 40079 msg : ContainerID 1744 has been lost and and cannot be recreated on 
> this DataNode Container Result: CONTAINER_MISSING
>  ERROR 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis:
>  pipeline Action CLOSE on pipeline 
> PipelineID=de21dfcf-415c-4901-84ca-1376e41fd581.Reason : Ratis Transaction 
> failure in datanode 33b49c34-caa2-4b4f-894e-dce7db4f97b9 with role FOLLOWER 
> .Triggering pipeline close action
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3705) [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3705:
-
Target Version/s: 0.6.0  (was: 0.7.0)
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> [OFS] Implement getTrashRoots for trash cleanup
> ---
>
> Key: HDDS-3705
> URL: https://issues.apache.org/jira/browse/HDDS-3705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>  Labels: pull-request-available
>
> We need to override {{getTrashRoots()}} as well in order to allow for an 
> easier future OM trash cleanup implementation.
> This jira doesn't directly implement the trash cleanup feature itself, but is 
> a prerequisite for that feature.
> This is a follow-up jira to HDDS-3574: 
> https://github.com/apache/hadoop-ozone/pull/941#discussion_r428212741



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl commented on pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#issuecomment-655205691


   Thanks @xiaoyuyao  for the review. Will merge shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl merged pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl merged pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1164: HDDS-3824: OM read requests should make SCM#refreshPipeline outside BUCKET_LOCK

2020-07-07 Thread GitBox


bharatviswa504 commented on a change in pull request #1164:
URL: https://github.com/apache/hadoop-ozone/pull/1164#discussion_r451209121



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -659,15 +660,6 @@ public OmKeyInfo lookupKey(OmKeyArgs args, String 
clientAddress)
   });
 }
   }
-  // Refresh container pipeline info from SCM

Review comment:
   Can we also move the generation of the secret token outside of the 
lock?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1151: HDDS-3191: switch from SCMPipelineManager to PipelineManagerV2Impl

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1151:
URL: https://github.com/apache/hadoop-ozone/pull/1151#discussion_r451155274



##
File path: hadoop-ozone/recon/pom.xml
##
@@ -108,6 +108,7 @@
 
 
   pnpm config set store-dir ~/.pnpm-store
+  
false

Review comment:
   This change seems unrelated. Can you merge it from master? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1151: HDDS-3191: switch from SCMPipelineManager to PipelineManagerV2Impl

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1151:
URL: https://github.com/apache/hadoop-ozone/pull/1151#discussion_r451149178



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ha/SCMHAConfiguration.java
##
@@ -78,7 +78,7 @@
   description = "The size of the raft segment used by Apache Ratis on" +
   " SCM. (16 KB by default)"
   )
-  private long raftSegmentSize = 16L * 1024L;
+  private double raftSegmentSize = 16L * 1024L;

Review comment:
   Is there a reason to change long to double here and force conversion 
later? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


xiaoyuyao commented on pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#issuecomment-655139139


   LGTM, +1 pending CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1169: HDDS-3930. Fix OMKeyDeletesRequest.

2020-07-07 Thread GitBox


sonarcloud[bot] commented on pull request #1169:
URL: https://github.com/apache/hadoop-ozone/pull/1169#issuecomment-655133377


   SonarCloud Quality Gate failed.
   
   [1 Bug](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1169&resolved=false&types=BUG)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1169&resolved=false&types=VULNERABILITY)
 (and [0 Security Hotspots](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1169&resolved=false&types=SECURITY_HOTSPOT)
 to review)
   [2 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1169&resolved=false&types=CODE_SMELL)
   [96.4% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1169&metric=new_coverage&view=list)
   [0.0% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1169&metric=new_duplicated_lines_density&view=list)
   
   The version of Java (1.8.0_232) you have used to run this analysis is 
deprecated and we will stop accepting it from October 2020. Please update to 
at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-07-07 Thread GitBox


arp7 commented on pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149#issuecomment-655129013


   Yeah this was a conscious choice. I feel usage of unqualified paths in HDFS 
can be ambiguous and error-prone when multiple clusters/federation are involved.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3874) ITestRootedOzoneContract tests are flaky

2020-07-07 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153089#comment-17153089
 ] 

Siyao Meng commented on HDDS-3874:
--

[~elek] I recall seeing {{it-filesystem-contract}} fail for o3fs once or twice 
in PR checks a long time ago, but it might not be related imo.

> ITestRootedOzoneContract tests are flaky
> 
>
> Key: HDDS-3874
> URL: https://issues.apache.org/jira/browse/HDDS-3874
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Siyao Meng
>Priority: Blocker
>
> Different tests are failed with similar reasons:
> {code}
> java.lang.Exception: test timed out after 18 milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:537)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:499)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:514)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:149)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:483)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:457)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:510)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.createFile(ContractTestUtils.java:638)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenFileTwice(AbstractContractOpenTest.java:135)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Example:
> https://github.com/elek/ozone-build-results/blob/master/2020/06/16/1051/it-filesystem-contract/hadoop-ozone/integration-test/org.apache.hadoop.fs.ozone.contract.rooted.ITestRootedOzoneContractOpen.txt
> But same problem here:
> https://github.com/elek/hadoop-ozone/runs/810175295?check_suite_focus=true 
> (contract)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451129142



##
File path: 
hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
##
@@ -516,6 +520,62 @@ public FileStatusAdapter getFileStatus(String path, URI 
uri,
 }
   }
 
+  /**
+   * Get trash roots for current user or all users.
+   *
+   * Note:
+   * 1. When allUsers flag is false, this only returns the trash roots for
+   * those that the current user has access to.
+   * 2. Also it is not particularly efficient to use this API when there are
+   * a lot of volumes and buckets as the client has to iterate through all
+   * buckets in all volumes.
+   *
+   * @param allUsers return trashRoots of all users if true, used by emptier
+   * @param fs Pointer to the current OFS FileSystem
+   * @return
+   */
+  public Collection<FileStatus> getTrashRoots(boolean allUsers,
+  BasicRootedOzoneFileSystem fs) {
+List<FileStatus> ret = new ArrayList<>();
+try {
+  Iterator<? extends OzoneVolume> iterVol;
+  String username = UserGroupInformation.getCurrentUser().getUserName();
+  if (allUsers) {
+iterVol = objectStore.listVolumes("");
+  } else {
+iterVol = objectStore.listVolumesByUser(username, "", "");
+  }
+  while (iterVol.hasNext()) {
+OzoneVolume volume = iterVol.next();
+Path volumePath = new Path(OZONE_URI_DELIMITER, volume.getName());
+Iterator<? extends OzoneBucket> bucketIter = volume.listBuckets("");
+while (bucketIter.hasNext()) {
+  OzoneBucket bucket = bucketIter.next();
+  Path bucketPath = new Path(volumePath, bucket.getName());
+  Path trashRoot = new Path(bucketPath, FileSystem.TRASH_PREFIX);
+  if (allUsers) {
+if (fs.exists(trashRoot)) {
+  for (FileStatus candidate : fs.listStatus(trashRoot)) {
+if (fs.exists(candidate.getPath()) && candidate.isDirectory()) {
+  ret.add(candidate);
+}
+  }
+}
+  } else {
+Path userTrash = new Path(trashRoot, username);
+if (fs.exists(userTrash) &&
+fs.getFileStatus(userTrash).isDirectory()) {
+  ret.add(fs.getFileStatus(userTrash));
+}
+  }
+}
+  }
+} catch (IOException ex) {
+  throw new RuntimeException(ex);

Review comment:
   done 1940f99f524ef589b2c5a1613031f3cb7b1066b4





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451126852



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -871,4 +876,103 @@ public void testFailToDeleteRoot() throws IOException {
 Assert.assertFalse(fs.delete(new Path("/"), true));
   }
 
+  /**
+   * Test getTrashRoots() in OFS. Different from the existing test for o3fs.
+   */
+  @Test
+  public void testGetTrashRoots() throws IOException {
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+OzoneVolume volume1 = objectStore.getVolume(volumeName);
+String prevOwner = volume1.getOwner();
+// Set owner of the volume to current user, so it will show up in vol list
+Assert.assertTrue(volume1.setOwner(username));
+
+Path trashRoot1 = new Path(bucketPath, TRASH_PREFIX);
+Path user1Trash1 = new Path(trashRoot1, username);
+// When the user trash dir hasn't been created
+Assert.assertEquals(0, fs.getTrashRoots(false).size());
+Assert.assertEquals(0, fs.getTrashRoots(true).size());
+// Let's create our first user1 (current user) trash dir.
+fs.mkdirs(user1Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=1
+Collection<FileStatus> res = fs.getTrashRoots(false);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+res = fs.getTrashRoots(true);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+
+// Create one more trash for user2 in the same bucket
+Path user2Trash1 = new Path(trashRoot1, "testuser2");
+fs.mkdirs(user2Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=2
+Assert.assertEquals(1, fs.getTrashRoots(false).size());

Review comment:
   sure thing. done in ebf104b3d6b0c6531ce2bec33fe00c996fcb16cf





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1164: HDDS-3824: OM read requests should make SCM#refreshPipeline outside BUCKET_LOCK

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1164:
URL: https://github.com/apache/hadoop-ozone/pull/1164#discussion_r451122204



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -1706,36 +1718,41 @@ public OzoneFileStatus getFileStatus(OmKeyArgs args) 
throws IOException {
 
   // Check if the key is a file.
   String fileKeyBytes = metadataManager.getOzoneKey(
-  volumeName, bucketName, keyName);
-  OmKeyInfo fileKeyInfo = metadataManager.getKeyTable().get(fileKeyBytes);
+  volumeName, bucketName, keyName);
+  fileKeyInfo = metadataManager.getKeyTable().get(fileKeyBytes);
+
+  // Check if the key is a directory.
+  if (fileKeyInfo == null) {
+String dirKey = OzoneFSUtils.addTrailingSlashIfNeeded(keyName);
+String dirKeyBytes = metadataManager.getOzoneKey(
+volumeName, bucketName, dirKey);
+OmKeyInfo dirKeyInfo = metadataManager.getKeyTable().get(dirKeyBytes);
+if (dirKeyInfo != null) {
+  return new OzoneFileStatus(dirKeyInfo, scmBlockSize, true);
+}
+  }
+} finally {
+  metadataManager.getLock().releaseReadLock(BUCKET_LOCK, volumeName,
+  bucketName);
+
+  // if the key is a file then do refresh pipeline info in OM by asking SCM
   if (fileKeyInfo != null) {
-if (args.getRefreshPipeline()) {
+if (refreshPipeline) {
   refreshPipeline(fileKeyInfo);
 }

Review comment:
   should we order datanodes when the key is a file for caller like 
getFileStatus?
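The restructuring under review follows a common lock-narrowing pattern: read the key metadata under the bucket read lock, then perform the expensive SCM round-trip only after the lock is released. A generic sketch of that pattern, with illustrative names (`LockNarrowing`, `refresh`) rather than the actual OM code:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockNarrowing {
  private final ReadWriteLock bucketLock = new ReentrantReadWriteLock();
  private String cachedValue = "keyInfo";

  // The expensive call (analogous to SCM#refreshPipeline) is kept outside
  // the lock so other readers and writers are not blocked on a remote
  // round-trip.
  String lookup() {
    String value;
    bucketLock.readLock().lock();
    try {
      value = cachedValue;        // cheap local read under the lock
    } finally {
      bucketLock.readLock().unlock();
    }
    return refresh(value);        // slow work after the lock is released
  }

  private String refresh(String v) {
    return v + "+refreshed";      // stands in for the SCM round-trip
  }

  public static void main(String[] args) {
    System.out.println(new LockNarrowing().lookup()); // keyInfo+refreshed
  }
}
```

The trade-off is that the value read under the lock may be stale by the time the slow call completes, which is acceptable here because pipeline info is refreshed from SCM anyway.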





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1164: HDDS-3824: OM read requests should make SCM#refreshPipeline outside BUCKET_LOCK

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1164:
URL: https://github.com/apache/hadoop-ozone/pull/1164#discussion_r451122204



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -1706,36 +1718,41 @@ public OzoneFileStatus getFileStatus(OmKeyArgs args) 
throws IOException {
 
   // Check if the key is a file.
   String fileKeyBytes = metadataManager.getOzoneKey(
-  volumeName, bucketName, keyName);
-  OmKeyInfo fileKeyInfo = metadataManager.getKeyTable().get(fileKeyBytes);
+  volumeName, bucketName, keyName);
+  fileKeyInfo = metadataManager.getKeyTable().get(fileKeyBytes);
+
+  // Check if the key is a directory.
+  if (fileKeyInfo == null) {
+String dirKey = OzoneFSUtils.addTrailingSlashIfNeeded(keyName);
+String dirKeyBytes = metadataManager.getOzoneKey(
+volumeName, bucketName, dirKey);
+OmKeyInfo dirKeyInfo = metadataManager.getKeyTable().get(dirKeyBytes);
+if (dirKeyInfo != null) {
+  return new OzoneFileStatus(dirKeyInfo, scmBlockSize, true);
+}
+  }
+} finally {
+  metadataManager.getLock().releaseReadLock(BUCKET_LOCK, volumeName,
+  bucketName);
+
+  // if the key is a file then do refresh pipeline info in OM by asking SCM
   if (fileKeyInfo != null) {
-if (args.getRefreshPipeline()) {
+if (refreshPipeline) {
   refreshPipeline(fileKeyInfo);
 }

Review comment:
   should we order datanodes when the key is a file?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-07-07 Thread GitBox


bharatviswa504 edited a comment on pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149#issuecomment-655106253


   I think this is a deliberate choice not to pick the service id from config 
even if it is there, as Hive stores the table path in its metastore, so having 
a complete path is better.
   
   
   Just an example scenario:
   1. If the user has configs for both the remote and the local cluster, he may 
mistakenly use the remote cluster config when running commands, so he ends up 
talking to the remote cluster instead of the local one (which is what he 
wanted). These kinds of errors cannot be caught. I feel it is better to always 
require the service id from the user instead of picking it from config. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-07-07 Thread GitBox


bharatviswa504 edited a comment on pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149#issuecomment-655106253


   I think this is a deliberate choice not to pick the service id from config 
even if it is there, as Hive stores the table path in its metastore, so having 
a complete path is better.
   
   
   Just an example scenario:
   1. If the user has configs for both the remote and the local cluster, he may 
mistakenly use the remote cluster config when running commands, so he ends up 
talking to the remote cluster instead of the local one (which is what he 
wanted). These kinds of errors cannot be caught. I feel it is better to always 
require the service id from the user instead of picking it from config. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-07-07 Thread GitBox


bharatviswa504 commented on pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149#issuecomment-655106253


   I think this is a deliberate choice not to pick the service id from config 
even if it is there, as Hive stores the table path in its metastore, so having 
a complete path is better.
   
   
   Just an example scenario:
   1. If the user has configs for both the remote and the local cluster, he may 
mistakenly use the remote cluster config when running commands, so he ends up 
talking to the remote cluster instead of the local one (which is what he 
wanted). These kinds of errors cannot be caught. I feel it is better to always 
require the service id from the user instead of picking it from config. (The 
reason the user has both configs is that the user wants to perform distcp.)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3878) Make OMHA serviceID optional if one (but only one) is defined in the config

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3878:
-
Labels: pull-request-available  (was: )

> Make OMHA serviceID optional if one (but only one) is defined in the config 
> 
>
> Key: HDDS-3878
> URL: https://issues.apache.org/jira/browse/HDDS-3878
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> om.serviceId is required in all client parameters in case of OM HA, even 
> if there is only one om.serviceId defined and it could be chosen 
> automatically.
> My goal is:
>  1. Provide better usability
>  2. Simplify the documentation task ;-)
> Use the om.serviceId from the config if:
>  1. the config is available
>  2. OM HA is configured 
>  3. only one service is configured
> It also makes it easier to run the same tests with/without HA
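The fallback rule described in the issue can be sketched roughly as follows. This is an illustrative sketch only, not the actual patch in #1149; the class and method names here are hypothetical, not real Ozone API.

```java
import java.util.Collection;

// Hypothetical sketch: a client defaults to the om.serviceId from the
// configuration only when exactly one service id is defined there;
// otherwise the user must supply it explicitly.
public class OmServiceIdFallback {
  public static String resolve(String explicitId,
      Collection<String> configuredIds) {
    if (explicitId != null && !explicitId.isEmpty()) {
      return explicitId; // an explicitly supplied id always wins
    }
    if (configuredIds.size() == 1) {
      return configuredIds.iterator().next(); // unambiguous default
    }
    throw new IllegalArgumentException(
        "om.serviceId is required: " + configuredIds.size()
            + " service ids are configured");
  }
}
```

With two configured service ids the ambiguous case still fails fast, which is the safety property the discussion above cares about.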



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451112711



##
File path: 
hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
##
@@ -516,6 +520,62 @@ public FileStatusAdapter getFileStatus(String path, URI 
uri,
 }
   }
 
+  /**
+   * Get trash roots for current user or all users.
+   *
+   * Note:
+   * 1. When allUsers flag is false, this only returns the trash roots for
+   * those that the current user has access to.
+   * 2. Also it is not particularly efficient to use this API when there are
+   * a lot of volumes and buckets as the client has to iterate through all
+   * buckets in all volumes.
+   *
+   * @param allUsers return trashRoots of all users if true, used by emptier
+   * @param fs Pointer to the current OFS FileSystem
+   * @return
+   */
+  public Collection<FileStatus> getTrashRoots(boolean allUsers,
+      BasicRootedOzoneFileSystem fs) {
+    List<FileStatus> ret = new ArrayList<>();
+    try {
+      Iterator<? extends OzoneVolume> iterVol;
+      String username = UserGroupInformation.getCurrentUser().getUserName();
+      if (allUsers) {
+        iterVol = objectStore.listVolumes("");
+      } else {
+        iterVol = objectStore.listVolumesByUser(username, "", "");
+      }
+      while (iterVol.hasNext()) {
+        OzoneVolume volume = iterVol.next();
+        Path volumePath = new Path(OZONE_URI_DELIMITER, volume.getName());
+        Iterator<? extends OzoneBucket> bucketIter = volume.listBuckets("");
+        while (bucketIter.hasNext()) {
+          OzoneBucket bucket = bucketIter.next();
+          Path bucketPath = new Path(volumePath, bucket.getName());
+          Path trashRoot = new Path(bucketPath, FileSystem.TRASH_PREFIX);
+          if (allUsers) {
+            if (fs.exists(trashRoot)) {
+              for (FileStatus candidate : fs.listStatus(trashRoot)) {
+                if (fs.exists(candidate.getPath()) && candidate.isDirectory()) {
+                  ret.add(candidate);
+                }
+              }
+            }
+          } else {
+            Path userTrash = new Path(trashRoot, username);
+            if (fs.exists(userTrash) &&
+                fs.getFileStatus(userTrash).isDirectory()) {
+              ret.add(fs.getFileStatus(userTrash));
+            }
+          }
+        }
+      }
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);

Review comment:
   I don't think we should throw RuntimeException here. We can log a 
warning and return an empty collection instead. 
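The pattern the reviewer suggests can be sketched like this. It is a minimal, self-contained illustration with hypothetical names (using java.util.logging as a stand-in for the real SLF4J logger), not the actual Ozone code:

```java
import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.logging.Logger;

// Minimal sketch of "log a warning and return an empty collection":
// an enumeration failure becomes non-fatal for the caller instead of
// surfacing as a RuntimeException.
public class TrashRootsSketch {
  private static final Logger LOG = Logger.getLogger("TrashRootsSketch");

  public static Collection<String> listTrashRoots(boolean simulateFailure) {
    try {
      if (simulateFailure) {
        throw new IOException("simulated listVolumes failure");
      }
      // Stand-in for the real volume/bucket iteration.
      return List.of("/vol1/bucket1/.Trash/hadoop");
    } catch (IOException ex) {
      LOG.warning("Failed to enumerate trash roots: " + ex.getMessage());
      return Collections.emptyList(); // caller simply sees no trash roots
    }
  }
}
```

The design trade-off: a trash emptier that cannot list one volume should arguably skip it rather than abort the whole cleanup pass.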








[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451113467



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -871,4 +876,103 @@ public void testFailToDeleteRoot() throws IOException {
 Assert.assertFalse(fs.delete(new Path("/"), true));
   }
 
+  /**
+   * Test getTrashRoots() in OFS. Different from the existing test for o3fs.
+   */
+  @Test
+  public void testGetTrashRoots() throws IOException {
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+OzoneVolume volume1 = objectStore.getVolume(volumeName);
+String prevOwner = volume1.getOwner();
+// Set owner of the volume to current user, so it will show up in vol list
+Assert.assertTrue(volume1.setOwner(username));
+
+Path trashRoot1 = new Path(bucketPath, TRASH_PREFIX);
+Path user1Trash1 = new Path(trashRoot1, username);
+// When the user trash dir hasn't been created yet
+Assert.assertEquals(0, fs.getTrashRoots(false).size());
+Assert.assertEquals(0, fs.getTrashRoots(true).size());
+// Let's create our first user1 (current user) trash dir.
+fs.mkdirs(user1Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=1
+Collection<FileStatus> res = fs.getTrashRoots(false);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(

Review comment:
   done d997ac0fb09b0145824ae8196e6b9bb47956c086








[jira] [Updated] (HDDS-3931) Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3931:
-
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Maven warning due to deprecated expression pom.artifactId
> -
>
> Key: HDDS-3931
> URL: https://issues.apache.org/jira/browse/HDDS-3931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> {code:title=mvn clean}
> [INFO] Scanning for projects...
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> ...
> {code}
> Same warning in {{hadoop-hdds/pom.xml}} was fixed during review of HDDS-3875, 
> but the one in {{hadoop-ozone/pom.xml}} was left.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1172: HDDS-3931. Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread GitBox


bharatviswa504 commented on pull request #1172:
URL: https://github.com/apache/hadoop-ozone/pull/1172#issuecomment-655095351


   Thank You @adoroszlai for the contribution.






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #1172: HDDS-3931. Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread GitBox


bharatviswa504 merged pull request #1172:
URL: https://github.com/apache/hadoop-ozone/pull/1172


   






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451103682



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -871,4 +876,103 @@ public void testFailToDeleteRoot() throws IOException {
 Assert.assertFalse(fs.delete(new Path("/"), true));
   }
 
+  /**
+   * Test getTrashRoots() in OFS. Different from the existing test for o3fs.
+   */
+  @Test
+  public void testGetTrashRoots() throws IOException {
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+OzoneVolume volume1 = objectStore.getVolume(volumeName);
+String prevOwner = volume1.getOwner();
+// Set owner of the volume to current user, so it will show up in vol list
+Assert.assertTrue(volume1.setOwner(username));
+
+Path trashRoot1 = new Path(bucketPath, TRASH_PREFIX);
+Path user1Trash1 = new Path(trashRoot1, username);
+// When the user trash dir hasn't been created yet
+Assert.assertEquals(0, fs.getTrashRoots(false).size());
+Assert.assertEquals(0, fs.getTrashRoots(true).size());
+// Let's create our first user1 (current user) trash dir.
+fs.mkdirs(user1Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=1
+Collection<FileStatus> res = fs.getTrashRoots(false);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+res = fs.getTrashRoots(true);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+
+// Create one more trash for user2 in the same bucket
+Path user2Trash1 = new Path(trashRoot1, "testuser2");
+fs.mkdirs(user2Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=2
+Assert.assertEquals(1, fs.getTrashRoots(false).size());
+Assert.assertEquals(2, fs.getTrashRoots(true).size());
+
+// Create a new bucket in the same volume
+final String bucketName2 = "trashroottest2";
+volume1.createBucket(bucketName2);
+Path bucketPath2 = new Path(volumePath, bucketName2);
+Path trashRoot2 = new Path(bucketPath2, TRASH_PREFIX);
+Path user1Trash2 = new Path(trashRoot2, username);
+// Create a file at the trash location, it shouldn't be recognized as trash
+try (FSDataOutputStream out1 = fs.create(user1Trash2)) {
+  out1.write(123);
+}
+// Results should still be getTrashRoots(false)=1, gTR(true)=2
+Assert.assertEquals(1, fs.getTrashRoots(false).size());
+res.forEach(e -> Assert.assertEquals(

Review comment:
   I see you have the assertion here for user1Trash1. So you can ignore the 
previous comment. 








[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451103207



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -871,4 +876,103 @@ public void testFailToDeleteRoot() throws IOException {
 Assert.assertFalse(fs.delete(new Path("/"), true));
   }
 
+  /**
+   * Test getTrashRoots() in OFS. Different from the existing test for o3fs.
+   */
+  @Test
+  public void testGetTrashRoots() throws IOException {
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+OzoneVolume volume1 = objectStore.getVolume(volumeName);
+String prevOwner = volume1.getOwner();
+// Set owner of the volume to current user, so it will show up in vol list
+Assert.assertTrue(volume1.setOwner(username));
+
+Path trashRoot1 = new Path(bucketPath, TRASH_PREFIX);
+Path user1Trash1 = new Path(trashRoot1, username);
+// When the user trash dir hasn't been created yet
+Assert.assertEquals(0, fs.getTrashRoots(false).size());
+Assert.assertEquals(0, fs.getTrashRoots(true).size());
+// Let's create our first user1 (current user) trash dir.
+fs.mkdirs(user1Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=1
+Collection<FileStatus> res = fs.getTrashRoots(false);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+res = fs.getTrashRoots(true);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(
+user1Trash1.toString(), e.getPath().toUri().getPath()));
+
+// Create one more trash for user2 in the same bucket
+Path user2Trash1 = new Path(trashRoot1, "testuser2");
+fs.mkdirs(user2Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=2
+Assert.assertEquals(1, fs.getTrashRoots(false).size());

Review comment:
   Can we assert that the trash root returned by getTrashRoots(false) is the 
one for user one?








[GitHub] [hadoop-ozone] codecov-commenter commented on pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


codecov-commenter commented on pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#issuecomment-655086823


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1089?src=pr&el=h1) Report
   > Merging 
[#1089](https://codecov.io/gh/apache/hadoop-ozone/pull/1089?src=pr&el=desc) 
into 
[master](https://codecov.io/gh/apache/hadoop-ozone/commit/1d13b4fb18d5ceb830380f09b62fa1740c96b5f5&el=desc)
 will **increase** coverage by `0.11%`.
   > The diff coverage is `89.65%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1089?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1089      +/-   ##
   ============================================
   + Coverage     73.34%    73.46%    +0.11%
   - Complexity     9962     10035       +73
   ============================================
     Files           969       974        +5
     Lines         49470     49714      +244
     Branches       4859      4892       +33
   ============================================
   + Hits          36285     36520      +235
   + Misses        10869     10863        -6
   - Partials       2316      2331       +15
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1089?src=pr&el=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...va/org/apache/hadoop/ozone/client/ObjectStore.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL2NsaWVudC9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL2NsaWVudC9PYmplY3RTdG9yZS5qYXZh)
 | `89.85% <ø> (ø)` | `22.00 <0.00> (ø)` | |
   | 
[...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=)
 | `70.12% <89.28%> (+1.50%)` | `58.00 <10.00> (+10.00)` | |
   | 
[...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==)
 | `74.48% <100.00%> (+0.07%)` | `51.00 <1.00> (+1.00)` | |
   | 
[...iner/ozoneimpl/ContainerScrubberConfiguration.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvb3pvbmVpbXBsL0NvbnRhaW5lclNjcnViYmVyQ29uZmlndXJhdGlvbi5qYXZh)
 | `81.81% <0.00%> (-18.19%)` | `7.00% <0.00%> (-1.00%)` | |
   | 
[...rg/apache/hadoop/hdds/scm/pipeline/PipelineID.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vcGlwZWxpbmUvUGlwZWxpbmVJRC5qYXZh)
 | `88.88% <0.00%> (-5.23%)` | `12.00% <0.00%> (+1.00%)` | :arrow_down: |
   | 
[...ent/algorithms/SCMContainerPlacementRackAware.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9wbGFjZW1lbnQvYWxnb3JpdGhtcy9TQ01Db250YWluZXJQbGFjZW1lbnRSYWNrQXdhcmUuamF2YQ==)
 | `76.69% <0.00%> (-3.01%)` | `31.00% <0.00%> (-2.00%)` | |
   | 
[...rg/apache/hadoop/hdds/conf/OzoneConfiguration.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9jb25mL096b25lQ29uZmlndXJhdGlvbi5qYXZh)
 | `69.14% <0.00%> (-1.64%)` | `17.00% <0.00%> (+1.00%)` | :arrow_down: |
   | 
[...p/ozone/container/keyvalue/helpers/ChunkUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIva2V5dmFsdWUvaGVscGVycy9DaHVua1V0aWxzLmphdmE=)
 | `85.45% <0.00%> (-0.91%)` | `30.00% <0.00%> (-1.00%)` | |
   | 
[.../org/apache/hadoop/hdds/scm/pipeline/Pipeline.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vcGlwZWxpbmUvUGlwZWxpbmUuamF2YQ==)
 | `86.30% <0.00%> (-0.85%)` | `48.00% <0.00%> (+2.00%)` | :arrow_down: |
   | 
[...hadoop/ozone/om/ratis/OzoneManagerRatisServer.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy9Pem9uZU1hbmFnZXJSYXRpc1NlcnZlci5qYXZh)
 | `79.29% <0.00%> (-0.79%)` | `35.00% <0.00%> (-1.00%)` | |
   | ... and [48 
more](https://codecov.io/gh/apache/hadoop-ozone/pull/1089/diff?src=pr

[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#discussion_r451102321



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -871,4 +876,103 @@ public void testFailToDeleteRoot() throws IOException {
 Assert.assertFalse(fs.delete(new Path("/"), true));
   }
 
+  /**
+   * Test getTrashRoots() in OFS. Different from the existing test for o3fs.
+   */
+  @Test
+  public void testGetTrashRoots() throws IOException {
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+OzoneVolume volume1 = objectStore.getVolume(volumeName);
+String prevOwner = volume1.getOwner();
+// Set owner of the volume to current user, so it will show up in vol list
+Assert.assertTrue(volume1.setOwner(username));
+
+Path trashRoot1 = new Path(bucketPath, TRASH_PREFIX);
+Path user1Trash1 = new Path(trashRoot1, username);
+// When the user trash dir hasn't been created yet
+Assert.assertEquals(0, fs.getTrashRoots(false).size());
+Assert.assertEquals(0, fs.getTrashRoots(true).size());
+// Let's create our first user1 (current user) trash dir.
+fs.mkdirs(user1Trash1);
+// Results should be getTrashRoots(false)=1, gTR(true)=1
+Collection<FileStatus> res = fs.getTrashRoots(false);
+Assert.assertEquals(1, res.size());
+res.forEach(e -> Assert.assertEquals(

Review comment:
   NIT: the forEach is unnecessary, as we have already asserted that the size 
is 1.








[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1166: HDDS-3914. Remove LevelDB configuration option for DN Metastore

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1166:
URL: https://github.com/apache/hadoop-ozone/pull/1166#discussion_r451048561



##
File path: 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/TestMetadataStore.java
##
@@ -166,12 +170,12 @@ public void testIterator() throws Exception {
   public void testMetaStoreConfigDifferentFromType() throws IOException {
 
 OzoneConfiguration conf = new OzoneConfiguration();
-conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL, storeImpl);
+
 String dbType;
 GenericTestUtils.setLogLevel(MetadataStoreBuilder.LOG, Level.DEBUG);
 GenericTestUtils.LogCapturer logCapturer =
 GenericTestUtils.LogCapturer.captureLogs(MetadataStoreBuilder.LOG);
-if (storeImpl.equals(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL_LEVELDB)) {
+if (storeImpl.equals(CONTAINER_DB_TYPE_LEVELDB)) {

Review comment:
   The logic needs to be reversed, maybe a typo. 

##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
##
@@ -186,8 +186,8 @@ private void checkContainerFile() throws IOException {
 }
 
 dbType = onDiskContainerData.getContainerDBType();
-if (!dbType.equals(OZONE_METADATA_STORE_IMPL_ROCKSDB) &&
-!dbType.equals(OZONE_METADATA_STORE_IMPL_LEVELDB)) {
+if (!dbType.equals(CONTAINER_DB_TYPE_ROCKSDB) &&

Review comment:
   Thanks @hanishakoneru for working on this. Not sure if I understand 
correctly: since we are removing LevelDB support on the DN, do we still need 
to define two DB types for the container here? 

##
File path: 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/TestMetadataStore.java
##
@@ -90,12 +93,12 @@ public void init() throws IOException {
 + "-" + storeImpl.toLowerCase());
 
 OzoneConfiguration conf = new OzoneConfiguration();
-conf.set(OzoneConfigKeys.OZONE_METADATA_STORE_IMPL, storeImpl);
 
 store = MetadataStoreBuilder.newBuilder()
 .setConf(conf)
 .setCreateIfMissing(true)
 .setDbFile(testDir)
+.setDBType(storeImpl)

Review comment:
   Do we still need this?








[jira] [Updated] (HDDS-3880) Improve OM HA Robot tests

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3880:
-
Labels: pull-request-available  (was: )

> Improve OM HA Robot tests
> -
>
> Key: HDDS-3880
> URL: https://issues.apache.org/jira/browse/HDDS-3880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> This Jira aims to address the following:
> 1. Add robot test for Install Snapshot feature 
> 2. Fix the flakiness in OM HA robot tests (HDDS-3313)






[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #1173: HDDS-3880. Improve OM HA Robot test

2020-07-07 Thread GitBox


hanishakoneru opened a new pull request #1173:
URL: https://github.com/apache/hadoop-ozone/pull/1173


   ## What changes were proposed in this pull request?
   
   This Jira aims to address the following:
   1. Add robot test for Install Snapshot feature 
   2. Fix the flakiness in OM HA robot tests (HDDS-3313)
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3880
   
   ## How was this patch tested?
   
   Robot tests added






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1049: HDDS-3662 Decouple finalizeAndDestroyPipeline.

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1049:
URL: https://github.com/apache/hadoop-ozone/pull/1049#discussion_r451023617



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManagerV2Impl.java
##
@@ -410,18 +399,29 @@ public void scrubPipeline(ReplicationType type, 
ReplicationFactor factor)
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT_DEFAULT,
 TimeUnit.MILLISECONDS);
-List<Pipeline> needToSrubPipelines = stateManager.getPipelines(type, factor,
-Pipeline.PipelineState.ALLOCATED).stream()
-.filter(p -> currentTime.toEpochMilli() - p.getCreationTimestamp()
-.toEpochMilli() >= pipelineScrubTimeoutInMills)
-.collect(Collectors.toList());
-for (Pipeline p : needToSrubPipelines) {
-  LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
-  " since it stays at ALLOCATED stage for " +
-  Duration.between(currentTime, p.getCreationTimestamp()).toMinutes() +
-  " mins.");
-  finalizeAndDestroyPipeline(p, false);
+
+List<Pipeline> candidates = stateManager.getPipelines(type, factor);
+
+for (Pipeline p : candidates) {
+  // scrub pipelines who stay ALLOCATED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.ALLOCATED &&
+  (currentTime.toEpochMilli() - p.getCreationTimestamp()
+  .toEpochMilli() >= pipelineScrubTimeoutInMills)) {
+LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
+" since it stays at ALLOCATED stage for " +
+Duration.between(currentTime, p.getCreationTimestamp())
+.toMinutes() + " mins.");
+closePipeline(p, false);
+  }
+  // scrub pipelines who stay CLOSED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.CLOSED) {

Review comment:
   Do we need to check the elapsed time against 
ScmConfigKeys.OZONE_SCM_PIPELINE_DESTROY_TIMEOUT, since the pipeline enters 
the CLOSED state before closeContainer and removePipeline? 
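The age check the reviewer asks about could look roughly like this. It is an illustrative sketch with simplified, hypothetical types, not the actual SCM code:

```java
import java.time.Instant;

// Sketch of age-gated pipeline scrubbing: ALLOCATED pipelines are scrubbed
// only after the allocation timeout, and (per the review question) CLOSED
// pipelines would likewise only be scrubbed after a destroy timeout rather
// than immediately. The enum and method are stand-ins for the real classes.
public class PipelineScrubSketch {
  public enum State { ALLOCATED, OPEN, CLOSED }

  public static boolean shouldScrub(State state, Instant createdAt,
      Instant now, long allocatedTimeoutMs, long destroyTimeoutMs) {
    long ageMs = now.toEpochMilli() - createdAt.toEpochMilli();
    switch (state) {
      case ALLOCATED:
        return ageMs >= allocatedTimeoutMs;
      case CLOSED:
        return ageMs >= destroyTimeoutMs; // age-gate CLOSED pipelines too
      default:
        return false; // healthy OPEN pipelines are never scrubbed
    }
  }
}
```

Gating the CLOSED case on a timeout gives close-container commands a chance to complete before the pipeline is removed.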








[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1049: HDDS-3662 Decouple finalizeAndDestroyPipeline.

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1049:
URL: https://github.com/apache/hadoop-ozone/pull/1049#discussion_r451024886



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
##
@@ -421,18 +438,29 @@ public void scrubPipeline(ReplicationType type, 
ReplicationFactor factor)
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT_DEFAULT,
 TimeUnit.MILLISECONDS);
-List<Pipeline> needToSrubPipelines = stateManager.getPipelines(type, factor,
-Pipeline.PipelineState.ALLOCATED).stream()
-.filter(p -> currentTime.toEpochMilli() - p.getCreationTimestamp()
-.toEpochMilli() >= pipelineScrubTimeoutInMills)
-.collect(Collectors.toList());
-for (Pipeline p : needToSrubPipelines) {
-  LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
-  " since it stays at ALLOCATED stage for " +
-  Duration.between(currentTime, p.getCreationTimestamp()).toMinutes() +
-  " mins.");
-  finalizeAndDestroyPipeline(p, false);
+
+List<Pipeline> candidates = stateManager.getPipelines(type, factor);
+
+for (Pipeline p : candidates) {
+  // scrub pipelines who stay ALLOCATED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.ALLOCATED &&
+  (currentTime.toEpochMilli() - p.getCreationTimestamp()
+  .toEpochMilli() >= pipelineScrubTimeoutInMills)) {
+LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
+" since it stays at ALLOCATED stage for " +
+Duration.between(currentTime, p.getCreationTimestamp())
+.toMinutes() + " mins.");
+closePipeline(p, false);
+  }
+  // scrub pipelines who stay CLOSED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.CLOSED) {

Review comment:
   Same as the comments for V2 manager.








[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1049: HDDS-3662 Decouple finalizeAndDestroyPipeline.

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1049:
URL: https://github.com/apache/hadoop-ozone/pull/1049#discussion_r451024282



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManagerV2Impl.java
##
@@ -310,94 +321,72 @@ public void openPipeline(PipelineID pipelineId) throws 
IOException {
   }
 
   /**
-   * Finalizes pipeline in the SCM. Removes pipeline and makes rpc call to
-   * destroy pipeline on the datanodes immediately or after timeout based on 
the
-   * value of onTimeout parameter.
-   *
-   * @param pipeline- Pipeline to be destroyed
-   * @param onTimeout   - if true pipeline is removed and destroyed on
-   *datanodes after timeout
-   * @throws IOException
-   */
-  @Override
-  public void finalizeAndDestroyPipeline(Pipeline pipeline, boolean onTimeout)
-  throws IOException {
-LOG.info("Destroying pipeline:{}", pipeline);
-finalizePipeline(pipeline.getId());
-if (onTimeout) {
-  long pipelineDestroyTimeoutInMillis =
-  
conf.getTimeDuration(ScmConfigKeys.OZONE_SCM_PIPELINE_DESTROY_TIMEOUT,
-  ScmConfigKeys.OZONE_SCM_PIPELINE_DESTROY_TIMEOUT_DEFAULT,
-  TimeUnit.MILLISECONDS);
-  scheduler.schedule(() -> destroyPipeline(pipeline),
-  pipelineDestroyTimeoutInMillis, TimeUnit.MILLISECONDS, LOG,
-  String.format("Destroy pipeline failed for pipeline:%s", pipeline));
-} else {
-  destroyPipeline(pipeline);
-}
-  }
-
-  /**
-   * Moves the pipeline to CLOSED state and sends close container command for
-   * all the containers in the pipeline.
+   * Removes the pipeline from the db and pipeline state map.
*
-   * @param pipelineId - ID of the pipeline to be moved to CLOSED state.
+   * @param pipeline - pipeline to be removed
* @throws IOException
*/
-  private void finalizePipeline(PipelineID pipelineId) throws IOException {
+  protected void removePipeline(Pipeline pipeline) throws IOException {
+pipelineFactory.close(pipeline.getType(), pipeline);
+PipelineID pipelineID = pipeline.getId();
 lock.writeLock().lock();
 try {
-  Pipeline pipeline = stateManager.getPipeline(pipelineId);
-  if (!pipeline.isClosed()) {
-stateManager.updatePipelineState(
-pipelineId.getProtobuf(), 
HddsProtos.PipelineState.PIPELINE_CLOSED);
-LOG.info("Pipeline {} moved to CLOSED state", pipeline);
-  }
-
-  // TODO fire events to datanodes for closing pipelines
-//  Set<ContainerID> containerIDs = stateManager.getContainers(pipelineId);
-//  for (ContainerID containerID : containerIDs) {
-//eventPublisher.fireEvent(SCMEvents.CLOSE_CONTAINER, containerID);
-//  }
-  metrics.removePipelineMetrics(pipelineId);
+  stateManager.removePipeline(pipelineID.getProtobuf());
+  metrics.incNumPipelineDestroyed();
+} catch (IOException ex) {
+  metrics.incNumPipelineDestroyFailed();
+  throw ex;
 } finally {
   lock.writeLock().unlock();
 }
   }
 
   /**
-   * Removes pipeline from SCM. Sends ratis command to destroy pipeline on all
-   * the datanodes for ratis pipelines.
-   *
-   * @param pipeline- Pipeline to be destroyed
+   * Fire events to close all containers related to the input pipeline.
+   * @param pipelineId - ID of the pipeline.
* @throws IOException
*/
-  protected void destroyPipeline(Pipeline pipeline) throws IOException {
-pipelineFactory.close(pipeline.getType(), pipeline);
-// remove the pipeline from the pipeline manager
-removePipeline(pipeline.getId());
-triggerPipelineCreation();
+  protected void closeContainersForPipeline(final PipelineID pipelineId)
+  throws IOException {
+Set<ContainerID> containerIDs = stateManager.getContainers(pipelineId);
+for (ContainerID containerID : containerIDs) {
+  eventPublisher.fireEvent(SCMEvents.CLOSE_CONTAINER, containerID);
+}
   }
 
   /**
-   * Removes the pipeline from the db and pipeline state map.
-   *
-   * @param pipelineId - ID of the pipeline to be removed
+   * put pipeline in CLOSED state.
+   * @param pipeline - ID of the pipeline.
+   * @param onTimeout - whether to remove pipeline after some time.
* @throws IOException
*/
-  protected void removePipeline(PipelineID pipelineId) throws IOException {
+  @Override
+  public void closePipeline(Pipeline pipeline, boolean onTimeout)
+  throws IOException {
+PipelineID pipelineID = pipeline.getId();
 lock.writeLock().lock();
 try {
-  stateManager.removePipeline(pipelineId.getProtobuf());
-  metrics.incNumPipelineDestroyed();
-} catch (IOException ex) {
-  metrics.incNumPipelineDestroyFailed();
-  throw ex;
+  if (!pipeline.isClosed()) {
+stateManager.updatePipelineState(pipelineID.getProtobuf(),
+HddsProtos.PipelineState.PIPELINE_CLOSED);
+LOG.info("Pipeline {} moved to CLOSED state", pipeline);
+

[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1049: HDDS-3662 Decouple finalizeAndDestroyPipeline.

2020-07-07 Thread GitBox


xiaoyuyao commented on a change in pull request #1049:
URL: https://github.com/apache/hadoop-ozone/pull/1049#discussion_r451023617



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManagerV2Impl.java
##
@@ -410,18 +399,29 @@ public void scrubPipeline(ReplicationType type, 
ReplicationFactor factor)
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_ALLOCATED_TIMEOUT_DEFAULT,
 TimeUnit.MILLISECONDS);
-List<Pipeline> needToSrubPipelines = stateManager.getPipelines(type, factor,
-Pipeline.PipelineState.ALLOCATED).stream()
-.filter(p -> currentTime.toEpochMilli() - p.getCreationTimestamp()
-.toEpochMilli() >= pipelineScrubTimeoutInMills)
-.collect(Collectors.toList());
-for (Pipeline p : needToSrubPipelines) {
-  LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
-  " since it stays at ALLOCATED stage for " +
-  Duration.between(currentTime, p.getCreationTimestamp()).toMinutes() +
-  " mins.");
-  finalizeAndDestroyPipeline(p, false);
+
+List<Pipeline> candidates = stateManager.getPipelines(type, factor);
+
+for (Pipeline p : candidates) {
+  // scrub pipelines who stay ALLOCATED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.ALLOCATED &&
+  (currentTime.toEpochMilli() - p.getCreationTimestamp()
+  .toEpochMilli() >= pipelineScrubTimeoutInMills)) {
+LOG.info("Scrubbing pipeline: id: " + p.getId().toString() +
+" since it stays at ALLOCATED stage for " +
+Duration.between(currentTime, p.getCreationTimestamp())
+.toMinutes() + " mins.");
+closePipeline(p, false);
+  }
+  // scrub pipelines who stay CLOSED for too long.
+  if (p.getPipelineState() == Pipeline.PipelineState.CLOSED) {

Review comment:
   Do we need to check the time since it entered the CLOSED state before 
closeContainer and removePipeline? 
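   A minimal sketch of that grace-period idea follows. The `closedSince` bookkeeping is a hypothetical addition for illustration — in the diff above, `Pipeline` only records a creation timestamp, so tracking when a pipeline entered CLOSED would require new state:

   ```java
   import java.time.Instant;
   import java.util.HashMap;
   import java.util.Map;

   /**
    * Sketch of scrubbing CLOSED pipelines only after a grace period.
    * Hypothetical helper, not existing Ozone API: it records when each
    * pipeline entered CLOSED and answers whether the timeout has elapsed.
    */
   public class ClosedPipelineScrubSketch {
     private final Map<String, Instant> closedSince = new HashMap<>();
     private final long closedTimeoutMillis;

     public ClosedPipelineScrubSketch(long closedTimeoutMillis) {
       this.closedTimeoutMillis = closedTimeoutMillis;
     }

     /** Record the moment a pipeline transitions to CLOSED (idempotent). */
     public void markClosed(String pipelineId, Instant now) {
       closedSince.putIfAbsent(pipelineId, now);
     }

     /** True once the pipeline has stayed CLOSED longer than the timeout. */
     public boolean shouldRemove(String pipelineId, Instant now) {
       Instant since = closedSince.get(pipelineId);
       return since != null
           && now.toEpochMilli() - since.toEpochMilli() >= closedTimeoutMillis;
     }
   }
   ```

   With a one-second timeout, `shouldRemove` stays false until the pipeline has been CLOSED at least that long, which avoids closing containers and removing a pipeline the instant it transitions.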








[jira] [Updated] (HDDS-3509) Closing container with unhealthy replica on open pipeline

2020-07-07 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-3509:
-
Labels:   (was: TriagePending)

> Closing container with unhealthy replica on open pipeline
> -
>
> Key: HDDS-3509
> URL: https://issues.apache.org/jira/browse/HDDS-3509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>
> When a container replica of an OPEN container is marked as UNHEALTHY, SCM 
> tries to close the container.
> If the pipeline is still healthy, we try to close the container via Ratis. 
> We could run into a scenario where the datanode which marked the container 
> replica as UNHEALTHY is the pipeline leader. In such a case, that datanode 
> (the leader) should process the close container command even though its 
> container replica is in UNHEALTHY state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop-ozone] smengcl commented on pull request #1089: HDDS-3705. [OFS] Implement getTrashRoots for trash cleanup

2020-07-07 Thread GitBox


smengcl commented on pull request #1089:
URL: https://github.com/apache/hadoop-ozone/pull/1089#issuecomment-654981393


   The new test contaminated the results of other tests in the same test class. 
Fixing this.






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1156: HDDS-3879. Introduce SCM and OM layoutVersion zero to the VERSION file

2020-07-07 Thread GitBox


avijayanhwx commented on pull request #1156:
URL: https://github.com/apache/hadoop-ozone/pull/1156#issuecomment-654968796


   > I started on the changes to add in the software version too, but in doing 
that I started to think about whether we need to actually store it.
   > 
   > I will discuss with @avijayanhwx and see what conclusion we come to. If we 
need it, we may as well get it in as part of this change.
   
   Yes, this change as it is can go in.
   
   @swagle We will not have any backward compat issues with respect to 
layout/software version since the default behavior can handle that.






[jira] [Resolved] (HDDS-3910) JooqCodeGenerator interrupted but still alive

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-3910.

   Fix Version/s: 0.6.0
Target Version/s:   (was: 0.7.0)
  Resolution: Fixed

> JooqCodeGenerator interrupted but still alive
> -
>
> Key: HDDS-3910
> URL: https://issues.apache.org/jira/browse/HDDS-3910
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Neo Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> Build takes 15 seconds longer than necessary due to:
> {code}
> 2020-07-01 14:25:21,449 INFO  jooq.Constants (JooqLogger.java:info(338)) -
> [jOOQ ASCII-art logo, garbled in extraction, omitted]
> @@  Thank you for using jOOQ 3.11.9
> 14:25:37,274 [WARNING] thread 
> Thread[Timer-0,5,org.hadoop.ozone.recon.codegen.JooqCodeGenerator] was 
> interrupted but is still alive after waiting at least 15000msecs
> 14:25:37,275 [WARNING] thread 
> Thread[Timer-0,5,org.hadoop.ozone.recon.codegen.JooqCodeGenerator] will 
> linger despite being asked to die via interruption
> 14:25:37,275 [WARNING] thread Thread[derby.rawStoreDaemon,5,derby.daemons] 
> will linger despite being asked to die via interruption
> 14:25:37,275 [WARNING] NOTE: 2 thread(s) did not finish despite being asked 
> to  via interruption. This is not a problem with exec:java, it is a problem 
> with the running code. Although not serious, it should be remedied.
> 14:25:37,276 [WARNING] Couldn't destroy threadgroup 
> org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=org.hadoop.ozone.recon.codegen.JooqCodeGenerator,maxpri=10]
> java.lang.IllegalThreadStateException
> at java.lang.ThreadGroup.destroy (ThreadGroup.java:778)
> at org.codehaus.mojo.exec.ExecJavaMojo.execute (ExecJavaMojo.java:328)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
> (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:148)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:117)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:81)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
>  (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
> (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
> at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
> at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
> at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
> at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
> ...
> {code}






[jira] [Updated] (HDDS-3910) JooqCodeGenerator interrupted but still alive

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3910:
---
Labels:   (was: pull-request-available)

> JooqCodeGenerator interrupted but still alive
> -
>
> Key: HDDS-3910
> URL: https://issues.apache.org/jira/browse/HDDS-3910
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Neo Yang
>Priority: Major
> Fix For: 0.6.0
>
>
> Build takes 15 seconds longer than necessary (log quoted in the HDDS-3910 
> message above).






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1170: HDDS-3910. JooqCodeGenerator interrupted but still alive

2020-07-07 Thread GitBox


adoroszlai merged pull request #1170:
URL: https://github.com/apache/hadoop-ozone/pull/1170


   






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1170: HDDS-3910. JooqCodeGenerator interrupted but still alive

2020-07-07 Thread GitBox


adoroszlai commented on pull request #1170:
URL: https://github.com/apache/hadoop-ozone/pull/1170#issuecomment-654966338


   Thanks @avijayanhwx for the review.






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1170: HDDS-3910. JooqCodeGenerator interrupted but still alive

2020-07-07 Thread GitBox


avijayanhwx commented on pull request #1170:
URL: https://github.com/apache/hadoop-ozone/pull/1170#issuecomment-654963861


   Thanks for fixing this @cku328. LGTM +1






[jira] [Assigned] (HDDS-3927) Add OZONE_MANAGER_OPTS and OZONE_DATANODE_OPTS

2020-07-07 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HDDS-3927:


Assignee: Siyao Meng

> Add OZONE_MANAGER_OPTS and OZONE_DATANODE_OPTS
> --
>
> Key: HDDS-3927
> URL: https://issues.apache.org/jira/browse/HDDS-3927
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Similar to {{HDFS_NAMENODE_OPTS}}, {{HDFS_DATANODE_OPTS}}, etc., we should 
> have {{OZONE_MANAGER_OPTS}}, {{OZONE_DATANODE_OPTS}} to allow adding JVM args 
> for GC tuning and debugging.






[jira] [Updated] (HDDS-3874) ITestRootedOzoneContract tests are flaky

2020-07-07 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-3874:
---
Target Version/s: 0.7.0  (was: 0.6.0)

> ITestRootedOzoneContract tests are flaky
> 
>
> Key: HDDS-3874
> URL: https://issues.apache.org/jira/browse/HDDS-3874
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Siyao Meng
>Priority: Blocker
>
> Different tests are failed with similar reasons:
> {code}
> java.lang.Exception: test timed out after 18 milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:537)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:499)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:514)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:149)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:483)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:457)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:510)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.createFile(ContractTestUtils.java:638)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenFileTwice(AbstractContractOpenTest.java:135)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Example:
> https://github.com/elek/ozone-build-results/blob/master/2020/06/16/1051/it-filesystem-contract/hadoop-ozone/integration-test/org.apache.hadoop.fs.ozone.contract.rooted.ITestRootedOzoneContractOpen.txt
> But same problem here:
> https://github.com/elek/hadoop-ozone/runs/810175295?check_suite_focus=true 
> (contract)






[jira] [Updated] (HDDS-3932) Hide jOOQ logo message from the log output on compile

2020-07-07 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-3932:
---
   Fix Version/s: (was: 0.6.0)
Target Version/s: 0.7.0  (was: 0.6.0)

> Hide jOOQ logo message from the log output on compile
> -
>
> Key: HDDS-3932
> URL: https://issues.apache.org/jira/browse/HDDS-3932
> Project: Hadoop Distributed Data Store
>  Issue Type: Wish
>  Components: Ozone Recon
>Reporter: Neo Yang
>Assignee: Neo Yang
>Priority: Minor
>  Labels: pull-request-available
>
> When Ozone Recon _(org.apache.hadoop:hadoop-ozone-recon)_ compiles, it prints 
> out this self-ad message:
> {code:java}
> 2020-07-07 15:39:05,719 INFO  jooq.Constants (JooqLogger.java:info(338)) - 
> [jOOQ ASCII-art logo, garbled in extraction, omitted]
> @@  Thank you for using jOOQ 3.11.9
> {code}






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1170: HDDS-3910. JooqCodeGenerator interrupted but still alive

2020-07-07 Thread GitBox


adoroszlai commented on pull request #1170:
URL: https://github.com/apache/hadoop-ozone/pull/1170#issuecomment-654862931


   @avijayanhwx please let us know if you have any concerns about this change, 
otherwise I would like to merge it.






[GitHub] [hadoop-ozone] sodonnel commented on pull request #1162: HDDS-3921. IllegalArgumentException triggered in SCMContainerPlacemen…

2020-07-07 Thread GitBox


sodonnel commented on pull request #1162:
URL: https://github.com/apache/hadoop-ozone/pull/1162#issuecomment-654823654


   Thanks for this change. I also wonder if there is a bug in the method 
`isContainerUnderReplicated(...)` which leads to this problem and results in 
more processing than necessary for mis-replicated containers with inflight 
additions.
   
   In `isContainerUnderReplicated` it uses only the live replicas to check for 
mis-replication, but then it considers inflight adds and deletes for under 
replication. Should we also include the inflightAdds when considering 
mis-replication in that method?
   
   For `isContainerOverReplicated` it also uses the inflightAdds and deletes to 
check for over replication.
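   The replica accounting being discussed can be sketched in isolation. The names and signatures below are illustrative, not `ReplicationManager`'s actual fields: the effective count adds pending additions and subtracts pending deletes, which is the augmented view the comment suggests the mis-replication check could also use:

   ```java
   /**
    * Illustrative replica accounting: under-replication is judged on the
    * effective count (live replicas plus inflight adds minus inflight
    * deletes), whereas the comment above notes mis-replication currently
    * looks at live replicas only.
    */
   public class ReplicaCountSketch {
     /** Effective replica count including pending work. */
     public static int effectiveReplicaCount(int live, int inflightAdds,
         int inflightDeletes) {
       return live + inflightAdds - inflightDeletes;
     }

     /** Under-replicated when the effective count is below the factor. */
     public static boolean isUnderReplicated(int live, int inflightAdds,
         int inflightDeletes, int replicationFactor) {
       return effectiveReplicaCount(live, inflightAdds, inflightDeletes)
           < replicationFactor;
     }
   }
   ```

   For example, a factor-3 container with two live replicas and one inflight add is not under-replicated by this accounting, so no extra replication command needs to be scheduled for it.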






[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #1162: HDDS-3921. IllegalArgumentException triggered in SCMContainerPlacemen…

2020-07-07 Thread GitBox


sodonnel commented on a change in pull request #1162:
URL: https://github.com/apache/hadoop-ozone/pull/1162#discussion_r450820397



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -543,11 +543,15 @@ private void handleUnderReplicatedContainer(final 
ContainerInfo container,
 List targetReplicas = new ArrayList<>(source);
 // Then add any pending additions
 targetReplicas.addAll(replicationInFlight);
-
-int delta = replicationFactor - getReplicaCount(id, replicas);
 final ContainerPlacementStatus placementStatus =
 containerPlacement.validateContainerPlacement(
 targetReplicas, replicationFactor);
+int delta = replicationFactor - getReplicaCount(id, replicas);
+if (placementStatus.isPolicySatisfied() && delta <= 0) {

Review comment:
   Rather than this new IF block here, would it make sense to simply add:
   
   ```
   if (replicasNeeded <= 0) {
 LOG.debug(...);
 return;
   }
   ```
   
   At line 554 / 558, just after the line:
   
   ```
   final int replicasNeeded
   = delta < misRepDelta ? misRepDelta : delta;
   ```
   
   That would avoid needing to call `placementStatus.isPolicySatisfied()` 
and then `placementStatus.misReplicationCount()` afterwards, as 
`misReplicationCount()` calls `isPolicySatisfied()` anyway.











[jira] [Updated] (HDDS-3582) Update Checkstyle rule

2020-07-07 Thread maobaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDDS-3582:
-
Description: 
Now, the Checkstyle rules follow the Hadoop style. As we all know, the Hadoop 
repo started 10 years ago, so some of its rules need to be updated.

For example:
 - A line length of 120 characters makes more sense, because most of us have 
better monitors than 10 years ago.
 - We need more style rules, such as: before and after "+", "=", " \{" and "}" 
there should be a space.

 - We should group the imports and order them.
 - ModifierOrder is needed.
 - Remove the package javadoc rules.
 - Unify the line-wrapping rule.
 - The <> of generics shouldn't contain an empty char.

Hope our community gets better and better.

  was:
Now. the rules of checkstyle is hadoop style, as we all know, hadoop repo 
started 10 years ago, so some of its rules need to be update.

For example.
- 120 length characters of a line make more sense, because most of all have a 
better monitor than 10 years ago.
- We need more style rules, such as, behind and after "{" and "}" should have a 
space. 
- We should have manage the import into group and order them
- ModifierOrder is needed
- Remove the package javadoc rules.

hope our community getting better and better  


> Update Checkstyle rule
> --
>
> Key: HDDS-3582
> URL: https://issues.apache.org/jira/browse/HDDS-3582
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: image-2020-07-07-19-25-59-658.png, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>
> Now, the Checkstyle rules follow the Hadoop style. As we all know, the Hadoop 
> repo started 10 years ago, so some of its rules need to be updated.
> For example:
>  - A line length of 120 characters makes more sense, because most of us have 
> better monitors than 10 years ago.
>  - We need more style rules, such as: before and after "+", "=", " \{" and 
> "}" there should be a space.
>  - We should group the imports and order them.
>  - ModifierOrder is needed.
>  - Remove the package javadoc rules.
>  - Unify the line-wrapping rule.
>  - The <> of generics shouldn't contain an empty char. 
> Hope our community gets better and better.
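
For illustration, the proposed rules map roughly onto standard Checkstyle 
modules. This is a hedged sketch only: the module names are the standard 
Checkstyle ones, but how they would fit into Ozone's actual checkstyle.xml is 
an assumption.

```xml
<!-- Sketch of the proposed rules using standard Checkstyle modules;
     integration with Ozone's existing checkstyle.xml is an assumption. -->
<module name="Checker">
  <!-- 120-character line limit -->
  <module name="LineLength">
    <property name="max" value="120"/>
  </module>
  <module name="TreeWalker">
    <!-- space around operators and braces: "+", "=", "{", "}" -->
    <module name="WhitespaceAround"/>
    <!-- group imports and enforce ordering -->
    <module name="ImportOrder">
      <property name="groups" value="java,javax,org,com"/>
    </module>
    <!-- enforce JLS modifier order -->
    <module name="ModifierOrder"/>
    <!-- no stray whitespace inside generic <> brackets -->
    <module name="GenericWhitespace"/>
  </module>
</module>
```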



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop-ozone] sodonnel commented on pull request #1156: HDDS-3879. Introduce SCM and OM layoutVersion zero to the VERSION file

2020-07-07 Thread GitBox


sodonnel commented on pull request #1156:
URL: https://github.com/apache/hadoop-ozone/pull/1156#issuecomment-654791797


   I started on the changes to add in the software version too, but in doing 
that I started to think about whether we actually need to store it. 
   
   I will discuss with @avijayanhwx and see what conclusion we come to. If we 
need it, we may as well get it in as part of this change.






[jira] [Commented] (HDDS-3582) Update Checkstyle rule

2020-07-07 Thread maobaolong (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152669#comment-17152669
 ] 

maobaolong commented on HDDS-3582:
--

!image-2020-07-07-19-25-59-658.png!

 

There is another strange style in Ozone. We really need more Checkstyle rules 
to prevent such code from being merged into master.

> Update Checkstyle rule
> --
>
> Key: HDDS-3582
> URL: https://issues.apache.org/jira/browse/HDDS-3582
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: image-2020-07-07-19-25-59-658.png, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>
> Now. the rules of checkstyle is hadoop style, as we all know, hadoop repo 
> started 10 years ago, so some of its rules need to be update.
> For example.
> - 120 length characters of a line make more sense, because most of all have a 
> better monitor than 10 years ago.
> - We need more style rules, such as, behind and after "{" and "}" should have 
> a space. 
> - We should have manage the import into group and order them
> - ModifierOrder is needed
> - Remove the package javadoc rules.
> hope our community getting better and better  






[jira] [Updated] (HDDS-3582) Update Checkstyle rule

2020-07-07 Thread maobaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDDS-3582:
-
Attachment: image-2020-07-07-19-25-59-658.png

> Update Checkstyle rule
> --
>
> Key: HDDS-3582
> URL: https://issues.apache.org/jira/browse/HDDS-3582
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: image-2020-07-07-19-25-59-658.png, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>
> Now. the rules of checkstyle is hadoop style, as we all know, hadoop repo 
> started 10 years ago, so some of its rules need to be update.
> For example.
> - 120 length characters of a line make more sense, because most of all have a 
> better monitor than 10 years ago.
> - We need more style rules, such as, behind and after "{" and "}" should have 
> a space. 
> - We should have manage the import into group and order them
> - ModifierOrder is needed
> - Remove the package javadoc rules.
> hope our community getting better and better  






[jira] [Commented] (HDDS-3922) Display the pipeline info on scm web page

2020-07-07 Thread maobaolong (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152665#comment-17152665
 ] 

maobaolong commented on HDDS-3922:
--

[~avijayan] I also agree that collecting all information into Recon is the best 
approach, but we don't have a Recon server now, and in some situations we 
cannot deploy one, so please keep the UI of the existing server.
 

> Display the pipeline info on scm web page
> -
>
> Key: HDDS-3922
> URL: https://issues.apache.org/jira/browse/HDDS-3922
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.7.0
>Reporter: maobaolong
>Priority: Major
>  Labels: SCM, UI, webpage
> Attachments: image-2020-07-06-10-17-08-324.png, 
> image-2020-07-06-10-18-58-151.png
>
>
> !image-2020-07-06-10-18-58-151.png!
>  
> !image-2020-07-06-10-17-08-324.png!






[jira] [Commented] (HDDS-3834) We need a edge(master) docs website updated by nightly build

2020-07-07 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152663#comment-17152663
 ] 

Marton Elek commented on HDDS-3834:
---

No idea. INFRA or build@a.o should be asked.

 1. We either need a temporary space which is available from the GitHub Actions environment.
 2. Or update the hadoop-site.git repository from the daily build and add one more 
commit.
 3. Or create a daily build for the site on ci.apache.org.
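
Option 1 could look roughly like the following nightly workflow. This is a 
sketch under stated assumptions only: the docs build command, the publishing 
action, and the output directory are all assumptions about infrastructure that 
does not exist yet.

```yaml
# Hypothetical nightly workflow publishing master docs to a temporary space.
# The build command and publish_dir are assumptions, not existing scripts.
name: nightly-docs
on:
  schedule:
    - cron: '0 2 * * *'   # once a day
jobs:
  publish-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build docs
        run: hugo --source hadoop-hdds/docs   # assumed doc build entry point
      - name: Publish to temporary space
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./hadoop-hdds/docs/public
```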

> We need a edge(master) docs website updated by nightly build
> 
>
> Key: HDDS-3834
> URL: https://issues.apache.org/jira/browse/HDDS-3834
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 0.7.0
>Reporter: maobaolong
>Priority: Major
>
> Reference https://docs.alluxio.io/os/user/edge/en/Overview.html






[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Description: 
When creating the 22345th Datanode State Machine Thread, an OOM happened.
!screenshot-1.png! 
 !screenshot-2.png! 

  was:
When create 22345th  Datanode State Machine Thread, OOM happened.
!screenshot-1.png! 


> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM happened.
> !screenshot-1.png! 
>  !screenshot-2.png! 
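
An OOM at the 22345th thread is the classic symptom of spawning a new thread 
per task instead of reusing a bounded pool; each live thread pins a native 
stack. A minimal sketch of the bounded alternative (hypothetical, not the 
actual DatanodeStateMachine code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration: a fixed pool caps concurrent threads, so even
// tens of thousands of tasks never exhaust native thread stacks.
public class BoundedExecutorSketch {
    static int runTasks(int taskCount, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();  // all tasks ran on just `poolSize` threads
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10000, 4)); // prints 10000
    }
}
```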






[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Attachment: screenshot-2.png

> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM happened.
> !screenshot-1.png! 






[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Description: 
When creating the 22345th Datanode State Machine Thread, an OOM happened.
!screenshot-1.png! 

> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When creating the 22345th Datanode State Machine Thread, an OOM happened.
> !screenshot-1.png! 






[jira] [Updated] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3933:
-
Attachment: screenshot-1.png

> Memory leak because of too many Datanode State Machine Thread
> -
>
> Key: HDDS-3933
> URL: https://issues.apache.org/jira/browse/HDDS-3933
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Created] (HDDS-3933) Memory leak because of too many Datanode State Machine Thread

2020-07-07 Thread runzhiwang (Jira)
runzhiwang created HDDS-3933:


 Summary: Memory leak because of too many Datanode State Machine 
Thread
 Key: HDDS-3933
 URL: https://issues.apache.org/jira/browse/HDDS-3933
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: runzhiwang
Assignee: runzhiwang









[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1121: HDDS-3432. Enable TestBlockDeletion test cases.

2020-07-07 Thread GitBox


lokeshj1703 commented on pull request #1121:
URL: https://github.com/apache/hadoop-ozone/pull/1121#issuecomment-654748191


   @adoroszlai Thanks for verifying! I see an appendEntries timeout in that 
particular run. Block creation timed out on all the datanodes. The failure 
occurs in iteration 12.






[jira] [Assigned] (HDDS-3659) put a new file to exist key with different factor or type don't update the omkeyinfo

2020-07-07 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao reassigned HDDS-3659:
--

Assignee: HuangTao

> put a new file to exist key with different factor or type don't update the 
> omkeyinfo
> 
>
> Key: HDDS-3659
> URL: https://issues.apache.org/jira/browse/HDDS-3659
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: HuangTao
>Priority: Critical
>  Labels: Triaged
>
>  bin/ozone sh key put  -r THREE  /myvol/mybucket/NOTICE.txt NOTICE.txt
>  bin/ozone sh key put  -r ONE  /myvol/mybucket/NOTICE.txt NOTICE.txt
>  bin/ozone sh key info /myvol/mybucket/NOTICE.txt
> The reported replication factor should be ONE.






[jira] [Commented] (HDDS-3918) ConcurrentModificationException in ContainerReportHandler.onMessage

2020-07-07 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152605#comment-17152605
 ] 

Sammi Chen commented on HDDS-3918:
--

[~jnp] I usually see it after an SCM restart. 

> ConcurrentModificationException in ContainerReportHandler.onMessage
> ---
>
> Key: HDDS-3918
> URL: https://issues.apache.org/jira/browse/HDDS-3918
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>
> 2020-07-03 14:51:45,489 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@8f6e7cb
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-07-03 14:51:45,648 [EventQueue-ContainerReportForContainerReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@49d2b84b
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
> at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:127)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)






[jira] [Updated] (HDDS-3931) Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3931:
---
Status: Patch Available  (was: In Progress)

> Maven warning due to deprecated expression pom.artifactId
> -
>
> Key: HDDS-3931
> URL: https://issues.apache.org/jira/browse/HDDS-3931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>
> {code:title=mvn clean}
> [INFO] Scanning for projects...
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> ...
> {code}
> Same warning in {{hadoop-hdds/pom.xml}} was fixed during review of HDDS-3875, 
> but the one in {{hadoop-ozone/pom.xml}} was left.






[jira] [Updated] (HDDS-3913) Recon build should ignore proxies

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3913:
---
Labels:   (was: pull-request-available)

> Recon build should ignore proxies
> -
>
> Key: HDDS-3913
> URL: https://issues.apache.org/jira/browse/HDDS-3913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Fix For: 0.6.0
>
>
> [https://github.com/eirslett/frontend-maven-plugin] used by Recon to install 
> pnpm incorrectly passes proxy parameters from maven settings. The frontend 
> maven plugin should ignore proxy for pnpm to avoid build failures when proxy 
> is configured.
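
The plugin exposes a switch to stop inheriting Maven's proxy settings for npm; 
the sketch below shows that documented flag. Whether the same flag (or a pnpm 
analogue) is what the fix actually uses is an assumption.

```xml
<!-- Sketch: frontend-maven-plugin's documented switch for npm; applying the
     analogous setting for the pnpm goal is an assumption about the fix. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <configuration>
    <npmInheritsProxyConfigFromMaven>false</npmInheritsProxyConfigFromMaven>
  </configuration>
</plugin>
```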






[jira] [Updated] (HDDS-3913) Recon build should ignore proxies

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3913:
-
Labels: pull-request-available  (was: )

> Recon build should ignore proxies
> -
>
> Key: HDDS-3913
> URL: https://issues.apache.org/jira/browse/HDDS-3913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/eirslett/frontend-maven-plugin] used by Recon to install 
> pnpm incorrectly passes proxy parameters from maven settings. The frontend 
> maven plugin should ignore proxy for pnpm to avoid build failures when proxy 
> is configured.






[jira] [Updated] (HDDS-3913) Recon build should ignore proxies

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3913:
---
   Fix Version/s: 0.6.0
Target Version/s:   (was: 0.6.0)
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Recon build should ignore proxies
> -
>
> Key: HDDS-3913
> URL: https://issues.apache.org/jira/browse/HDDS-3913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> [https://github.com/eirslett/frontend-maven-plugin] used by Recon to install 
> pnpm incorrectly passes proxy parameters from maven settings. The frontend 
> maven plugin should ignore proxy for pnpm to avoid build failures when proxy 
> is configured.






[jira] [Updated] (HDDS-3913) Recon build should ignore proxies

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3913:
---
Labels:   (was: pull-request-available)

> Recon build should ignore proxies
> -
>
> Key: HDDS-3913
> URL: https://issues.apache.org/jira/browse/HDDS-3913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> [https://github.com/eirslett/frontend-maven-plugin] used by Recon to install 
> pnpm incorrectly passes proxy parameters from maven settings. The frontend 
> maven plugin should ignore proxy for pnpm to avoid build failures when proxy 
> is configured.






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1159: HDDS-3913. Recon build should ignore proxies

2020-07-07 Thread GitBox


adoroszlai commented on pull request #1159:
URL: https://github.com/apache/hadoop-ozone/pull/1159#issuecomment-654707281


   Thanks @vivekratnavel for the fix and @GlenGeng for the review.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1159: HDDS-3913. Recon build should ignore proxies

2020-07-07 Thread GitBox


adoroszlai merged pull request #1159:
URL: https://github.com/apache/hadoop-ozone/pull/1159


   






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1121: HDDS-3432. Enable TestBlockDeletion test cases.

2020-07-07 Thread GitBox


adoroszlai commented on pull request #1121:
URL: https://github.com/apache/hadoop-ozone/pull/1121#issuecomment-654703364


   Thanks @lokeshj1703 for updating the patch.  I still see 1/20 failure due to 
timeout at `TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:174)`.
   
   https://github.com/adoroszlai/hadoop-ozone/runs/844624809






[jira] [Updated] (HDDS-3931) Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3931:
-
Labels: pull-request-available  (was: )

> Maven warning due to deprecated expression pom.artifactId
> -
>
> Key: HDDS-3931
> URL: https://issues.apache.org/jira/browse/HDDS-3931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>
> {code:title=mvn clean}
> [INFO] Scanning for projects...
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> ...
> {code}
> Same warning in {{hadoop-hdds/pom.xml}} was fixed during review of HDDS-3875, 
> but the one in {{hadoop-ozone/pom.xml}} was left.






[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1172: HDDS-3931. Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread GitBox


adoroszlai opened a new pull request #1172:
URL: https://github.com/apache/hadoop-ozone/pull/1172


   ## What changes were proposed in this pull request?
   
   Fix Maven warning caused by deprecated expression `pom.artifactId`:
   
   ```
   [INFO] Scanning for projects...
   [WARNING]
   [WARNING] Some problems were encountered while building the effective model 
for org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
   [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
${project.artifactId} instead.
   [WARNING]
   [WARNING] Some problems were encountered while building the effective model 
for org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
   [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
${project.artifactId} instead.
   ...
   ```
   
   https://issues.apache.org/jira/browse/HDDS-3931
   
   ## How was this patch tested?
   
   ```
   $ mvn -DskipTests clean package
   [INFO] Scanning for projects...
   [INFO] 

   [INFO] Detecting the operating system and CPU architecture
   ...
   [INFO] BUILD SUCCESS
   ```
   
   https://github.com/adoroszlai/hadoop-ozone/runs/844565409
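
   The fix itself is a one-token substitution in the POM, along these lines 
(the enclosing element is hypothetical; the actual occurrence in 
hadoop-ozone/pom.xml may be a different property reference):

   ```xml
   <!-- before: deprecated implicit "pom." prefix -->
   <finalName>${pom.artifactId}</finalName>
   <!-- after: explicit "project." prefix -->
   <finalName>${project.artifactId}</finalName>
   ```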






[jira] [Resolved] (HDDS-3707) UUID can be non unique for a huge samples

2020-07-07 Thread maobaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong resolved HDDS-3707.
--
  Assignee: maobaolong
Resolution: Won't Fix

> UUID can be non unique for a huge samples
> -
>
> Key: HDDS-3707
> URL: https://issues.apache.org/jira/browse/HDDS-3707
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, Ozone Manager, SCM
>Affects Versions: 0.7.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
>  Labels: Triaged
>
> Now, we use UUIDs as IDs in many places, for example DataNodeId and 
> pipelineId. I believe the chance of a collision is very small, but if one 
> ever happens, we are in trouble.






[GitHub] [hadoop-ozone] cku328 commented on pull request #1170: HDDS-3910. JooqCodeGenerator interrupted but still alive

2020-07-07 Thread GitBox


cku328 commented on pull request #1170:
URL: https://github.com/apache/hadoop-ozone/pull/1170#issuecomment-654680208


   Thanks @adoroszlai  for the review.






[GitHub] [hadoop-ozone] cku328 opened a new pull request #1171: HDDS-3932. Hide jOOQ logo message from the log output on compile

2020-07-07 Thread GitBox


cku328 opened a new pull request #1171:
URL: https://github.com/apache/hadoop-ozone/pull/1171


   ## What changes were proposed in this pull request?
   
   Add a system property to hide this self-advertising message from the log output.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3932
   
   ## How was this patch tested?
   
   Check compiled logs.
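   For context, jOOQ's banner can typically be suppressed via the `org.jooq.no-logo` system property, which jOOQ documents for this purpose. A hypothetical sketch of passing it to the process that runs code generation during the Maven build (the plugin coordinates and surrounding configuration are illustrative, not Recon's actual pom):
   
   ```xml
   <!-- Illustrative only: the real change may set the property elsewhere.
        org.jooq.no-logo=true tells jOOQ's JooqLogger to skip the banner. -->
   <plugin>
     <groupId>org.codehaus.mojo</groupId>
     <artifactId>exec-maven-plugin</artifactId>
     <configuration>
       <systemProperties>
         <systemProperty>
           <key>org.jooq.no-logo</key>
           <value>true</value>
         </systemProperty>
       </systemProperties>
     </configuration>
   </plugin>
   ```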






[jira] [Updated] (HDDS-3932) Hide jOOQ logo message from the log output on compile

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3932:
-
Labels: pull-request-available  (was: )

> Hide jOOQ logo message from the log output on compile
> -
>
> Key: HDDS-3932
> URL: https://issues.apache.org/jira/browse/HDDS-3932
> Project: Hadoop Distributed Data Store
>  Issue Type: Wish
>  Components: Ozone Recon
>Reporter: Neo Yang
>Assignee: Neo Yang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> When Ozone Recon _(org.apache.hadoop:hadoop-ozone-recon)_ compiles, it prints 
> out this self-ad message:
> {code:java}
> 2020-07-07 15:39:05,719 INFO  jooq.Constants (JooqLogger.java:info(338)) - 
> [jOOQ ASCII-art logo banner]
> @@  Thank you for using jOOQ 3.11.9
> {code}






[jira] [Created] (HDDS-3932) Hide jOOQ logo message from the log output on compile

2020-07-07 Thread Neo Yang (Jira)
Neo Yang created HDDS-3932:
--

 Summary: Hide jOOQ logo message from the log output on compile
 Key: HDDS-3932
 URL: https://issues.apache.org/jira/browse/HDDS-3932
 Project: Hadoop Distributed Data Store
  Issue Type: Wish
  Components: Ozone Recon
Reporter: Neo Yang
Assignee: Neo Yang
 Fix For: 0.6.0


When Ozone Recon _(org.apache.hadoop:hadoop-ozone-recon)_ compiles, it prints 
out this self-ad message:
{code:java}
2020-07-07 15:39:05,719 INFO  jooq.Constants (JooqLogger.java:info(338)) - 
[jOOQ ASCII-art logo banner]
@@  Thank you for using jOOQ 3.11.9
{code}






[jira] [Updated] (HDDS-3931) Maven warning due to deprecated expression pom.artifactId

2020-07-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3931:
---
Description: 
{code:title=mvn clean}
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
[WARNING] The expression ${pom.artifactId} is deprecated. Please use 
${project.artifactId} instead.
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
[WARNING] The expression ${pom.artifactId} is deprecated. Please use 
${project.artifactId} instead.
...
{code}

Same warning in {{hadoop-hdds/pom.xml}} was fixed during review of HDDS-3875, 
but the one in {{hadoop-ozone/pom.xml}} was left.

  was:
{code}
The expression ${pom.artifactId} is deprecated. Please use 
${project.artifactId} instead.
{code}


> Maven warning due to deprecated expression pom.artifactId
> -
>
> Key: HDDS-3931
> URL: https://issues.apache.org/jira/browse/HDDS-3931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>
> {code:title=mvn clean}
> [INFO] Scanning for projects...
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-interface-client:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-common:jar:0.6.0-SNAPSHOT
> [WARNING] The expression ${pom.artifactId} is deprecated. Please use 
> ${project.artifactId} instead.
> ...
> {code}
> Same warning in {{hadoop-hdds/pom.xml}} was fixed during review of HDDS-3875, 
> but the one in {{hadoop-ozone/pom.xml}} was left.
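The fix is mechanical: rename the deprecated property reference in the remaining pom. A sketch of the change (the element context is illustrative; the actual location in {{hadoop-ozone/pom.xml}} may differ):

```xml
<!-- Before: triggers "The expression ${pom.artifactId} is deprecated" -->
<finalName>${pom.artifactId}-${project.version}</finalName>

<!-- After: use the project.* prefix, as the Maven warning suggests -->
<finalName>${project.artifactId}-${project.version}</finalName>
```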





