[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #816: HDDS-3381. OzoneManager starts 2 OzoneManagerDoubleBuffer for HA cluster.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #816: HDDS-3381. OzoneManager 
starts 2 OzoneManagerDoubleBuffer for HA cluster.
URL: https://github.com/apache/hadoop-ozone/pull/816#discussion_r407884806
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 ##
 @@ -106,6 +106,12 @@
   private final OzoneManager impl;
   private OzoneManagerDoubleBuffer ozoneManagerDoubleBuffer;
 
+
+  public OzoneManagerRequestHandler(OzoneManager om) {
 
 Review comment:
  Should the isRatisEnabled flag be passed here, with the ozoneManagerDoubleBuffer 
then assigned using a getter in OzoneManagerProtocolServerSideTranslatorPB.java?
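   
   A rough sketch of that shape (the extra constructor parameters and the getter 
name below are assumptions for illustration, not the actual patch):
   
   ```java
   // Hypothetical wiring: on the non-Ratis path the translator owns the buffer
   // and the handler reuses it instead of constructing a second one.
   public OzoneManagerRequestHandler(OzoneManager om, boolean isRatisEnabled,
       OzoneManagerProtocolServerSideTranslatorPB translator) {
     this.impl = om;
     if (!isRatisEnabled) {
       this.ozoneManagerDoubleBuffer = translator.getOzoneManagerDoubleBuffer();
     }
   }
   ```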





[jira] [Updated] (HDDS-3380) MiniOzoneHAClusterImpl#initOMRatisConf will reset the configs and causes for test failures

2020-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3380:
-
Labels: pull-request-available  (was: )

> MiniOzoneHAClusterImpl#initOMRatisConf will reset the configs and causes for 
> test failures
> --
>
> Key: HDDS-3380
> URL: https://issues.apache.org/jira/browse/HDDS-3380
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: HA, test
>Affects Versions: 0.5.0
>Reporter: Uma Maheswara Rao G
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> While I was debugging some code paths using MiniOzoneCluster, for example in 
> TestOzoneHAManager: the test intends to trigger snapshots at a threshold of 50, 
> and that value was configured and passed to MiniOzoneHACluster. But 
> MiniOzoneHAClusterImpl#initOMRatisConf silently resets it to 100L. So the test 
> expects a snapshot to trigger after 50 transactions, but it never does.
>  
> It will keep waiting even after rolling at 50:
> {code:java}
> GenericTestUtils.waitFor(() -> {
>   if (ozoneManager.getRatisSnapshotIndex() > 0) {
>     return true;
>   }
>   return false;
> }, 1000, 10);
> {code}
>  
> {quote}2020-04-12 03:54:21,296 
> [omNode-1@group-523986131536-SegmentedRaftLogWorker] INFO 
> segmented.SegmentedRaftLogWorker (SegmentedRaftLogWorker.java:execute(583)) - 
> omNode-1@group-523986131536-SegmentedRaftLogWorker: created new log segment 
> /Users/ugangumalla/Work/repos/hadoop-ozone/hadoop-ozone/integration-test/target/test-dir/MiniOzoneClusterImpl-fce544cd-3a80-4b0b-ac92-463cf391975c/omNode-1/ratis/c9bc4cf4-3bc3-3c60-a66b-523986131536/current/log_inprogress_49
> {quote}
>  
> So, respecting user-passed configurations will fix the issue. I will post the 
> patch shortly.
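
A minimal sketch of that idea (illustrative only; the key name is written out here 
and may differ from the real OMConfigKeys constant):

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class RespectUserConfigSketch {
  private static final String SNAPSHOT_THRESHOLD_KEY =
      "ozone.om.ratis.snapshot.auto.trigger.threshold";

  // Only apply the hard-coded default when the caller has not configured a value,
  // so a threshold passed by the test (e.g. 50) is respected.
  static void initOMRatisConf(OzoneConfiguration conf) {
    if (conf.get(SNAPSHOT_THRESHOLD_KEY) == null) {
      conf.setLong(SNAPSHOT_THRESHOLD_KEY, 100L);
    }
  }
}
{code}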






[GitHub] [hadoop-ozone] umamaheswararao opened a new pull request #817: HDDS-3380. MiniOzoneHAClusterImpl#initOMRatisConf will reset the conf…

2020-04-13 Thread GitBox
umamaheswararao opened a new pull request #817: HDDS-3380. 
MiniOzoneHAClusterImpl#initOMRatisConf will reset the conf…
URL: https://github.com/apache/hadoop-ozone/pull/817
 
 
   …igs and causes for test failures
   
   ## What changes were proposed in this pull request?
   
   Moved the config settings into the test class that needs them, instead of 
silently resetting test-modified configs. Also took the liberty of changing the 
assertion from NOT_A_FILE to DIR_NOT_FOUND, which seems to be the correct code.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3380 
   
   ## How was this patch tested?
   
   Ran the TestOzoneManagerHA tests. The snapshot test passes after fixing the 
resetting issue.
   





[jira] [Updated] (HDDS-3381) OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters

2020-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3381:
-
Labels: MiniOzoneChaosCluster pull-request-available  (was: 
MiniOzoneChaosCluster)

> OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters
> --
>
> Key: HDDS-3381
> URL: https://issues.apache.org/jira/browse/HDDS-3381
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, test
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>
> OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters. In the 
> following example for 3 OM HA instances, 6 OzoneManagerDoubleBuffer instances 
> were created.
> {code}
> ➜  chaos-2020-04-12-20-21-11-IST grep canFlush stack1
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
> {code}






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #816: HDDS-3381. OzoneManager starts 2 OzoneManagerDoubleBuffer for HA cluster.

2020-04-13 Thread GitBox
bharatviswa504 opened a new pull request #816: HDDS-3381. OzoneManager starts 2 
OzoneManagerDoubleBuffer for HA cluster.
URL: https://github.com/apache/hadoop-ozone/pull/816
 
 
   ## What changes were proposed in this pull request?
   
   Removed the initialization of the doubleBuffer in 
OzoneManagerProtocolServerSideTranslatorPB.java when Ratis is enabled, as in that 
case the doubleBuffer is only needed in the StateMachine.
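   
   A hedged sketch of the resulting behaviour (construction details simplified; the 
helper below is hypothetical):
   
   ```java
   // Only the non-Ratis path creates a buffer here; with Ratis enabled the
   // OzoneManagerStateMachine owns the only OzoneManagerDoubleBuffer.
   if (isRatisEnabled) {
     this.ozoneManagerDoubleBuffer = null;
   } else {
     this.ozoneManagerDoubleBuffer = createDoubleBufferForNonHA(); // hypothetical helper
   }
   ```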
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3381
   
   ## How was this patch tested?
   
   Existing tests should cover this.
   





[jira] [Updated] (HDDS-3219) Allow users to list all volumes

2020-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3219:
-
Labels: pull-request-available  (was: )

> Allow users to list all volumes
> ---
>
> Key: HDDS-3219
> URL: https://issues.apache.org/jira/browse/HDDS-3219
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Users should be able to see the complete list of volumes in the system.






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #815: HDDS-3219. Write operation when both OM followers are shutdown.

2020-04-13 Thread GitBox
bharatviswa504 opened a new pull request #815: HDDS-3219. Write operation when 
both OM followers are shutdown.
URL: https://github.com/apache/hadoop-ozone/pull/815
 
 
   ## What changes were proposed in this pull request?
   
   Added a new parameter for the OM RPC client timeout. This way, it only 
affects the OM RPC client.
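   
   A small sketch of how such a client-side timeout could be read (the key name 
matches the docker setting used below; the default value here is an assumption):
   
   ```java
   import java.util.concurrent.TimeUnit;
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;
   
   public class OmClientRpcTimeoutSketch {
     static long omClientRpcTimeoutMs(OzoneConfiguration conf) {
       // Applies only to the OM RPC client, leaving other Hadoop RPC clients alone.
       return conf.getTimeDuration("ozone.om.client.rpc.timeout",
           15 * 60 * 1000L, TimeUnit.MILLISECONDS);
     }
   }
   ```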
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3291
   
   ## How was this patch tested?
   
   Tested this on a docker cluster with the settings below. The leader-election 
timeout is increased to a large value so that an OM thinks it is the leader for a 
longer period even though it is not; the request is then accepted by that OM and, 
without the fix, the client retries forever.
   OZONE-SITE.XML_ozone.om.client.rpc.timeout=30s
   OZONE-SITE.XML_ozone.om.leader.election.minimum.timeout.duration=1m
   
   Now, with this patch, the request fails after 15 retries. And for the OM server 
which thinks it is the leader, we get a SocketTimeoutException and move on to the 
next OM.
   
   Logs:
   ```
   2020-04-13 21:59:44,667 [main] INFO  RetryInvocationHandler:411 - 
com.google.protobuf.ServiceException: java.net.UnknownHostException: Invalid 
host name: local host is: (unknown); destination host is: "om3":9862; 
java.net.UnknownHostException; For more details see:  
http://wiki.apache.org/hadoop/UnknownHost, while invoking 
$Proxy20.submitRequest over nodeId=om3,nodeAddress=om3:9862 after 13 failover 
attempts. Trying to failover immediately.
   2020-04-13 21:59:44,667 [main] INFO  RetryInvocationHandler:411 - 
com.google.protobuf.ServiceException: java.net.UnknownHostException: Invalid 
host name: local host is: (unknown); destination host is: "om1":9862; 
java.net.UnknownHostException; For more details see:  
http://wiki.apache.org/hadoop/UnknownHost, while invoking 
$Proxy20.submitRequest over nodeId=om1,nodeAddress=om1:9862 after 14 failover 
attempts. Trying to failover immediately.
   2020-04-13 22:00:14,677 [main] INFO  RetryInvocationHandler:411 - 
com.google.protobuf.ServiceException: java.net.SocketTimeoutException: Call 
From 531e9bfac0d9/172.24.0.4 to om2:9862 failed on socket timeout exception: 
java.net.SocketTimeoutException: 3 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/172.24.0.4:47798 remote=om2/172.24.0.7:9862]; For more details see:  
http://wiki.apache.org/hadoop/SocketTimeout, while invoking 
$Proxy20.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 15 failover 
attempts. Trying to failover immediately.
   2020-04-13 22:00:14,678 [main] ERROR OMFailoverProxyProvider:286 - Failed to 
connect to OMs: [nodeId=om1,nodeAddress=om1:9862, 
nodeId=om3,nodeAddress=om3:9862, nodeId=om2,nodeAddress=om2:9862]. Attempted 15 
failover
   ```
   





[jira] [Created] (HDDS-3384) Update SpringFramework

2020-04-13 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDDS-3384:
-

 Summary: Update SpringFramework
 Key: HDDS-3384
 URL: https://issues.apache.org/jira/browse/HDDS-3384
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Affects Versions: 0.4.1
Reporter: Wei-Chiu Chuang


We are on SpringFramework 5.1.3. We should update to newer versions (5.1.14 or 
5.2.x)

Also,
{code:xml|title=hadoop-ozone/recon-codegen/pom.xml}
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-jdbc</artifactId>
  <version>5.1.3.RELEASE</version>
</dependency>
{code}
It should specify the version with ${spring.version} instead of hard-coding it.






[jira] [Resolved] (HDDS-3383) Update Netty to 4.1.48.Final

2020-04-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDDS-3383.
---
Resolution: Later

HDDS-3177 updated Netty to 4.1.47. We will update it again later; no need to do it 
right now.

> Update Netty to 4.1.48.Final
> 
>
> Key: HDDS-3383
> URL: https://issues.apache.org/jira/browse/HDDS-3383
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.5.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final






[jira] [Resolved] (HDDS-2945) Implement ofs://: Add robot tests for mkdir

2020-04-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDDS-2945.
--
  Assignee: Xiaoyu Yao
Resolution: Fixed

> Implement ofs://: Add robot tests for mkdir
> ---
>
> Key: HDDS-2945
> URL: https://issues.apache.org/jira/browse/HDDS-2945
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to add extra robot test cases (in addition to the existing ones 
> adapted from o3fs) for ofs.






[jira] [Moved] (HDDS-3383) CLONE - Update Netty to 4.1.48.Final

2020-04-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved HADOOP-16983 to HDDS-3383:


  Key: HDDS-3383  (was: HADOOP-16983)
Affects Version/s: (was: 3.3.0)
   0.5.0
 Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
  Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> CLONE - Update Netty to 4.1.48.Final
> 
>
> Key: HDDS-3383
> URL: https://issues.apache.org/jira/browse/HDDS-3383
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.5.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final






[jira] [Updated] (HDDS-3383) Update Netty to 4.1.48.Final

2020-04-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-3383:
--
Summary: Update Netty to 4.1.48.Final  (was: CLONE - Update Netty to 
4.1.48.Final)

> Update Netty to 4.1.48.Final
> 
>
> Key: HDDS-3383
> URL: https://issues.apache.org/jira/browse/HDDS-3383
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.5.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final






[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #692: HDDS-3168. Improve read efficiency by merging a lot of RPC call getContainerWithPipeline into one

2020-04-13 Thread GitBox
xiaoyuyao commented on issue #692: HDDS-3168. Improve read efficiency by 
merging a lot of RPC call getContainerWithPipeline into one
URL: https://github.com/apache/hadoop-ozone/pull/692#issuecomment-613116591
 
 
   Thanks @runzhiwang  for the update. The latest change LGTM, +1. 
   
   





[jira] [Resolved] (HDDS-2976) Recon throws error while trying to get snapshot in secure environment

2020-04-13 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-2976.
-
Resolution: Pending Closed

Merged the PR.

> Recon throws error while trying to get snapshot in secure environment
> -
>
> Key: HDDS-2976
> URL: https://issues.apache.org/jira/browse/HDDS-2976
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Prashant Pogde
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recon throws the following exception while trying to get snapshot from OM in 
> a secure env:
> {code:java}
> 10:19:24.743 PM INFO OzoneManagerServiceProviderImpl Obtaining full snapshot 
> from Ozone Manager
> 10:19:24.754 PM ERROR OzoneManagerServiceProviderImpl Unable to obtain Ozone 
> Manager DB Snapshot. 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2020)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1127)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:394)
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:353)
>   at 
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141)
>   at 
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
>   at 
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
>   at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
>   at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
>   at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
>   at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
>   at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>   at 
> org.apache.hadoop.ozone.recon.ReconUtils.makeHttpCall(ReconUtils.java:232)
>   at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.getOzoneManagerDBSnapshot(OzoneManagerServiceProviderImpl.java:239)
>   at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.updateReconOmDBWithNewSnapshot(OzoneManagerServiceProviderImpl.java:267)
>   at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:358)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 10:19:24.755 PM ERROR OzoneManagerServiceProviderImpl Null snapshot location 
> got from OM.
> {code}






[GitHub] [hadoop-ozone] avijayanhwx commented on issue #783: HDDS-2976. Recon throws error while trying to get snapshot over https

2020-04-13 Thread GitBox
avijayanhwx commented on issue #783: HDDS-2976. Recon throws error while trying 
to get snapshot over https
URL: https://github.com/apache/hadoop-ozone/pull/783#issuecomment-613035510
 
 
   Thank you for the fix @prashantpogde, and the reviews @vivekratnavel & 
@adoroszlai.





[GitHub] [hadoop-ozone] avijayanhwx merged pull request #783: HDDS-2976. Recon throws error while trying to get snapshot over https

2020-04-13 Thread GitBox
avijayanhwx merged pull request #783: HDDS-2976. Recon throws error while 
trying to get snapshot over https
URL: https://github.com/apache/hadoop-ozone/pull/783
 
 
   





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407596897
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -748,19 +751,34 @@ public String toString() {
  */
 boolean iterate() throws IOException {
   LOG.trace("Iterating path {}", path);
+  List<String> keyList = new ArrayList<>();
   if (status.isDirectory()) {
 LOG.trace("Iterating directory:{}", pathKey);
 while (keyIterator.hasNext()) {
   BasicKeyInfo key = keyIterator.next();
   LOG.trace("iterating key:{}", key.getName());
-  if (!processKey(key.getName())) {
+  if (!key.getName().equals("")) {
+keyList.add(key.getName());
+  }
+  int batchSize = getConf().getInt("ozone.fs.iterate.batch-size", 1);
 
 Review comment:
   Instead of hard-coding the key, can you use 
`OzoneConfigKeys.OZONE_FS_ITERATE_BATCH_SIZE`?
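   
   For illustration only (the constant names are assumptions following the 
suggestion above, not necessarily the existing OzoneConfigKeys entries):
   
   ```java
   // Define the key and its default once, then read them through the constants.
   public static final String OZONE_FS_ITERATE_BATCH_SIZE =
       "ozone.fs.iterate.batch-size";
   public static final int OZONE_FS_ITERATE_BATCH_SIZE_DEFAULT = 1;
   ```
   
   and at the call site: `int batchSize = getConf().getInt(OZONE_FS_ITERATE_BATCH_SIZE, OZONE_FS_ITERATE_BATCH_SIZE_DEFAULT);`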





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407591434
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/RenameInfo.java
 ##
 @@ -0,0 +1,45 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.helpers;
+
+/**
+ * The data interface needed to the rename operation.
+ */
+public class RenameInfo {
+  private String  fromKey;
+  private  OmKeyInfo fromKeyValue;
+  private String toKey;
+
+  public RenameInfo(String fromKey, OmKeyInfo fromKeyValue, String toKey) {
+this.fromKey = fromKey;
+this.fromKeyValue = fromKeyValue;
+this.toKey = toKey;
+  }
+
+  public String getFromKey() {
+return fromKey;
+  }
+
+  public OmKeyInfo getFromKeyValue() {
+return fromKeyValue;
 
 Review comment:
   Can we do `fromKeyValue.getKeyName()`?





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407590881
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/RenameInfo.java
 ##
 @@ -0,0 +1,45 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.helpers;
+
+/**
+ * The data interface needed to the rename operation.
+ */
+public class RenameInfo {
+  private String  fromKey;
 
 Review comment:
   Do we need the `fromKey` member variable? It is duplicate info and we can 
get it from `fromKeyValue.getKeyName()`, right?
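   
   i.e. something like (a sketch of the suggestion):
   
   ```java
   // Drop the duplicate fromKey field and derive the name from the key info.
   public String getFromKey() {
     return fromKeyValue.getKeyName();
   }
   ```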





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407598506
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRenameRequest.java
 ##
 @@ -327,11 +331,13 @@ private OMRequest doPreExecute(OMRequest 
originalOmRequest) throws Exception {
* @return OMRequest
*/
   private OMRequest createRenameKeyRequest(String toKeyName) {
-KeyArgs keyArgs = KeyArgs.newBuilder().setKeyName(keyName)
+Map<String, String> renameKeyMap = new HashMap<>();
 
 Review comment:
   General comment for both the `delete` and `rename` batch APIs.
   
   Please add some test cases covering the `all, partial, no success` behaviors. Thanks!





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407563141
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
 ##
 @@ -288,6 +288,17 @@ OzoneInputStream getKey(String volumeName, String 
bucketName, String keyName)
   void deleteKey(String volumeName, String bucketName, String keyName)
   throws IOException;
 
+  /**
+   * Deletes key List.
+   * @param volumeName Name of the Volume
+   * @param bucketName Name of the Bucket
+   * @param keyNameList List of the Key
+   * @throws IOException
+   */
+  void deleteKeyList(String volumeName, String bucketName,
 
 Review comment:
   The naming-convention comment above applies here as well. Please take care of 
the same in all applicable places. Thanks!





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407558148
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -382,11 +382,25 @@ public void deleteKey(String key) throws IOException {
 proxy.deleteKey(volumeName, name, key);
   }
 
+  /**
+   * Deletes key from the bucket.
 
 Review comment:
   Can you please rephrase the javadoc comment to convey that it deletes a list of 
keys? How about something like below,
   
   "Deletes the given list of keys from the bucket"
   





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407573404
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -116,51 +119,53 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 boolean acquiredLock = false;
 OMClientResponse omClientResponse = null;
 Result result = null;
+List<OmKeyInfo> omKeyInfoList= new ArrayList<>();
 try {
-  // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
-  IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY);
-
-  String objectKey = omMetadataManager.getOzoneKey(
-  volumeName, bucketName, keyName);
-
+  if (keyNameList.size() ==0) {
 
 Review comment:
   Please correct the spacing around the `==` symbol: `if (keyNameList.size() == 0) {`





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407581371
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -116,51 +119,53 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 boolean acquiredLock = false;
 OMClientResponse omClientResponse = null;
 Result result = null;
+List<OmKeyInfo> omKeyInfoList= new ArrayList<>();
 try {
-  // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
-  IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY);
-
-  String objectKey = omMetadataManager.getOzoneKey(
-  volumeName, bucketName, keyName);
-
+  if (keyNameList.size() ==0) {
+throw new OMException("Key not found", KEY_NOT_FOUND);
+  }
   acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
-  volumeName, bucketName);
-
+  volumeName, bucketName);
   // Validate bucket and volume exists or not.
   validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
-
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
-  if (omKeyInfo == null) {
-throw new OMException("Key not found", KEY_NOT_FOUND);
+  Table<String, OmKeyInfo> keyTable = omMetadataManager.getKeyTable();
+  for (String keyName : keyNameList) {
+// check Acl
+checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY);
+String objectKey = omMetadataManager.getOzoneKey(
+volumeName, bucketName, keyName);
+OmKeyInfo omKeyInfo = keyTable.get(objectKey);
+if (omKeyInfo == null) {
 
 Review comment:
   What if one of the keys is not found in the list? Assume there are 10 keys and 
the 5th key is not found: what contract does the batch API provide, 
`executes all or executes till first failure or none of them`?
   
   Please also consider the `Check if this transaction is a replay of ratis logs.` 
validation check, or any other exception. As there are no strict 
recommendations, I'm keeping this open for discussion :-).





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407568953
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java
 ##
 @@ -70,6 +72,8 @@ private OmKeyArgs(String volumeName, String bucketName, 
String keyName,
 this.refreshPipeline = refreshPipeline;
 this.acls = acls;
 this.sortDatanodesInPipeline = sortDatanode;
+this.keyNameList = keyNameList;
 
 Review comment:
   Do we need the `String keyName` argument? Can you please fold the `String 
keyName` argument into the `keyNameList` argument, similar to what you have 
nicely refactored with `keyNameList.add(keyName);` in the deleteKey API?





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407561162
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -382,11 +382,25 @@ public void deleteKey(String key) throws IOException {
 proxy.deleteKey(volumeName, name, key);
   }
 
+  /**
+   * Deletes key from the bucket.
+   * @param keyList List of the key name to be deleted.
+   * @throws IOException
+   */
+  public void deleteKeyList(List<String> keyList) throws IOException {
+proxy.deleteKeyList(volumeName, name, keyList);
+  }
+
   public void renameKey(String fromKeyName, String toKeyName)
   throws IOException {
 proxy.renameKey(volumeName, name, fromKeyName, toKeyName);
   }
 
+  public void renameKey(Map<String, String> keyMap)
 
 Review comment:
   Can we follow the naming convention for the collection APIs?
   
   How about something like below? The same comment is applicable to the delete-list 
API as well.
   `#renameKeys`
   `#deleteKeys`
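   
   i.e. roughly (a sketch of the suggested names; the proxy signatures are 
assumptions):
   
   ```java
   public void deleteKeys(List<String> keyList) throws IOException {
     proxy.deleteKeys(volumeName, name, keyList);
   }
   
   public void renameKeys(Map<String, String> fromToKeyMap) throws IOException {
     proxy.renameKeys(volumeName, name, fromToKeyMap);
   }
   ```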





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407558500
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -382,11 +382,25 @@ public void deleteKey(String key) throws IOException {
 proxy.deleteKey(volumeName, name, key);
   }
 
+  /**
+   * Deletes key from the bucket.
+   * @param keyList List of the key name to be deleted.
+   * @throws IOException
+   */
+  public void deleteKeyList(List<String> keyList) throws IOException {
+proxy.deleteKeyList(volumeName, name, keyList);
+  }
+
   public void renameKey(String fromKeyName, String toKeyName)
 
 Review comment:
   Please add javadoc.





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
rakeshadr commented on a change in pull request #814: HDDS-3286. 
BasicOzoneFileSystem  support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r407588198
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -127,112 +131,124 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OmKeyInfo fromKeyValue = null;
 String toKey = null, fromKey = null;
 Result result = null;
+List<RenameInfo> renameInfoList = new ArrayList<>();
 try {
-  if (toKeyName.length() == 0 || fromKeyName.length() == 0) {
-throw new OMException("Key name is empty",
-OMException.ResultCodes.INVALID_KEY_NAME);
+  if (renameKeyMap.size() == 0) {
+throw new OMException("Key not found " + fromKey, KEY_NOT_FOUND);
   }
-  // check Acls to see if user has access to perform delete operation on
-  // old key and create operation on new key
-  checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName,
-  IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY);
-  checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
-  IAccessAuthorizer.ACLType.CREATE, OzoneObj.ResourceType.KEY);
-
   acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
-  volumeName, bucketName);
-
-  // Validate bucket and volume exists or not.
-  validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
-
-  // Check if toKey exists
-  fromKey = omMetadataManager.getOzoneKey(volumeName, bucketName,
-  fromKeyName);
-  toKey = omMetadataManager.getOzoneKey(volumeName, bucketName, toKeyName);
-  OmKeyInfo toKeyValue = omMetadataManager.getKeyTable().get(toKey);
-
-  if (toKeyValue != null) {
-
-// Check if this transaction is a replay of ratis logs.
-if (isReplay(ozoneManager, toKeyValue, trxnLogIndex)) {
-
-  // Check if fromKey is still in the DB and created before this
-  // replay.
-  // For example, lets say we have the following sequence of
-  // transactions.
-  // Trxn 1 : Create Key1
-  // Trnx 2 : Rename Key1 to Key2 -> Deletes Key1 and Creates Key2
-  // Now if these transactions are replayed:
-  // Replay Trxn 1 : Creates Key1 again as Key1 does not exist in 
DB
-  // Replay Trxn 2 : Key2 is not created as it exists in DB and the
-  // request would be deemed a replay. But Key1
-  // is still in the DB and needs to be deleted.
-  fromKeyValue = omMetadataManager.getKeyTable().get(fromKey);
-  if (fromKeyValue != null) {
-// Check if this replay transaction was after the fromKey was
-// created. If so, we have to delete the fromKey.
-if (ozoneManager.isRatisEnabled() &&
-trxnLogIndex > fromKeyValue.getUpdateID()) {
-  // Add to cache. Only fromKey should be deleted. ToKey already
-  // exists in DB as this transaction is a replay.
-  result = Result.DELETE_FROM_KEY_ONLY;
-  Table<String, OmKeyInfo> keyTable = omMetadataManager
-  .getKeyTable();
-  keyTable.addCacheEntry(new CacheKey<>(fromKey),
-  new CacheValue<>(Optional.absent(), trxnLogIndex));
+  volumeName, bucketName);
+  for (Map.Entry<String, String> renameKeyEntry : renameKeyMap.entrySet()) {
+String fromKeyName = renameKeyEntry.getKey();
 
 Review comment:
   The comment above is applicable to `renameKeys` as well.
   
   What if one of the keys is not found in the list? Assume there are 10 keys and 
the 5th key is not found: what contract does the batch API provide, 
`executes all or executes till first failure or none of them`?





[GitHub] [hadoop-ozone] xiaoyuyao edited a comment on issue #751: HDDS-3321. Prometheus endpoint should not have Authentication filter …

2020-04-13 Thread GitBox
xiaoyuyao edited a comment on issue #751: HDDS-3321. Prometheus endpoint should 
not have Authentication filter …
URL: https://github.com/apache/hadoop-ozone/pull/751#issuecomment-612984665
 
 
   Thanks @elek for the pointer. Adding token/password support for prometheus 
endpoint sounds good to me. With token support, I think it makes more sense to 
skip SPNEGO for the prometheus endpoint. 





[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #751: HDDS-3321. Prometheus endpoint should not have Authentication filter …

2020-04-13 Thread GitBox
xiaoyuyao commented on issue #751: HDDS-3321. Prometheus endpoint should not 
have Authentication filter …
URL: https://github.com/apache/hadoop-ozone/pull/751#issuecomment-612984665
 
 
   Thanks @elek for the pointer. Adding token/password support for prometheus 
endpoint sounds good to me. With token support, I think it makes more sense to 
skip SPNEGO for the prometheus endpoint. 





[jira] [Comment Edited] (HDDS-3378) OzoneManager group init failed because of incorrect snapshot directory location

2020-04-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082477#comment-17082477
 ] 

Bharat Viswanadham edited comment on HDDS-3378 at 4/13/20, 4:48 PM:


I see this is happening because, if ozone.om.ratis.snapshot.dir is not set, 
we default to ozone.om.ratis.storage.dir. And if ozone.om.ratis.storage.dir 
is not defined either, we fall back to ozone.metadata.dirs.

As the snapshot directory is not a Ratis group, we are hitting this exception. One 
way to fix this is to create the Raft storage locations under a "ratis" path within 
ozone.om.ratis.storage.dir, instead of using the ozone.om.ratis.storage.dir value 
directly, and the snapshot directory under a "snapshot" path. This way, Ratis will 
not hit this error, as we don't directly use the value of 
ozone.om.ratis.storage.dir.

So, the directory structure looks like

ozone.om.ratis.storage.dir -> /var/om


{code:java}
For ratis storage dir
/var/om/ratis

And for snapshot dir
/var/om/snapshot
{code}


Previously this is like

{code:java}
For ratis storage dir
/var/om/

And for snapshot dir
/var/om/snapshot
{code}

This is one way to fix it; if there is a better way to do this, I am happy to go 
forward with that instead.
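
A minimal sketch of the proposed layout (illustrative only, not the actual patch):

{code:java}
import java.io.File;

public class OmRatisDirLayoutSketch {
  // Both directories are derived from ozone.om.ratis.storage.dir, so Ratis only
  // ever scans <storage>/ratis for group directories and never sees "snapshot".
  static File raftStorageDir(String omRatisStorageDir) {
    return new File(omRatisStorageDir, "ratis");
  }

  static File ratisSnapshotDir(String omRatisStorageDir) {
    return new File(omRatisStorageDir, "snapshot");
  }
}
{code}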





was (Author: bharatviswa):
I see this is happening because ozone.om.ratis.snapshot.dir if it is not set, 
we default to ozone.om.ratis.storage.dir. Even if ozone.om.ratis.storage.dir. 
is not defined we fall back to ozone.metadata.dirs.

As this is not a ratis group, we are hitting this exception. I see one way to 
fix this is, create ozone.om.ratis.storage.dir with "ratis" path for Raft 
Storage locations, instead of ozone.om.ratis.storage.dir value directly. And 
for a snapshot directory with "snapshot", in this way, ratis will not hit this 
error, as we don't directly use the value of ozone.om.ratis.storage.dir.

So, the directory structure looks like

ozone.om.ratis.storage.dir -> /var/om


{code:java}
For ratis storage dir
/var/om/ratis

And for snapshot dir
/var/om/snapshot
{code}


Previously this is like

{code:java}
For ratis storage dir
/var/om/

And for snapshot dir
/var/om/snapshot
{code}





> OzoneManager group init failed because of incorrect snapshot directory 
> location
> ---
>
> Key: HDDS-3378
> URL: https://issues.apache.org/jira/browse/HDDS-3378
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, test
>Affects Versions: 0.6.0
>Reporter: Mukul Kumar Singh
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> OzoneManager group init failed because of incorrect snapshot directory 
> location
> {code}
> 2020-04-11 20:07:57,180 [pool-59-thread-1] INFO  server.RaftServerConfigKeys 
> (ConfUtils.java:logGet(44)) - raft.server.storage.dir = 
> [/tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis]
>  (custom)
> 2020-04-11 20:07:57,180 [pool-59-thread-1] INFO  impl.RaftServerProxy 
> (RaftServerProxy.java:lambda$null$0(191)) - omNode-3: found a subdirectory 
> /tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis/snapshot
> 2020-04-11 20:07:57,181 [pool-59-thread-1] WARN  impl.RaftServerProxy 
> (RaftServerProxy.java:lambda$null$0(197)) - omNode-3: Failed to initialize 
> the group directory 
> /tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis/snapshot.
>   Ignoring it
> java.lang.IllegalArgumentException: Invalid UUID string: snapshot
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$null$0(RaftServerProxy.java:192)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$initGroups$1(RaftServerProxy.java:189)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.ut

[jira] [Commented] (HDDS-3378) OzoneManager group init failed because of incorrect snapshot directory location

2020-04-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082477#comment-17082477
 ] 

Bharat Viswanadham commented on HDDS-3378:
--

I see this is happening because, if ozone.om.ratis.snapshot.dir is not set, 
we default to ozone.om.ratis.storage.dir. And if ozone.om.ratis.storage.dir 
is not defined either, we fall back to ozone.metadata.dirs.

As the snapshot directory is not a Ratis group, we are hitting this exception. One 
way to fix this is to create the Raft storage locations under a "ratis" path within 
ozone.om.ratis.storage.dir, instead of using the ozone.om.ratis.storage.dir value 
directly, and the snapshot directory under a "snapshot" path. This way, Ratis will 
not hit this error, as we don't directly use the value of 
ozone.om.ratis.storage.dir.

So, the directory structure looks like

ozone.om.ratis.storage.dir -> /var/om


{code:java}
For ratis storage dir
/var/om/ratis

And for snapshot dir
/var/om/snapshot
{code}


Previously this is like

{code:java}
For ratis storage dir
/var/om/

And for snapshot dir
/var/om/snapshot
{code}





> OzoneManager group init failed because of incorrect snapshot directory 
> location
> ---
>
> Key: HDDS-3378
> URL: https://issues.apache.org/jira/browse/HDDS-3378
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, test
>Affects Versions: 0.6.0
>Reporter: Mukul Kumar Singh
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> OzoneManager group init failed because of incorrect snapshot directory 
> location
> {code}
> 2020-04-11 20:07:57,180 [pool-59-thread-1] INFO  server.RaftServerConfigKeys 
> (ConfUtils.java:logGet(44)) - raft.server.storage.dir = 
> [/tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis]
>  (custom)
> 2020-04-11 20:07:57,180 [pool-59-thread-1] INFO  impl.RaftServerProxy 
> (RaftServerProxy.java:lambda$null$0(191)) - omNode-3: found a subdirectory 
> /tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis/snapshot
> 2020-04-11 20:07:57,181 [pool-59-thread-1] WARN  impl.RaftServerProxy 
> (RaftServerProxy.java:lambda$null$0(197)) - omNode-3: Failed to initialize 
> the group directory 
> /tmp/chaos-2020-04-11-20-05-25-IST/MiniOzoneClusterImpl-80aafc97-1b12-4bc0-9baf-7f42185b0995/omNode-3/ratis/snapshot.
>   Ignoring it
> java.lang.IllegalArgumentException: Invalid UUID string: snapshot
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$null$0(RaftServerProxy.java:192)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$initGroups$1(RaftServerProxy.java:189)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> at 
> java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
> at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
> at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
> at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
> at 
> org.apache.ratis.server.impl.RaftSer

[jira] [Assigned] (HDDS-3381) OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters

2020-04-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-3381:


Assignee: Bharat Viswanadham

> OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters
> --
>
> Key: HDDS-3381
> URL: https://issues.apache.org/jira/browse/HDDS-3381
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, test
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> OzoneManager starts 2 OzoneManagerDoubleBuffer for HA clusters. In the 
> following example for 3 OM HA instances, 6 OzoneManagerDoubleBuffer instances 
> were created.
> {code}
> ➜  chaos-2020-04-12-20-21-11-IST grep canFlush stack1
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.canFlush(OzoneManagerDoubleBuffer.java:344)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 opened a new pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 opened a new pull request #782: HDDS-3352. Support for native 
ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782
 
 
   ## What changes were proposed in this pull request?
   
   This jira is to bring in support for native ozone filesystem client using 
libhdfs.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3352
   
   ## How was this patch tested?
   
   Tested manually


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 closed pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 closed pull request #782: HDDS-3352. Support for native ozone 
filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3350) Ozone Retry Policy Improvements

2020-04-13 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-3350:
---

Assignee: Lokesh Jain

> Ozone Retry Policy Improvements
> ---
>
> Key: HDDS-3350
> URL: https://issues.apache.org/jira/browse/HDDS-3350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: Retry Behaviour in Ozone Client.pdf, Retry Behaviour in 
> Ozone Client_Updated.pdf
>
>
> Currently any ozone client request can spend a huge amount of time in 
> retries, and the ozone client can retry its requests very aggressively. The 
> waiting time can thus be very high before a client request fails. Further, 
> aggressive retries by the ratis client used by ozone can bog down a ratis 
> pipeline leader. This Jira aims to change the current retry behavior in the 
> Ozone client. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2576) Handle InterruptedException in ratis related files

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle reassigned HDDS-2576:
-

Assignee: Daniel delValle

> Handle InterruptedException in ratis related files
> --
>
> Key: HDDS-2576
> URL: https://issues.apache.org/jira/browse/HDDS-2576
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OzoneManagerDoubleBuffer: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-VxKcVY8lQ4Zrtu&open=AW5md-VxKcVY8lQ4Zrtu]
> OzoneManagerRatisClient: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-VsKcVY8lQ4Zrtf&open=AW5md-VsKcVY8lQ4Zrtf]
>  
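
The usual remediation for findings like these (a generic sketch only, not the 
actual patch for OzoneManagerDoubleBuffer or OzoneManagerRatisClient) is to 
restore the thread's interrupt status instead of swallowing the exception:

{code:java}
// Generic pattern: re-assert the interrupted status so code further up the
// call stack can still observe that the thread was interrupted.
try {
  Thread.sleep(1000L);              // stands in for any interruptible call
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
  // typically also log the event and stop the current loop or task here
}
{code}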



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2576) Handle InterruptedException in ratis related files

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle updated HDDS-2576:
--
Status: Patch Available  (was: Open)

> Handle InterruptedException in ratis related files
> --
>
> Key: HDDS-2576
> URL: https://issues.apache.org/jira/browse/HDDS-2576
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OzoneManagerDoubleBuffer: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-VxKcVY8lQ4Zrtu&open=AW5md-VxKcVY8lQ4Zrtu]
> OzoneManagerRatisClient: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-VsKcVY8lQ4Zrtf&open=AW5md-VsKcVY8lQ4Zrtf]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2569) Handle InterruptedException in LogStreamServlet

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle updated HDDS-2569:
--
Status: Patch Available  (was: Open)

> Handle InterruptedException in LogStreamServlet
> ---
>
> Key: HDDS-2569
> URL: https://issues.apache.org/jira/browse/HDDS-2569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-yJKcVY8lQ4ZsIf&open=AW5md-yJKcVY8lQ4ZsIf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2575) Handle InterruptedException in LogSubcommand

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle reassigned HDDS-2575:
-

Assignee: Daniel delValle

> Handle InterruptedException in LogSubcommand
> 
>
> Key: HDDS-2575
> URL: https://issues.apache.org/jira/browse/HDDS-2575
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-mpKcVY8lQ4ZsAH&open=AW5md-mpKcVY8lQ4ZsAH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2575) Handle InterruptedException in LogSubcommand

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle updated HDDS-2575:
--
Status: Patch Available  (was: In Progress)

> Handle InterruptedException in LogSubcommand
> 
>
> Key: HDDS-2575
> URL: https://issues.apache.org/jira/browse/HDDS-2575
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-mpKcVY8lQ4ZsAH&open=AW5md-mpKcVY8lQ4ZsAH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2569) Handle InterruptedException in LogStreamServlet

2020-04-13 Thread Daniel delValle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel delValle reassigned HDDS-2569:
-

Assignee: Daniel delValle

> Handle InterruptedException in LogStreamServlet
> ---
>
> Key: HDDS-2569
> URL: https://issues.apache.org/jira/browse/HDDS-2569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-yJKcVY8lQ4ZsIf&open=AW5md-yJKcVY8lQ4ZsIf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3377) Remove guava 26.0-android jar

2020-04-13 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-3377.
-
Fix Version/s: 0.6.0
   Resolution: Fixed

Thanks for the contribution [~weichiu]. The PR looks good to me. I have merged 
this.

> Remove guava 26.0-android jar
> -
>
> Key: HDDS-3377
> URL: https://issues.apache.org/jira/browse/HDDS-3377
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I missed this during HDDS-3000.
> guava-26.0-android is not used, but if it's in the classpath (copied 
> explicitly in the pom file), it could potentially be loaded and cause a 
> runtime error.
> {noformat}
> $ find . -name guava*
> ./hadoop-ozone/ozonefs-lib-legacy/target/classes/libs/META-INF/maven/com.google.guava/guava
> ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-26.0-android.jar
> ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-28.2-jre.jar
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2568) Handle InterruptedException in OzoneContainer

2020-04-13 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-2568:

Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks for the contribution [~delvalle.dani]. I have merged this pull request. 
Also I have added you as a contributor.

> Handle InterruptedException in OzoneContainer
> -
>
> Key: HDDS-2568
> URL: https://issues.apache.org/jira/browse/HDDS-2568
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9sKcVY8lQ4ZsUh&open=AW5md-9sKcVY8lQ4ZsUh
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2568) Handle InterruptedException in OzoneContainer

2020-04-13 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-2568:
---

Assignee: Daniel delValle

> Handle InterruptedException in OzoneContainer
> -
>
> Key: HDDS-2568
> URL: https://issues.apache.org/jira/browse/HDDS-2568
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Daniel delValle
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9sKcVY8lQ4ZsUh&open=AW5md-9sKcVY8lQ4ZsUh
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on issue #801: HDDS-2568. Handle InterruptedException in OzoneContainer

2020-04-13 Thread GitBox
mukul1987 commented on issue #801: HDDS-2568. Handle InterruptedException in 
OzoneContainer
URL: https://github.com/apache/hadoop-ozone/pull/801#issuecomment-612913718
 
 
   Thanks for the contribution @danidelvalle . +1 the patch looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 merged pull request #801: HDDS-2568. Handle InterruptedException in OzoneContainer

2020-04-13 Thread GitBox
mukul1987 merged pull request #801: HDDS-2568. Handle InterruptedException in 
OzoneContainer
URL: https://github.com/apache/hadoop-ozone/pull/801
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on issue #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on issue #782: HDDS-3352. Support for native ozone 
filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#issuecomment-612894239
 
 
   > I feel we should rename the filesystem to o3 in place of oz. I mean lets 
call it libo3fs and o3fs_read and write and every where else ?
   
   I have renamed all the files (o3fs_read, write, etc.) and the directories 
(libo3fs, libo3fs-examples).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support 
for native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407472321
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
 
 Review comment:
   snprintf just appends a null character at the end, and we don't need that 
null character; that's why I declared string[len + 2]. We have to pass len + 3 
to snprintf because that is what the function requires.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support 
for native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407472539
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
+snprintf(string, len + 3, "o3fs://%s.%s.%s", bucket, vol, host);
 
 Review comment:
   Added #define O3FS "o3fs://"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support 
for native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407471088
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
+snprintf(string, len + 3, "o3fs://%s.%s.%s", bucket, vol, host);
 
 Review comment:
   Example added


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support 
for native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407471222
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.h
 ##
 @@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef OZFS_DOT_H
+#define OZFS_DOT_H
+
+#include "hdfs/hdfs.h"
+
+struct hdfs_internal;
+typedef struct hdfs_internal* ozfsFS;
+
+struct hdfsFile_internal;
+typedef struct hdfsFile_internal* ozfsFile;
+
+ozfsFS ozfsConnect(const char* nn, tPort port, const char* bucket, const char* 
volume);
+
+ozfsFile ozfsOpenFile(ozfsFS fs, const char *path, int flags, int bufferSize, 
short replication, tSize blockSize);
 
 Review comment:
   Resolved.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
aryangupta1998 commented on a change in pull request #782: HDDS-3352. Support 
for native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407471348
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
 
 Review comment:
   Added


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3297) TestOzoneClientKeyGenerator is flaky

2020-04-13 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082269#comment-17082269
 ] 

Shashikant Banerjee commented on HDDS-3297:
---

[~adoroszlai], you are right. The issue seems to be happening intermittently in 
Ratis with a specific sequence of events. It is not related to the test. 

> TestOzoneClientKeyGenerator is flaky
> 
>
> Key: HDDS-3297
> URL: https://issues.apache.org/jira/browse/HDDS-3297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Priority: Critical
> Attachments: 
> org.apache.hadoop.ozone.freon.TestOzoneClientKeyGenerator-output.txt
>
>
> Sometimes it's hanging and stopped after a timeout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 merged pull request #808: HDDS-3377. Remove guava 26.0-android jar.

2020-04-13 Thread GitBox
mukul1987 merged pull request #808: HDDS-3377. Remove guava 26.0-android jar.
URL: https://github.com/apache/hadoop-ozone/pull/808
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on issue #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
mukul1987 commented on issue #814: HDDS-3286. BasicOzoneFileSystem  support 
batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814#issuecomment-612878195
 
 
   Please have a look at https://issues.apache.org/jira/browse/HDDS-2939. The 
problem is being addressed there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3286) BasicOzoneFileSystem support batchDelete and batchRename

2020-04-13 Thread Mukul Kumar Singh (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082265#comment-17082265
 ] 

Mukul Kumar Singh commented on HDDS-3286:
-

Hi [~micahzhao], this problem is being fixed via HDDS-2939. Please have a look 
there and let us know your opinion.

> BasicOzoneFileSystem  support batchDelete and batchRename
> -
>
> Key: HDDS-3286
> URL: https://issues.apache.org/jira/browse/HDDS-3286
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>     Currently, deleting a directory means getting all the keys under it and 
> then deleting them one by one, and the same goes for rename. This makes for 
> poor performance.
>     In a test that deleted a path with 100,000 files, deletion took 3718.70 
> sec and rename took 7327.936 sec.
>     We plan to change this part to a batch operation to improve performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on issue #716: HDDS-3155. Improved ozone client flush implementation to make it faster.

2020-04-13 Thread GitBox
bshashikant commented on issue #716: HDDS-3155. Improved ozone client flush 
implementation to make it faster.
URL: https://github.com/apache/hadoop-ozone/pull/716#issuecomment-612874083
 
 
I checked the behaviour in HDFS, and it seems the approach here makes ozone 
flush() similar to what HDFS flush() currently does.
   
   We could also have an implementation, as @xiaoyuyao mentioned, like a 
time-based flush along the lines of S3AFlush()?
   
   @xiaoyuyao , what do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant removed a comment on issue #716: HDDS-3155. Improved ozone client flush implementation to make it faster.

2020-04-13 Thread GitBox
bshashikant removed a comment on issue #716: HDDS-3155. Improved ozone client 
flush implementation to make it faster.
URL: https://github.com/apache/hadoop-ozone/pull/716#issuecomment-606602244
 
 
   I think there will be an alternate proposal to schedule a flush job over 
time rather than doing a flush at a specific data-size boundary. Can we close 
this PR for now?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc opened a new pull request #716: HDDS-3155. Improved ozone client flush implementation to make it faster.

2020-04-13 Thread GitBox
captainzmc opened a new pull request #716: HDDS-3155. Improved ozone client 
flush implementation to make it faster.
URL: https://github.com/apache/hadoop-ozone/pull/716
 
 
   ## What changes were proposed in this pull request?
   
   When we run an MR job (with 1000 maps) based on OzoneFileSystem, the 
appmaster pauses for more than 40 minutes after map and reduce have finished 
100%.
   `20/03/05 14:43:33 INFO mapreduce.Job: map 100% reduce 100% `
   `20/03/05 15:29:52 INFO mapreduce.Job: Job job_1583385253878_0002 completed 
successfully`
   It turns out that the appmaster writes all the task events to the log one by 
one, calling flush once for each one. This operation is very time consuming in 
ozone.
   
   HDFS currently exposes two flush entry points, flush() and hflush().
   flush(): flushes the data from the client buffer into the client packet 
(dfs.write.packet.size, default 64k). If the packet is not full, it will not be 
sent to the datanode.
   hflush(): each invocation sends the data in the buffer to the datanode.
   
   Now, ozone's flush() is more similar to HDFS's hflush(). This PR adds a 
flush implementation similar to HDFS's flush(), using 
ozone.client.stream.buffer.flush.delay to control whether it is enabled (not 
enabled by default). When it is enabled and flush() is called, we check whether 
the data in the current buffer is larger than ozone.client.stream.buffer.size: 
if it is, we send it to the datanode; otherwise, we do not.
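
   A rough sketch of that conditional flush (hedged; the class and field names 
below are assumptions for illustration, not the actual stream code in this PR):

```java
// Hedged sketch of the delayed-flush idea described above.
class DelayedFlushStreamSketch {            // hypothetical class, illustration only
  private final boolean flushDelayEnabled;  // ozone.client.stream.buffer.flush.delay
  private final long bufferSizeLimit;       // ozone.client.stream.buffer.size
  private long bufferedBytes;               // bytes currently held in the client buffer

  DelayedFlushStreamSketch(boolean flushDelayEnabled, long bufferSizeLimit) {
    this.flushDelayEnabled = flushDelayEnabled;
    this.bufferSizeLimit = bufferSizeLimit;
  }

  void write(byte[] data) {
    bufferedBytes += data.length;           // real code would also stage the bytes
  }

  void flush() {
    if (flushDelayEnabled && bufferedBytes < bufferSizeLimit) {
      return;                               // buffer not full yet: skip the round trip
    }
    sendBufferToDatanode();
  }

  private void sendBufferToDatanode() {
    bufferedBytes = 0;                      // placeholder for the actual network write
  }
}
```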
   
   Testing shows the flush performance is significantly improved. The job is no 
longer blocked; it takes about 1 second to exit after the MR job finishes.
   `20/03/25 11:04:04 INFO mapreduce.Job:  map 100% reduce 100%`
   `20/03/25 11:04:05 INFO mapreduce.Job: Job job_1585104739905_0002 completed 
successfully`
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3155
   
   ## How was this patch tested?
   
   Run YARN on Ozone, run the TestDFSIO job below with a thousand maps, and 
check the exit time after map and reduce reach 100%.
   `hadoop jar  /path/of/hadoop-mapreduce-client-jobclient-2.8.5-tests.jar 
TestDFSIO -write -nrFiles 1000 -fileSize 1KB  -resFile /tmp/dfsio-write.out`
   
   Add the following configuration in ozone-site.xml and repeat the above 
command to see the execution.
   `<property>`
`   <name>ozone.client.stream.buffer.flush.delay</name>`
`   <value>true</value>`
`</property>`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc opened a new pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete and batchRename.

2020-04-13 Thread GitBox
captainzmc opened a new pull request #814: HDDS-3286. BasicOzoneFileSystem  
support batchDelete and batchRename.
URL: https://github.com/apache/hadoop-ozone/pull/814
 
 
   ## What changes were proposed in this pull request?
   
   Currently, deleting a directory means getting all the keys under it and then 
deleting them one by one, and the same goes for rename. This makes for poor 
performance. In a test that deleted a directory with 100,000 files, deletion 
took 3718.70 sec and rename took 7327.936 sec.
   With this PR and batch-size set to 100, deleting and renaming a directory of 
100,000 files take 62.498 sec and 46.002 sec respectively. Performance improved 
nearly 100 times.
   
   https://issues.apache.org/jira/browse/HDDS-3286
   
   ## How was this patch tested?
   Set the conf in your code or in ozone-site.xml; the default value of 
ozone.fs.iterate.batch-size is 1.
   Configuration conf = new Configuration();
   conf.setInt("ozone.fs.iterate.batch-size", 100);
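
   For reference, a minimal usage sketch (hedged; the o3fs URI below is a 
placeholder, and the batching itself happens inside BasicOzoneFileSystem per 
this PR):

```java
// Hedged usage sketch only; assumes the ozone.fs.iterate.batch-size key added
// by this PR and a placeholder o3fs URI of the form o3fs://bucket.volume.host.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BatchDeleteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("ozone.fs.iterate.batch-size", 100);   // default is 1

    FileSystem fs = FileSystem.get(
        URI.create("o3fs://bucket.volume.om-host"), conf);

    // Recursive delete of a large directory; with this PR the underlying keys
    // are removed in batches instead of one call per key.
    fs.delete(new Path("/big-dir"), true);
    fs.close();
  }
}
```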
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3286) BasicOzoneFileSystem support batchDelete and batchRename

2020-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3286:
-
Labels: pull-request-available  (was: )

> BasicOzoneFileSystem  support batchDelete and batchRename
> -
>
> Key: HDDS-3286
> URL: https://issues.apache.org/jira/browse/HDDS-3286
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>
>     Currently, deleting a directory means getting all the keys under it and 
> then deleting them one by one, and the same goes for rename. This makes for 
> poor performance.
>     In a test that deleted a path with 100,000 files, deletion took 3718.70 
> sec and rename took 7327.936 sec.
>     We plan to change this part to a batch operation to improve performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-3352. Support for 
native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407391833
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
 
 Review comment:
   why len +2 here and len+3 on the next line ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-3352. Support for 
native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407391573
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
 
 Review comment:
   Let's have { and } after the if and at the end of the if statement.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-3352. Support for 
native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407391314
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
+snprintf(string, len + 3, "o3fs://%s.%s.%s", bucket, vol, host);
 
 Review comment:
   Let's have o3fs as a #define at the start of the file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-3352. Support for 
native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407390426
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.c
 ##
 @@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ozfs.h"
+#include "hdfs/hdfs.h"
+#include 
+#include 
+#include 
+#include 
+
+
+ozfsFS ozfsConnect(const char *host, tPort port, const char *bucket, const 
char *vol)
+{
+struct hdfsBuilder *bld = hdfsNewBuilder();
+int len = 0;
+if (!bld)
+return NULL;
+len = strlen(host) + strlen(bucket) + strlen(vol) + strlen("o3fs://");
+char string[len + 2];
+snprintf(string, len + 3, "o3fs://%s.%s.%s", bucket, vol, host);
 
 Review comment:
   Can you please add an example here? I mean, what does the final URI look 
like?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #782: HDDS-3352. Support for native ozone filesystem client using libhdfs.

2020-04-13 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-3352. Support for 
native ozone filesystem client using libhdfs.
URL: https://github.com/apache/hadoop-ozone/pull/782#discussion_r407390873
 
 

 ##
 File path: hadoop-ozone/native-client/libozone/ozfs.h
 ##
 @@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef OZFS_DOT_H
+#define OZFS_DOT_H
+
+#include "hdfs/hdfs.h"
+
+struct hdfs_internal;
+typedef struct hdfs_internal* ozfsFS;
+
+struct hdfsFile_internal;
+typedef struct hdfsFile_internal* ozfsFile;
+
+ozfsFS ozfsConnect(const char* nn, tPort port, const char* bucket, const char* 
volume);
+
+ozfsFile ozfsOpenFile(ozfsFS fs, const char *path, int flags, int bufferSize, 
short replication, tSize blockSize);
 
 Review comment:
   More than 80 characters in a line.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3352) Support for native ozone filesystem client using libhdfs

2020-04-13 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-3352:
--
Status: Patch Available  (was: Open)

> Support for native ozone filesystem client using libhdfs
> 
>
> Key: HDDS-3352
> URL: https://issues.apache.org/jira/browse/HDDS-3352
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This jira is to bring in support for native ozone filesystem client using 
> libhdfs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org