[jira] [Resolved] (HDDS-2893) Handle replay of KeyPurge Request

2020-01-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2893.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Handle replay of KeyPurge Request
> -
>
> Key: HDDS-2893
> URL: https://issues.apache.org/jira/browse/HDDS-2893
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If KeyPurgeRequest is replayed, we do not want to purge keys that were 
> created after the original purge request was received. This could happen if a 
> key was deleted, purged, and then created and deleted again. If the purge 
> request was replayed, it would purge the key deleted after the original purge 
> request completed.
> Hence, to maintain idempotence, we should only purge those keys from the 
> DeletedKeys table that have updateID < transactionLogIndex of the request.
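The filtering rule above can be sketched as follows. This is an illustrative sketch, not the actual OM code; `DeletedKey`, `keysToPurge`, and the field names are hypothetical stand-ins for the real OM structures.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the idempotent purge filter described above.
public class PurgeReplayFilter {
  static final class DeletedKey {
    final String name;
    final long updateID;  // transaction index of the delete that queued this key
    DeletedKey(String name, long updateID) { this.name = name; this.updateID = updateID; }
  }

  // Purge only keys whose updateID precedes this request's transaction log
  // index; a key re-deleted after the original purge carries a higher
  // updateID and therefore survives a replayed purge request.
  static List<DeletedKey> keysToPurge(List<DeletedKey> deletedKeys, long trxnLogIndex) {
    List<DeletedKey> result = new ArrayList<>();
    for (DeletedKey k : deletedKeys) {
      if (k.updateID < trxnLogIndex) {
        result.add(k);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<DeletedKey> deleted = new ArrayList<>();
    deleted.add(new DeletedKey("key1", 10));  // deleted before the purge at index 20
    deleted.add(new DeletedKey("key2", 25));  // deleted again after the original purge
    List<DeletedKey> purged = keysToPurge(deleted, 20);
    if (purged.size() != 1 || !purged.get(0).name.equals("key1")) {
      throw new AssertionError("replayed purge must skip key2");
    }
    System.out.println("purged only: " + purged.get(0).name);
  }
}
```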



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #450: HDDS-2893. Handle replay of KeyPurge Request.

2020-01-31 Thread GitBox
bharatviswa504 merged pull request #450: HDDS-2893. Handle replay of KeyPurge 
Request.
URL: https://github.com/apache/hadoop-ozone/pull/450
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2971) Issue with current replay checks in OM requests

2020-01-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2971:
-
Parent: HDDS-505
Issue Type: Sub-task  (was: Bug)

> Issue with current replay checks in OM requests
> ---
>
> Key: HDDS-2971
> URL: https://issues.apache.org/jira/browse/HDDS-2971
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> # Create Key - T1
>  # Commit Key - T2
>  # Remove Acl - T3
> With the current replay logic, replays will hit an ACL exception when the 
> ACLs of the user who created the entity were removed by the T3 transaction.
>  
> Current logic:
>  # Check ACL
>  # Then verify whether it is a replay or not.






[jira] [Created] (HDDS-2971) Issue with current replay checks in OM requests

2020-01-31 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2971:


 Summary: Issue with current replay checks in OM requests
 Key: HDDS-2971
 URL: https://issues.apache.org/jira/browse/HDDS-2971
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


# Create Key - T1
 # Commit Key - T2
 # Remove Acl - T3

With the current replay logic, replays will hit an ACL exception when the 
ACLs of the user who created the entity were removed by the T3 transaction.

 

Current logic:
 # Check ACL
 # Then verify whether it is a replay or not.
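The ordering fix implied above is to detect a replay before enforcing ACLs, so a replayed T2 is skipped instead of failing with an ACL exception after T3 removed the creator's ACLs. A minimal sketch with hypothetical names (`process`, `aclAllows`), not the actual OM request classes:

```java
// Hypothetical sketch: replay detection must come before the ACL check.
public class ReplayBeforeAclCheck {
  enum Result { REPLAY, ACL_DENIED, APPLIED }

  // updateID: highest transaction index already applied to the object.
  static Result process(long trxnLogIndex, long updateID, boolean aclAllows) {
    if (trxnLogIndex <= updateID) {
      return Result.REPLAY;        // already applied; take no further action
    }
    if (!aclAllows) {
      return Result.ACL_DENIED;    // a genuine new request without permission
    }
    return Result.APPLIED;
  }

  public static void main(String[] args) {
    // Replay of T2 (commit key) arriving after T3 (remove ACL, updateID now 3):
    if (process(2, 3, false) != Result.REPLAY) {
      throw new AssertionError("replay must be detected before the ACL check");
    }
    System.out.println("replayed T2 skipped, no ACL exception");
  }
}
```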






[jira] [Created] (HDDS-2970) Add Tombstone Marker to Ozone Objects to avoid all replays

2020-01-31 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2970:


 Summary: Add Tombstone Marker to Ozone Objects to avoid all replays
 Key: HDDS-2970
 URL: https://issues.apache.org/jira/browse/HDDS-2970
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
  Components: om
Reporter: Hanisha Koneru


When Ratis is enabled for OM, it is possible that transactions will be 
replayed. Though most replays would be caught and ignored, there are some 
scenarios in which a replayed transaction could go undetected.

For example, Key1 is created and deleted. If the Key1 create transaction is 
replayed, there is no way to determine that it is a replay, as Key1 would no 
longer exist in the DB. 

To avoid such replay scenarios, we could add something similar to HBase's 
Tombstone marker to entries when they are deleted. A background job can later 
delete these objects (once a snapshot is taken with snapshot index greater than 
the object's updateID).
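The proposal can be sketched as below. All names here (`TombstoneSketch`, `isReplayOfCreate`, `purgeTombstones`) are hypothetical; the real design would live in the OM tables, but the mechanics are the same: a delete leaves a marker carrying its transaction index, and a background job drops markers once a snapshot covers them.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed tombstone scheme.
public class TombstoneSketch {
  // key -> {updateID, tombstoneFlag}; a real table would store full objects.
  final Map<String, long[]> table = new HashMap<>();

  void delete(String key, long trxnLogIndex) {
    // Keep a tombstone (flag = 1) instead of removing the entry outright.
    table.put(key, new long[]{trxnLogIndex, 1});
  }

  boolean isReplayOfCreate(String key, long trxnLogIndex) {
    long[] entry = table.get(key);
    // A tombstone (or live entry) with updateID >= the incoming index means replay.
    return entry != null && trxnLogIndex <= entry[0];
  }

  // Background cleanup: drop tombstones once a snapshot covers their updateID.
  void purgeTombstones(long snapshotIndex) {
    table.entrySet().removeIf(
        e -> e.getValue()[1] == 1 && e.getValue()[0] <= snapshotIndex);
  }

  public static void main(String[] args) {
    TombstoneSketch db = new TombstoneSketch();
    db.delete("key1", 7);                    // key1 created earlier, deleted at T7
    if (!db.isReplayOfCreate("key1", 3)) {   // replayed create from T3
      throw new AssertionError("tombstone should expose the replay");
    }
    db.purgeTombstones(10);                  // snapshot index 10 >= 7
    if (db.isReplayOfCreate("key1", 3)) {
      throw new AssertionError("tombstone should be gone after cleanup");
    }
    System.out.println("tombstone detected replay, then was purged");
  }
}
```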






[jira] [Updated] (HDDS-2957) listBuckets result should include the exact match of bucketPrefix

2020-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2957:
-
Labels: pull-request-available  (was: )

> listBuckets result should include the exact match of bucketPrefix
> -
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> e.g. If we have a bucket named {{bucket-12345}}, currently 
> {{OzoneVolume.listBuckets("bucket-12345")}} will NOT return {{bucket-12345}} 
> in its result.
> Please see my attached test case for this. - I will move the test from 
> {{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
> probably) in the PR.
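The expected semantics can be illustrated with an in-memory sorted map: listing should start at the prefix itself (inclusive), so an exact match is returned. This is a sketch of the intended behavior, not the Ozone client code; `listWithPrefix` is a hypothetical helper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of the expected listing semantics: a bucket whose name exactly
// equals the prefix should appear in the result, matching listVolumes.
public class PrefixListing {
  static List<String> listWithPrefix(TreeMap<String, String> buckets, String prefix) {
    List<String> result = new ArrayList<>();
    // tailMap(prefix, true) starts at the exact match instead of skipping past it.
    for (String name : buckets.tailMap(prefix, true).keySet()) {
      if (!name.startsWith(prefix)) {
        break;  // sorted order: once past the prefix range, stop
      }
      result.add(name);
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, String> buckets = new TreeMap<>();
    buckets.put("bucket-12345", "");
    buckets.put("bucket-123456", "");
    List<String> out = listWithPrefix(buckets, "bucket-12345");
    if (!out.contains("bucket-12345")) {
      throw new AssertionError("exact match must be included");
    }
    System.out.println(out);  // [bucket-12345, bucket-123456]
  }
}
```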






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #504: HDDS-2953. Handle replay of S3 requests

2020-01-31 Thread GitBox
bharatviswa504 commented on a change in pull request #504: HDDS-2953. Handle 
replay of S3 requests
URL: https://github.com/apache/hadoop-ozone/pull/504#discussion_r373733885
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
 ##
 @@ -118,6 +127,28 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
 
+  // Check if this transaction is a replay.
+  // We check only the KeyTable here and not the OpenKeyTable. In case
+  // this transaction is a replay but the transaction was not committed
+  // to the KeyTable, then we recreate the key in OpenKey table. This is
+  // okay as all the subsequent transactions would also be replayed and
+  // the openKey table would eventually reach the same state.
+  // The reason we do not check the OpenKey table is to avoid a DB read
+  // in regular non-replay scenario.
+  String dbKeyName = omMetadataManager.getOzoneKey(volumeName, bucketName,
 
 Review comment:
   Looks like checking keyTable is not required here. I feel we can ignore it 
here, as it adds an unnecessary extra DB read for all MPU initiate cases.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #504: HDDS-2953. Handle replay of S3 requests

2020-01-31 Thread GitBox
bharatviswa504 commented on a change in pull request #504: HDDS-2953. Handle 
replay of S3 requests
URL: https://github.com/apache/hadoop-ozone/pull/504#discussion_r373733885
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
 ##
 @@ -118,6 +127,28 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
 
+  // Check if this transaction is a replay.
+  // We check only the KeyTable here and not the OpenKeyTable. In case
+  // this transaction is a replay but the transaction was not committed
+  // to the KeyTable, then we recreate the key in OpenKey table. This is
+  // okay as all the subsequent transactions would also be replayed and
+  // the openKey table would eventually reach the same state.
+  // The reason we do not check the OpenKey table is to avoid a DB read
+  // in regular non-replay scenario.
+  String dbKeyName = omMetadataManager.getOzoneKey(volumeName, bucketName,
 
 Review comment:
   Looks like checking keyTable is not required here. I feel we can ignore it 
here.





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #450: HDDS-2893. Handle replay of KeyPurge Request.

2020-01-31 Thread GitBox
hanishakoneru commented on issue #450: HDDS-2893. Handle replay of KeyPurge 
Request.
URL: https://github.com/apache/hadoop-ozone/pull/450#issuecomment-580958415
 
 
   Sorry, I missed pushing the new commit. Added it now.





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #450: HDDS-2893. Handle replay of KeyPurge Request.

2020-01-31 Thread GitBox
bharatviswa504 commented on issue #450: HDDS-2893. Handle replay of KeyPurge 
Request.
URL: https://github.com/apache/hadoop-ozone/pull/450#issuecomment-580949746
 
 
   @hanishakoneru I don't see any new commit; the last commit is from 
yesterday.





[jira] [Resolved] (HDDS-2956) Handle Replay of AllocateBlock request

2020-01-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2956.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Handle Replay of AllocateBlock request
> --
>
> Key: HDDS-2956
> URL: https://issues.apache.org/jira/browse/HDDS-2956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To ensure that allocate block operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMAllocateBlockRequest is made idempotent in this Jira.
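The idempotence rule stated above reduces to a single comparison. A minimal sketch (the method name `isReplay` mirrors the pattern discussed on these PRs, but this is not the actual OM source):

```java
// Minimal sketch of the replay rule: a transaction is a replay when its
// log index is at or below the object's recorded updateID.
public class ReplayCheck {
  static boolean isReplay(long trxnLogIndex, long updateID) {
    return trxnLogIndex <= updateID;
  }

  public static void main(String[] args) {
    if (!isReplay(5, 5)) throw new AssertionError("equal index is a replay");
    if (!isReplay(4, 5)) throw new AssertionError("older index is a replay");
    if (isReplay(6, 5)) throw new AssertionError("newer index is not a replay");
    System.out.println("replay checks pass");
  }
}
```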






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-31 Thread GitBox
bharatviswa504 merged pull request #505: HDDS-2956. Handle Replay of 
AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-31 Thread GitBox
bharatviswa504 commented on issue #505: HDDS-2956. Handle Replay of 
AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505#issuecomment-580949286
 
 
   Thank You @hanishakoneru for the contribution.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #510: HDDS-2958. Handle replay of OM Volume ACL requests

2020-01-31 Thread GitBox
bharatviswa504 commented on a change in pull request #510: HDDS-2958. Handle 
replay of OM Volume ACL requests
URL: https://github.com/apache/hadoop-ozone/pull/510#discussion_r373723123
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
 ##
 @@ -80,29 +80,36 @@ public String getVolumeName() {
 
   @Override
   OMClientResponse onSuccess(OMResponse.Builder omResponse,
-  OmVolumeArgs omVolumeArgs, boolean result){
+  OmVolumeArgs omVolumeArgs, boolean aclApplied){
 omResponse.setSetAclResponse(OzoneManagerProtocolProtos.SetAclResponse
-.newBuilder().setResponse(result).build());
-return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+.newBuilder().setResponse(aclApplied).build());
+return new OMVolumeAclOpResponse(omResponse.build(), omVolumeArgs);
   }
 
   @Override
   OMClientResponse onFailure(OMResponse.Builder omResponse,
   IOException ex) {
-return new OMVolumeAclOpResponse(null,
-createErrorOMResponse(omResponse, ex));
+return new OMVolumeAclOpResponse(createErrorOMResponse(omResponse, ex));
   }
 
   @Override
-  void onComplete(IOException ex) {
-if (ex == null) {
-  if (LOG.isDebugEnabled()) {
-LOG.debug("Set acls: {} to volume: {} success!",
-getAcls(), getVolumeName());
-  }
-} else {
-  LOG.error("Set acls {} to volume {} failed!",
-  getAcls(), getVolumeName(), ex);
+  void onComplete(Result result, IOException ex, long trxnLogIndex) {
 
 Review comment:
   Yes, missed it.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #510: HDDS-2958. Handle replay of OM Volume ACL requests

2020-01-31 Thread GitBox
bharatviswa504 commented on a change in pull request #510: HDDS-2958. Handle 
replay of OM Volume ACL requests
URL: https://github.com/apache/hadoop-ozone/pull/510#discussion_r373723057
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
 ##
 @@ -73,53 +74,71 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 boolean lockAcquired = false;
+Result result = null;
 try {
   // check Acl
   if (ozoneManager.getAclsEnabled()) {
 checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
 OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
 volume, null, null);
   }
-  lockAcquired =
-  omMetadataManager.getLock().acquireWriteLock(VOLUME_LOCK, volume);
+  lockAcquired = omMetadataManager.getLock().acquireWriteLock(
+  VOLUME_LOCK, volume);
   String dbVolumeKey = omMetadataManager.getVolumeKey(volume);
   omVolumeArgs = omMetadataManager.getVolumeTable().get(dbVolumeKey);
   if (omVolumeArgs == null) {
 throw new OMException(OMException.ResultCodes.VOLUME_NOT_FOUND);
   }
 
+  // Check if this transaction is a replay of ratis logs.
+  // If this is a replay, then the response has already been returned to
+  // the client. So take no further action and return a dummy
+  // OMClientResponse.
+  if (isReplay(ozoneManager, omVolumeArgs.getUpdateID(),
+  trxnLogIndex)) {
+throw new OMReplayException();
+  }
+
   // result is false upon add existing acl or remove non-existing acl
-  boolean result = true;
+  boolean applyAcl = true;
   try {
 omVolumeAclOp.apply(ozoneAcls, omVolumeArgs);
   } catch (OMException ex) {
-result = false;
+applyAcl = false;
   }
 
-  if (result) {
+  if (applyAcl) {
+omVolumeArgs.setUpdateID(trxnLogIndex);
 
 Review comment:
   No, on replay it would not be considered a replay, as the updateID would be 
less than the transaction ID. Since we are making a best effort to catch all 
replay cases, I think we can set the updateID even when applyAcl is false; it 
will help detect replays.
   
   > So this would add an unnecessary DB op.
   
   We are using DoubleBuffer and Batch, so we should be fine here.





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #487: [WIP] HDDS-2883. Change the default client settings accordingly with change in default chunk size.

2020-01-31 Thread GitBox
bharatviswa504 commented on issue #487: [WIP] HDDS-2883. Change the default 
client settings accordingly with change in default chunk size.
URL: https://github.com/apache/hadoop-ozone/pull/487#issuecomment-580947575
 
 
   > Thanks @bharatviswa504 for working on this. Shouldn't the default values 
be changed in `OzoneConfigKeys`, too?
   
   Thank You @adoroszlai for the review. Yes, we need to update. Updated in the 
latest commit.





[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #511: HDDS-2665. Merge master to HDDS-2665-ofs branch

2020-01-31 Thread GitBox
xiaoyuyao merged pull request #511: HDDS-2665. Merge master to HDDS-2665-ofs 
branch
URL: https://github.com/apache/hadoop-ozone/pull/511
 
 
   





[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #511: HDDS-2665. Merge master to HDDS-2665-ofs branch

2020-01-31 Thread GitBox
xiaoyuyao commented on issue #511: HDDS-2665. Merge master to HDDS-2665-ofs 
branch
URL: https://github.com/apache/hadoop-ozone/pull/511#issuecomment-580941811
 
 
   +1 from me too. Thanks for the merge and the fix, @smengcl.





[jira] [Updated] (HDDS-2957) listBuckets result should include the exact match of bucketPrefix

2020-01-31 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Summary: listBuckets result should include the exact match of bucketPrefix  
(was: listBuckets result excludes the exact match of bucketPrefix)

> listBuckets result should include the exact match of bucketPrefix
> -
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> e.g. If we have a bucket named {{bucket-12345}}, currently 
> {{OzoneVolume.listBuckets("bucket-12345")}} will NOT return {{bucket-12345}} 
> in its result.
> Please see my attached test case for this. - I will move the test from 
> {{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
> probably) in the PR.






[jira] [Updated] (HDDS-2957) listBuckets result excludes the exact match of bucketPrefix

2020-01-31 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Description: 
{{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
{{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
*exact* match, while {{listVolumes}} doesn't.

e.g. If we have a bucket named {{bucket-12345}}, currently 
{{OzoneVolume.listBuckets("bucket-12345")}} will NOT return {{bucket-12345}} in 
its result.

Please see my attached test case for this. - I will move the test from 
{{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
probably) in the PR.

  was:
{{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
{{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
*exact* match, while {{listVolumes}} doesn't.

e.g. If we have a bucket named {{bucket-12345}}, currently 
{{OzoneVolume.listBuckets("bucket-12345")}} result will NOT have 
{{bucket-12345}} in its result.

Please see my attached test case for this. - I will move the test from 
{{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
probably) in the PR.


> listBuckets result excludes the exact match of bucketPrefix
> ---
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> e.g. If we have a bucket named {{bucket-12345}}, currently 
> {{OzoneVolume.listBuckets("bucket-12345")}} will NOT return {{bucket-12345}} 
> in its result.
> Please see my attached test case for this. - I will move the test from 
> {{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
> probably) in the PR.






[jira] [Updated] (HDDS-2957) listBuckets result excludes the exact match of bucketPrefix

2020-01-31 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Description: 
{{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
{{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
*exact* match, while {{listVolumes}} doesn't.

e.g. If we have a bucket named {{bucket-12345}}, currently 
{{OzoneVolume.listBuckets("bucket-12345")}} result will NOT have 
{{bucket-12345}} in its result.

Please see my attached test case for this. - I will move the test from 
{{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
probably) in the PR.

  was:
{{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
{{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
*exact* match, while {{listVolumes}} doesn't.

Please see my attached test case for this. - I know {{TestOzoneFileSystem}} 
won't be the best place for this unit test. Just to prove a point here.


> listBuckets result excludes the exact match of bucketPrefix
> ---
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> e.g. If we have a bucket named {{bucket-12345}}, currently 
> {{OzoneVolume.listBuckets("bucket-12345")}} result will NOT have 
> {{bucket-12345}} in its result.
> Please see my attached test case for this. - I will move the test from 
> {{TestOzoneFileSystem}} to a proper place ({{TestOzoneRpcClientAbstract}}, 
> probably) in the PR.






[jira] [Created] (HDDS-2969) Implement ofs://: Add contract test and integration test

2020-01-31 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-2969:


 Summary: Implement ofs://: Add contract test and integration test
 Key: HDDS-2969
 URL: https://issues.apache.org/jira/browse/HDDS-2969
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Siyao Meng









[GitHub] [hadoop-ozone] smengcl commented on issue #511: HDDS-2665. Merge master to HDDS-2665-ofs branch

2020-01-31 Thread GitBox
smengcl commented on issue #511: HDDS-2665. Merge master to HDDS-2665-ofs branch
URL: https://github.com/apache/hadoop-ozone/pull/511#issuecomment-580914336
 
 
   Just did a `git merge master` again to include `HDDS-2833. Enable 
integrations tests for github actions`, which I think is good to have.





[jira] [Resolved] (HDDS-2840) Implement ofs://: mkdir

2020-01-31 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDDS-2840.
--
Resolution: Fixed

> Implement ofs://: mkdir
> ---
>
> Key: HDDS-2840
> URL: https://issues.apache.org/jira/browse/HDDS-2840
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A sub-task in HDDS-2665 to lay the foundation and make mkdir work in the new 
> filesystem.






[jira] [Updated] (HDDS-2962) Handle replay of OM Prefix ACL requests

2020-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2962:
-
Labels: pull-request-available  (was: )

> Handle replay of OM Prefix ACL requests
> ---
>
> Key: HDDS-2962
> URL: https://issues.apache.org/jira/browse/HDDS-2962
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> To ensure that Prefix acl operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMPrefixAclRequests (Add, Remove and Set ACL requests) are made idempotent in 
> this Jira.
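The idempotence rule described above can be sketched as a simple comparison; this is a hedged illustration with simplified names modeled on the Jira text, not the exact OzoneManager API.

```java
// Minimal sketch of the replay guard described above. The method name and
// parameters mirror the Jira description, not the actual OM request classes.
public class PrefixAclReplayCheck {

    // A transaction is a replay if its log index is not newer than the
    // updateID already recorded on the object it modifies.
    public static boolean isReplay(long updateID, long transactionLogIndex) {
        return transactionLogIndex <= updateID;
    }

    public static void main(String[] args) {
        // Object last modified by transaction 10:
        System.out.println(isReplay(10, 10)); // true  -> replay, skip it
        System.out.println(isReplay(10, 11)); // false -> new transaction, apply
    }
}
```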






[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #513: HDDS-2962. Handle replay of OM Prefix ACL requests

2020-01-31 Thread GitBox
hanishakoneru opened a new pull request #513: HDDS-2962. Handle replay of OM 
Prefix ACL requests
URL: https://github.com/apache/hadoop-ozone/pull/513
 
 
   ## What changes were proposed in this pull request?
   
   To ensure that Prefix acl operations are idempotent, compare the 
transactionID with the objectID and updateID to make sure that the transaction 
is not a replay. If the transactionID <= updateID, then it implies that the 
transaction is a replay and hence it should be skipped.
   
   OMPrefixAclRequests (Add, Remove and Set ACL requests) are made idempotent 
in this Jira.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2962
   
   ## How was this patch tested?
   
   Unit tests added.
   





[jira] [Created] (HDDS-2968) Check Debug Log is enabled before constructing log message

2020-01-31 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2968:


 Summary: Check Debug Log is enabled before constructing log message
 Key: HDDS-2968
 URL: https://issues.apache.org/jira/browse/HDDS-2968
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru


Before constructing a debug log message, we should check that debug logging is 
enabled, to avoid evaluating log statements when they are not required.
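The pattern being proposed can be sketched as follows. The stand-in logger below is hypothetical and self-contained, only illustrating why the guard avoids the cost of building the message; in the OM code itself, SLF4J's `LOG.isDebugEnabled()` plays this role.

```java
// Self-contained sketch of the guard: "debugEnabled" stands in for
// LOG.isDebugEnabled(), and expensiveMessage() for costly string building.
public class DebugLogGuard {
    static boolean debugEnabled = false;
    static int expensiveCalls = 0;

    // Simulates an expensive message construction, e.g. serializing a request.
    static String expensiveMessage() {
        expensiveCalls++;
        return "detailed request dump";
    }

    static void logDebugGuarded() {
        // With the guard, the message is only built when debug logging is on.
        if (debugEnabled) {
            System.out.println("DEBUG: " + expensiveMessage());
        }
    }

    public static void main(String[] args) {
        logDebugGuarded();
        // Debug is off, so the expensive construction never ran.
        System.out.println(expensiveCalls); // prints 0
    }
}
```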






[jira] [Created] (HDDS-2967) Add Transaction Log Index to OM Audit Logs

2020-01-31 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2967:


 Summary: Add Transaction Log Index to OM Audit Logs
 Key: HDDS-2967
 URL: https://issues.apache.org/jira/browse/HDDS-2967
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru


When Ratis is enabled for OM, it is possible that transactions will be 
replayed. Though most replays would be caught and ignored, there are some 
scenarios in which a transaction could still be applied again.

For example, Key1 is created and deleted. If the Key1 create transaction is 
replayed, there is no way to determine that it is a replay, as Key1 would no 
longer exist in the DB. Replaying this transaction would be ok, as the Key1 
delete transaction would also be replayed.

Though in such scenarios replaying transactions is ok, the audit log would also 
have a duplicate entry corresponding to the replayed transaction. To 
distinguish duplicate transactions, we propose adding the transactionLogIndex 
to the audit log for OM operations.






[jira] [Created] (HDDS-2966) ACL checks should be done after acquiring lock

2020-01-31 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2966:


 Summary: ACL checks should be done after acquiring lock
 Key: HDDS-2966
 URL: https://issues.apache.org/jira/browse/HDDS-2966
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: HA
Reporter: Hanisha Koneru


Currently in OMClientRequests#validateAndUpdateCache, we perform ACL checks 
before acquiring the required object lock. This could lead to a race 
condition. The ACL check should be done after acquiring the lock.
For example, in OMKeyCreateRequest:

{code:java}
  // check Acl
  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
  IAccessAuthorizer.ACLType.CREATE, OzoneObj.ResourceType.KEY);

  acquireLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
  volumeName, bucketName);
{code}
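A hedged sketch of the proposed ordering, using java.util.concurrent locks as a stand-in for the OM lock manager; the class and field names below are illustrative, not the actual OMKeyCreateRequest code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: the lock is taken first, and the ACL is evaluated
// while holding it, so the ACL cannot change between the check and the update.
public class AclAfterLock {
    static final ReentrantReadWriteLock bucketLock =
        new ReentrantReadWriteLock();
    static volatile boolean aclAllowsCreate = true;

    static boolean createKey() {
        bucketLock.writeLock().lock();
        try {
            // ACL check happens under the lock, closing the race window.
            if (!aclAllowsCreate) {
                return false; // permission denied
            }
            // ... mutate key metadata while still holding the lock ...
            return true;
        } finally {
            bucketLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(createKey()); // true
    }
}
```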







[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #510: HDDS-2958. Handle replay of OM Volume ACL requests

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #510: HDDS-2958. Handle 
replay of OM Volume ACL requests
URL: https://github.com/apache/hadoop-ozone/pull/510#discussion_r373608423
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
 ##
 @@ -73,53 +74,71 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
 OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
 boolean lockAcquired = false;
+Result result = null;
 try {
   // check Acl
   if (ozoneManager.getAclsEnabled()) {
 checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
 OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
 volume, null, null);
   }
-  lockAcquired =
-  omMetadataManager.getLock().acquireWriteLock(VOLUME_LOCK, volume);
+  lockAcquired = omMetadataManager.getLock().acquireWriteLock(
+  VOLUME_LOCK, volume);
   String dbVolumeKey = omMetadataManager.getVolumeKey(volume);
   omVolumeArgs = omMetadataManager.getVolumeTable().get(dbVolumeKey);
   if (omVolumeArgs == null) {
 throw new OMException(OMException.ResultCodes.VOLUME_NOT_FOUND);
   }
 
+  // Check if this transaction is a replay of ratis logs.
+  // If this is a replay, then the response has already been returned to
+  // the client. So take no further action and return a dummy
+  // OMClientResponse.
+  if (isReplay(ozoneManager, omVolumeArgs.getUpdateID(),
+  trxnLogIndex)) {
+throw new OMReplayException();
+  }
+
   // result is false upon add existing acl or remove non-existing acl
-  boolean result = true;
+  boolean applyAcl = true;
   try {
 omVolumeAclOp.apply(ozoneAcls, omVolumeArgs);
   } catch (OMException ex) {
-result = false;
+applyAcl = false;
   }
 
-  if (result) {
+  if (applyAcl) {
+omVolumeArgs.setUpdateID(trxnLogIndex);
 
 Review comment:
   We do not update the DB in case applyAcl is false, so this would add an 
unnecessary DB op. The tradeoff is that a replayed transaction would also 
perform the check and reach the same decision (applyAcl false). So I think it 
should be ok here. Thoughts?





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #510: HDDS-2958. Handle replay of OM Volume ACL requests

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #510: HDDS-2958. Handle 
replay of OM Volume ACL requests
URL: https://github.com/apache/hadoop-ozone/pull/510#discussion_r373607488
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
 ##
 @@ -80,29 +80,36 @@ public String getVolumeName() {
 
   @Override
   OMClientResponse onSuccess(OMResponse.Builder omResponse,
-  OmVolumeArgs omVolumeArgs, boolean result){
+  OmVolumeArgs omVolumeArgs, boolean aclApplied){
 omResponse.setSetAclResponse(OzoneManagerProtocolProtos.SetAclResponse
-.newBuilder().setResponse(result).build());
-return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+.newBuilder().setResponse(aclApplied).build());
+return new OMVolumeAclOpResponse(omResponse.build(), omVolumeArgs);
   }
 
   @Override
   OMClientResponse onFailure(OMResponse.Builder omResponse,
   IOException ex) {
-return new OMVolumeAclOpResponse(null,
-createErrorOMResponse(omResponse, ex));
+return new OMVolumeAclOpResponse(createErrorOMResponse(omResponse, ex));
   }
 
   @Override
-  void onComplete(IOException ex) {
-if (ex == null) {
-  if (LOG.isDebugEnabled()) {
-LOG.debug("Set acls: {} to volume: {} success!",
-getAcls(), getVolumeName());
-  }
-} else {
-  LOG.error("Set acls {} to volume {} failed!",
-  getAcls(), getVolumeName(), ex);
+  void onComplete(Result result, IOException ex, long trxnLogIndex) {
 
 Review comment:
   We are using it to log the exception in case of FAILURE.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle 
Replay of AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505#discussion_r373606672
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
 ##
 @@ -160,71 +160,98 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 Map auditMap = buildKeyArgsAuditMap(keyArgs);
 auditMap.put(OzoneConsts.CLIENT_ID, String.valueOf(clientID));
 
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+String openKeyName = omMetadataManager.getOpenKey(volumeName, bucketName,
+keyName, clientID);
+
 OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
 OzoneManagerProtocolProtos.Type.AllocateBlock).setStatus(
 OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+OMClientResponse omClientResponse = null;
 
+OmKeyInfo openKeyInfo = null;
 IOException exception = null;
-OmKeyInfo omKeyInfo = null;
+
 try {
   // check Acl
   checkKeyAclsInOpenKeyTable(ozoneManager, volumeName, bucketName, keyName,
   IAccessAuthorizer.ACLType.WRITE, allocateBlockRequest.getClientID());
 
-  OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
   validateBucketAndVolume(omMetadataManager, volumeName,
   bucketName);
 
-  String openKey = omMetadataManager.getOpenKey(
-  volumeName, bucketName, keyName, clientID);
-
   // Here we don't acquire bucket/volume lock because for a single client
   // allocateBlock is called in serial fashion.
 
-  omKeyInfo = omMetadataManager.getOpenKeyTable().get(openKey);
-  if (omKeyInfo == null) {
-throw new OMException("Open Key not found " + openKey, KEY_NOT_FOUND);
+  openKeyInfo = omMetadataManager.getOpenKeyTable().get(openKeyName);
+  if (openKeyInfo == null) {
+// Check if this transaction is a replay of ratis logs.
+// If the Key was already committed and this transaction is being
+// replayed, we should ignore this transaction.
+String ozoneKey = omMetadataManager.getOzoneKey(volumeName,
+bucketName, keyName);
+OmKeyInfo dbKeyInfo = omMetadataManager.getKeyTable().get(ozoneKey);
+if (dbKeyInfo != null) {
+  if (isReplay(ozoneManager, dbKeyInfo.getUpdateID(), trxnLogIndex)) {
+// This transaction is a replay. Send replay response.
+throw new OMReplayException();
+  }
+}
+throw new OMException("Open Key not found " + openKeyName,
+KEY_NOT_FOUND);
+  }
+
+  // Check if this transaction is a replay of ratis logs.
+  // Check the updateID of the openKey to verify that it is not greater
+  // than the current transactionLogIndex
+  if (isReplay(ozoneManager, openKeyInfo.getUpdateID(), trxnLogIndex)) {
+// This transaction is a replay. Send replay response.
+throw new OMReplayException();
   }
 
   // Append new block
-  omKeyInfo.appendNewBlocks(Collections.singletonList(
+  openKeyInfo.appendNewBlocks(Collections.singletonList(
   OmKeyLocationInfo.getFromProtobuf(blockLocation)), false);
 
   // Set modification time.
-  omKeyInfo.setModificationTime(keyArgs.getModificationTime());
+  openKeyInfo.setModificationTime(keyArgs.getModificationTime());
 
   // Set the UpdateID to current transactionLogIndex
-  omKeyInfo.setUpdateID(transactionLogIndex);
+  openKeyInfo.setUpdateID(trxnLogIndex);
 
   // Add to cache.
   omMetadataManager.getOpenKeyTable().addCacheEntry(
-  new CacheKey<>(openKey), new CacheValue<>(Optional.of(omKeyInfo),
-  transactionLogIndex));
+  new CacheKey<>(openKeyName),
+  new CacheValue<>(Optional.of(openKeyInfo), trxnLogIndex));
 
+  omResponse.setAllocateBlockResponse(AllocateBlockResponse.newBuilder()
+  .setKeyLocation(blockLocation).build());
+  omClientResponse = new OMAllocateBlockResponse(omResponse.build(),
+  openKeyInfo, clientID);
+  LOG.debug("Allocated block for Volume:{}, Bucket:{}, OpenKey:{}",
+  volumeName, bucketName, openKeyName);
 } catch (IOException ex) {
-  exception = ex;
+  if (ex instanceof OMReplayException) {
+omClientResponse = new OMAllocateBlockResponse(createReplayOMResponse(
+omResponse));
+LOG.debug("Replayed Transaction {} ignored. Request: {}", trxnLogIndex,
+allocateBlockRequest);
+  } else {
+omMetrics.incNumBlockAllocateCallFails();
+exception = ex;
+omClientResponse = new OMAllocateBlockResponse(createErrorOMResponse(
+omResponse, exception));
+LOG.error("Allocate Block failed. Volume:{}, Bucket:{}, OpenKey:{}. " +
+"Exception:{}", volumeName, 

[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle 
Replay of AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505#discussion_r373605058
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
 ##
 @@ -134,8 +135,7 @@ public OMRequest preExecute(OzoneManager ozoneManager) 
throws IOException {
 
   @Override
   public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
-  long transactionLogIndex,
-  OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper) {
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
 
 Review comment:
   Sure, I will avoid it in the future. The variable names were so long that 
they caused an unnecessary increase in the lines of code. I was trying to club 
code cleanup along with these changes, but I will reserve it for a separate 
Jira in the future.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #505: HDDS-2956. Handle 
Replay of AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505#discussion_r373604140
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
 ##
 @@ -160,71 +160,98 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 Map auditMap = buildKeyArgsAuditMap(keyArgs);
 auditMap.put(OzoneConsts.CLIENT_ID, String.valueOf(clientID));
 
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+String openKeyName = omMetadataManager.getOpenKey(volumeName, bucketName,
+keyName, clientID);
+
 OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
 OzoneManagerProtocolProtos.Type.AllocateBlock).setStatus(
 OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+OMClientResponse omClientResponse = null;
 
+OmKeyInfo openKeyInfo = null;
 IOException exception = null;
-OmKeyInfo omKeyInfo = null;
+
 try {
   // check Acl
   checkKeyAclsInOpenKeyTable(ozoneManager, volumeName, bucketName, keyName,
   IAccessAuthorizer.ACLType.WRITE, allocateBlockRequest.getClientID());
 
-  OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
   validateBucketAndVolume(omMetadataManager, volumeName,
   bucketName);
 
-  String openKey = omMetadataManager.getOpenKey(
-  volumeName, bucketName, keyName, clientID);
-
   // Here we don't acquire bucket/volume lock because for a single client
   // allocateBlock is called in serial fashion.
 
-  omKeyInfo = omMetadataManager.getOpenKeyTable().get(openKey);
-  if (omKeyInfo == null) {
-throw new OMException("Open Key not found " + openKey, KEY_NOT_FOUND);
+  openKeyInfo = omMetadataManager.getOpenKeyTable().get(openKeyName);
+  if (openKeyInfo == null) {
+// Check if this transaction is a replay of ratis logs.
+// If the Key was already committed and this transaction is being
+// replayed, we should ignore this transaction.
+String ozoneKey = omMetadataManager.getOzoneKey(volumeName,
 
 Review comment:
   My intention was to capture the maximum number of cases. If the key is 
deleted, we cannot help it. And this does not affect the regular case when the 
openKey exists, so it is not a performance concern.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #501: HDDS-2944. Handle replay of KeyCommitRequest and DirectoryCreateRequest

2020-01-31 Thread GitBox
hanishakoneru commented on a change in pull request #501: HDDS-2944. Handle 
replay of KeyCommitRequest and DirectoryCreateRequest
URL: https://github.com/apache/hadoop-ozone/pull/501#discussion_r373602890
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
 ##
 @@ -126,16 +143,17 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 boolean acquiredLock = false;
 IOException exception = null;
 OMClientResponse omClientResponse = null;
+Result result = null;
 try {
   // check Acl
   checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
   IAccessAuthorizer.ACLType.CREATE, OzoneObj.ResourceType.KEY);
 
   // Check if this is the root of the filesystem.
   if (keyName.length() == 0) {
-return new OMDirectoryCreateResponse(null,
-omResponse.setCreateDirectoryResponse(
-CreateDirectoryResponse.newBuilder()).build());
+throw new OMException("Directory create failed. Cannot create " +
 
 Review comment:
   I thought the previous response was wrong. We should return the correct 
error code, right?





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #450: HDDS-2893. Handle replay of KeyPurge Request.

2020-01-31 Thread GitBox
hanishakoneru commented on issue #450: HDDS-2893. Handle replay of KeyPurge 
Request.
URL: https://github.com/apache/hadoop-ozone/pull/450#issuecomment-580836450
 
 
   Thanks for the review @bharatviswa504. I addressed your comments and fixed 
the CI issues. Can you please take a look?





[jira] [Created] (HDDS-2965) Fix TestNodeFailure.java

2020-01-31 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-2965:
---

 Summary: Fix TestNodeFailure.java
 Key: HDDS-2965
 URL: https://issues.apache.org/jira/browse/HDDS-2965
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: test
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Enable and fix TestNodeFailure.java in the integration test runs.






[jira] [Commented] (HDDS-2964) Fix @Ignore-d integration tests

2020-01-31 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027619#comment-17027619
 ] 

Attila Doroszlai commented on HDDS-2964:


TestCloseContainerHandlingByClient is being enabled in 
https://github.com/apache/hadoop-ozone/pull/507

> Fix @Ignore-d integration tests
> ---
>
> Key: HDDS-2964
> URL: https://issues.apache.org/jira/browse/HDDS-2964
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Marton Elek
>Priority: Major
>
> We marked all the intermittent unit tests with @Ignore to get reliable 
> feedback from CI builds.
> Before HDDS-2833 we had 21 @Ignore annotations; HDDS-2833 introduced 34 new 
> ones.
> We need to review all of these tests and either fix, delete, or convert 
> them to real unit tests.
> The current list of ignored tests:
> {code:java}
> hadoop-hdds/server-scm 
> org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java:  @Ignore
> hadoop-hdds/server-scm 
> org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java:  @Ignore("Tracked 
> by HDDS-2508.")
> hadoop-hdds/server-scm 
> org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java:  @Ignore
> hadoop-hdds/server-scm 
> org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java:  @Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java:
>   @Ignore("TODO:HDDS-1159")
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java:  @Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/hdds/scm/safemode/TestSCMSafeModeWithPipelineRules.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/Test2WayCommitInRatis.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java:  
> @Ignore // test needs to be fixed after close container is handled for
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestCommitWatcher.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestMultiBlockWritesWithDnFailures.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java:  @Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java:  
> @Ignore("Debug Jenkins Timeout")
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java:@Ignore("Fix
>  this after adding audit support for HA Acl code. This will be " +
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestSecureOzoneRpcClient.java:  
> @Ignore("Needs to be moved out of this class as  client setup is static")
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java:@Ignore
> hadoop-ozone/integration-test 
> org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java:@Ignore
> hadoop-ozone/integration-test 
> 

[jira] [Created] (HDDS-2964) Fix @Ignore-d integration tests

2020-01-31 Thread Marton Elek (Jira)
Marton Elek created HDDS-2964:
-

 Summary: Fix @Ignore-d integration tests
 Key: HDDS-2964
 URL: https://issues.apache.org/jira/browse/HDDS-2964
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Marton Elek


We marked all the intermittent unit tests with @Ignore to get reliable feedback 
from CI builds.

Before HDDS-2833 we had 21 @Ignore annotations; HDDS-2833 introduced 34 new ones.

We need to review all of these tests and either fix, delete, or convert them 
to real unit tests.

The current list of ignored tests:
{code:java}
hadoop-hdds/server-scm 
org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java:  @Ignore
hadoop-hdds/server-scm 
org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java:  @Ignore("Tracked by 
HDDS-2508.")
hadoop-hdds/server-scm org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java: 
 @Ignore
hadoop-hdds/server-scm org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java: 
 @Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/container/TestContainerStateManagerIntegration.java: 
 @Ignore("TODO:HDDS-1159")
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java:  @Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/hdds/scm/safemode/TestSCMSafeModeWithPipelineRules.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/Test2WayCommitInRatis.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java:  
@Ignore // test needs to be fixed after close container is handled for
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestCommitWatcher.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestMultiBlockWritesWithDnFailures.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java:  @Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java:  
@Ignore("Debug Jenkins Timeout")
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java:@Ignore("Fix
 this after adding audit support for HA Acl code. This will be " +
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestSecureOzoneRpcClient.java:  
@Ignore("Needs to be moved out of this class as  client setup is static")
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java:@Ignore
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerRatis.java:@Ignore("Disabling
 Ratis tests for pipeline work.")
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerWithTLS.java:@Ignore("TODO:HDDS-1157")
hadoop-ozone/integration-test 
org/apache/hadoop/ozone/container/ozoneimpl/TestRatisManager.java:@Ignore("Disabling
 Ratis tests for pipeline work.")

[GitHub] [hadoop-ozone] bshashikant opened a new pull request #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant opened a new pull request #507: HDDS-2936. Hive queries fail at 
readFully
URL: https://github.com/apache/hadoop-ozone/pull/507
 
 
   ## What changes were proposed in this pull request?
   
   It fixes an issue in the retry path of the Ozone client where the length of data written was updated incorrectly during writes in KeyOutputStream.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2936
   
   
   ## How was this patch tested?
   The existing test "TestCloseContainerHandlingByClient" was failing because of this issue; these tests now execute successfully with the fix. The patch was also tested in a real deployment where a Hive workload was run, and all Hive queries now succeed.
   





[GitHub] [hadoop-ozone] bshashikant closed pull request #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant closed pull request #507: HDDS-2936. Hive queries fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507
 
 
   





[jira] [Assigned] (HDDS-2942) Putkey : create key table entries for intermediate directories in the key path

2020-01-31 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2942:
--

Assignee: Supratim Deka  (was: YiSheng Lien)

> Putkey : create key table entries for intermediate directories in the key path
> --
>
> Key: HDDS-2942
> URL: https://issues.apache.org/jira/browse/HDDS-2942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Using the path delimiter ('/'), parse the key as an FS file path, then create
> entries in the OM key table for every directory element occurring in the path.
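The parsing described above can be sketched as follows. This is a minimal hypothetical helper (the class and method names are illustrative, not the actual OM code) that derives the intermediate-directory entries from a key name:

```java
import java.util.ArrayList;
import java.util.List;

public class KeyPathParser {
    // Hypothetical helper: given an Ozone key name, return a key-table
    // entry name for every intermediate directory in the path, using
    // '/' as the delimiter.
    public static List<String> intermediateDirs(String keyName) {
        List<String> dirs = new ArrayList<>();
        int idx = keyName.indexOf('/');
        while (idx != -1) {
            // directory entries keep the trailing '/', e.g. "a/", "a/b/"
            dirs.add(keyName.substring(0, idx + 1));
            idx = keyName.indexOf('/', idx + 1);
        }
        return dirs;
    }

    public static void main(String[] args) {
        System.out.println(intermediateDirs("dir1/dir2/file.txt"));
        // prints: [dir1/, dir1/dir2/]
    }
}
```

A key with no delimiter (a file at the bucket root) yields no intermediate entries, which matches the intent of only creating entries for directory elements occurring in the path.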






[GitHub] [hadoop-ozone] nandakumar131 commented on issue #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-31 Thread GitBox
nandakumar131 commented on issue #503: HDDS-2850. Handle Create container use 
case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#issuecomment-580688212
 
 
   @adoroszlai  Thanks for the review. I will take a look at the changes and 
merge it.





[GitHub] [hadoop-ozone] adoroszlai commented on issue #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-31 Thread GitBox
adoroszlai commented on issue #503: HDDS-2850. Handle Create container use case 
in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#issuecomment-580685993
 
 
   Thanks @avijayanhwx for updating the patch.
   
   @nandakumar131 would you like to review this or can I merge it?





[GitHub] [hadoop-ozone] adoroszlai commented on issue #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
adoroszlai commented on issue #507: HDDS-2936. Hive queries fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#issuecomment-580685291
 
 
   > I don't see the test is ignored in my branch
   
   @bshashikant It's ignored in `master`, so a merge from `master` is needed, 
then it can be unignored.





[GitHub] [hadoop-ozone] adoroszlai edited a comment on issue #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
adoroszlai edited a comment on issue #507: HDDS-2936. Hive queries fail at 
readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#issuecomment-580662088
 
 
   > The issue can be reproduced with 
TestCloseContainerHandlingByClient#testBlockWriteViaRatis.
   
   Thanks @bshashikant for pointing this out.  I can confirm that the test is 
fixed by this patch.
   
   Now that integration tests are enabled, can you please go ahead and remove 
`@Ignore` from `TestCloseContainerHandlingByClient`?
   
   Thanks @adoroszlai. I don't see the test ignored in my branch, apart from one test which needs to be looked at.





[GitHub] [hadoop-ozone] adoroszlai commented on issue #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
adoroszlai commented on issue #507: HDDS-2936. Hive queries fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#issuecomment-580662088
 
 
   > The issue can be reproduced with 
TestCloseContainerHandlingByClient#testBlockWriteViaRatis.
   
   Thanks @bshashikant for pointing this out.  I can confirm that the test is 
fixed by this patch.
   
   Now that integration tests are enabled, can you please go ahead and remove 
`@Ignore` from `TestCloseContainerHandlingByClient`?





[GitHub] [hadoop-ozone] adoroszlai commented on issue #416: HDDS-2833. Enable integration tests for github actions

2020-01-31 Thread GitBox
adoroszlai commented on issue #416: HDDS-2833. Enable integration tests for 
github actions
URL: https://github.com/apache/hadoop-ozone/pull/416#issuecomment-580656003
 
 
   Thanks @elek for reviewing and committing it.





[GitHub] [hadoop-ozone] elek closed pull request #416: HDDS-2833. Enable integration tests for github actions

2020-01-31 Thread GitBox
elek closed pull request #416: HDDS-2833. Enable integration tests for github 
actions
URL: https://github.com/apache/hadoop-ozone/pull/416
 
 
   





[jira] [Updated] (HDDS-2833) Enable integrations tests for github actions

2020-01-31 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-2833:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Enable integrations tests for github actions
> 
>
> Key: HDDS-2833
> URL: https://issues.apache.org/jira/browse/HDDS-2833
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When we switched to using GitHub Actions, the integration tests were disabled
> due to their flakiness.
> We should disable all the flaky tests and enable the remaining integration
> tests.






[GitHub] [hadoop-ozone] bshashikant commented on issue #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant commented on issue #507: HDDS-2936. Hive queries fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#issuecomment-580648345
 
 
   > @bshashikant , which unit test can reproduce this issue?
   
   The issue can be reproduced with 
TestCloseContainerHandlingByClient#testBlockWriteViaRatis.





[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant commented on a change in pull request #507: HDDS-2936. Hive queries 
fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#discussion_r373376009
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
 ##
 @@ -356,6 +370,7 @@ private void handleRetry(IOException exception, long len) 
throws IOException {
 msg = "Retry request failed. " + action.reason;
 LOG.error(msg, exception);
   }
+  isException = true;
   throw new IOException(msg, exception);
 
 Review comment:
   Addressed in the latest patch..





[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant commented on a change in pull request #507: HDDS-2936. Hive queries 
fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#discussion_r373375822
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
 ##
 @@ -484,6 +499,9 @@ public void close() throws IOException {
 closed = true;
 try {
   handleFlushOrClose(StreamAction.CLOSE);
+  if (!isException) {
+Preconditions.checkArgument(writeOffset == offset);
 
 Review comment:
   I would prefer to keep the precondition check intact, as it's just in place to validate the logic. If it's not met, the code is buggy. Also, throwing a RuntimeException is okay, as this exception does not need to be caught and handled anywhere in general.
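The pattern under discussion can be sketched as follows: an exception flag set on the failure path so that a close-time sanity check on the write bookkeeping only runs on the clean path. This is a minimal hypothetical sketch; the field names echo the review thread but this is not the actual KeyOutputStream code:

```java
public class OffsetTrackingStream {
    private long writeOffset = 0;  // bytes the caller handed to write()
    private long offset = 0;       // bytes successfully flushed downstream
    private boolean isException = false;
    private boolean closed = false;

    public void write(byte[] data) {
        writeOffset += data.length;
        try {
            flush(data);            // may fail, e.g. on the retry path
            offset += data.length;  // only advanced on success
        } catch (RuntimeException e) {
            isException = true;     // remember the failure so close() skips the check
            throw e;
        }
    }

    // stand-in for the real downstream flush; always succeeds in this sketch
    private void flush(byte[] data) {
    }

    public long pendingBytes() {
        return writeOffset - offset;
    }

    public void close() {
        if (closed) {
            return;
        }
        closed = true;
        // validate the bookkeeping only on the clean path; after an
        // exception the two counters may legitimately diverge
        if (!isException && writeOffset != offset) {
            throw new IllegalStateException(
                "offset mismatch: " + writeOffset + " != " + offset);
        }
    }

    public static void main(String[] args) {
        OffsetTrackingStream s = new OffsetTrackingStream();
        s.write(new byte[]{1, 2, 3});
        s.close();
        System.out.println("clean close, pending=" + s.pendingBytes());
    }
}
```

Because the check fires as an unchecked exception only when the clean path violates the invariant, callers never need to catch it; a failure here signals a bug in the bookkeeping itself, which is the point made in the review comment.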





[GitHub] [hadoop-ozone] bshashikant commented on issue #507: HDDS-2936. Hive queries fail at readFully

2020-01-31 Thread GitBox
bshashikant commented on issue #507: HDDS-2936. Hive queries fail at readFully
URL: https://github.com/apache/hadoop-ozone/pull/507#issuecomment-580647274
 
 
   Thanks @fapifta and @mukul1987 for the review comments. I have moved all the exception-throwing logic into a common function where the exception state is set.





[GitHub] [hadoop-ozone] iamabug commented on a change in pull request #491: HDDS-2748. interface/OzoneFS.md translation

2020-01-31 Thread GitBox
iamabug commented on a change in pull request #491: HDDS-2748. 
interface/OzoneFS.md translation
URL: https://github.com/apache/hadoop-ozone/pull/491#discussion_r373365646
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.zh.md
 ##
 @@ -0,0 +1,139 @@
+---
+title: Ozone File System
+date: 2017-09-14
+weight: 2
+summary: The Hadoop-compatible file system allows any application that uses an HDFS-like interface to work on Ozone without modification, including frameworks such as Apache Spark, YARN and Hive.
+---
+
+
+The Hadoop-compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem; the Ozone file system is such a Hadoop-compatible file system.
+
+## Setting up the Ozone file system
+
+To create an ozone file system, we first need to choose a bucket to hold its data. This bucket will be used as the backend storage of the Ozone file system, and all files and directories are stored as keys in this bucket.
+
+If you don't already have a volume and bucket available, create them with the following commands:
+
+{{< highlight bash >}}
+ozone sh volume create /volume
+ozone sh bucket create /volume/bucket
+{{< /highlight >}}
+
+Once created, use the _list volume_ or _list bucket_ commands to confirm the bucket exists.
+
+Please add the following entries to core-site.xml:
+
+{{< highlight xml >}}
+<property>
+  <name>fs.o3fs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.OzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.AbstractFileSystem.o3fs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.OzFs</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>o3fs://bucket.volume</value>
+</property>
+{{< /highlight >}}
+
+This makes the specified bucket the default file system for HDFS dfs commands and registers the o3fs file system type.
+
+You also need to add the ozone-filesystem.jar file to the classpath:
+
+{{< highlight bash >}}
+export HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-lib-current*.jar:$HADOOP_CLASSPATH
+{{< /highlight >}}
+
+Once the default file system is configured, users can run commands such as ls, put and mkdir, for example:
+
+{{< highlight bash >}}
+hdfs dfs -ls /
+{{< /highlight >}}
+
+or
+
+{{< highlight bash >}}
+hdfs dfs -mkdir /users
+{{< /highlight >}}
+
+
+or the put command. In other words, all programs like Hive, Spark and Distcp will work on this file system.
+Please note that when keys are created or deleted in this bucket by means other than the Ozone file system, the changes ultimately show up as creations and deletions of directories and files in the Ozone file system.
+
+Note: bucket and volume names may not contain a period.
+Furthermore, the file system URI can consist of the bucket name and volume name followed by the FQDN of the OM host and an optional port; for example, you can specify both the host and the port:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
+{{< /highlight >}}
+
+If the port is not specified, it is looked up from the `ozone.om.address` configuration; if that is not set, the default port `9862` is used. For example, if we configure `ozone.om.address` in `ozone-site.xml` as follows:
+
+{{< highlight xml >}}
+  <property>
+    <name>ozone.om.address</name>
+    <value>0.0.0.0:6789</value>
+  </property>
+{{< /highlight >}}
+
+When we run the following command:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com/key
+{{< /highlight >}}
+
+it is actually equivalent to:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:6789/key
+{{< /highlight >}}
+
+Note: in this case only the port number from the `ozone.om.address` configuration is used; the host name is ignored.
+
+
+## Supporting older Hadoop versions (Legacy jar and BasicOzoneFilesystem)
+
+There are two kinds of Ozone file system jars, both of which bundle all dependencies:
+
+ * share/ozone/lib/hadoop-ozone-filesystem-lib-current-VERSION.jar
+ * share/ozone/lib/hadoop-ozone-filesystem-lib-legacy-VERSION.jar
+
+The first jar contains all the dependencies needed to use the Ozone file system with a compatible hadoop version (hadoop 3.2).
+
+The second jar places every dependency in a separate internal directory, and the classes in that directory are loaded by a special class loader. This way, older hadoop versions can use hadoop-ozone-filesystem-lib-legacy.jar (for example hadoop 3.1, hadoop 2.7 or spark+hadoop 2.7).
+
+Similar to the dependency jars, there are two implementations of OzoneFileSystem.
+
+For Hadoop 3.0 and later you should use `org.apache.hadoop.fs.ozone.OzoneFileSystem`, a full implementation compatible with the Hadoop file system API.
+
+For Hadoop 2.x you should use the basic version, `org.apache.hadoop.fs.ozone.BasicOzoneFileSystem`. The two implementations are largely the same, but the basic one does not include the features and dependencies introduced in Hadoop 3.0 (such as FS statistics, encryption zones, etc.).
 
 Review comment:
   sounds about right.





[GitHub] [hadoop-ozone] cxorm edited a comment on issue #164: HDDS-426. Add field modificationTime for Volume and Bucket

2020-01-31 Thread GitBox
cxorm edited a comment on issue #164: HDDS-426. Add field modificationTime for 
Volume and Bucket
URL: https://github.com/apache/hadoop-ozone/pull/164#issuecomment-580410022
 
 
   The description of this PR was updated, and execution snapshots were uploaded in 
[JIRA](https://issues.apache.org/jira/browse/HDDS-426).
   
   @bharatviswa504, @anuengineer could you help review this PR if you have time?

