[ https://issues.apache.org/jira/browse/HDDS-1379?focusedWorklogId=222695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-222695 ]

ASF GitHub Bot logged work on HDDS-1379:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Apr/19 23:32
            Start Date: 03/Apr/19 23:32
    Worklog Time Spent: 10m 
      Work Description: arp7 commented on pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271971268
 
 

 ##########
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##########
 @@ -322,28 +409,56 @@ public void deleteVolume(String volume) throws IOException {
       Preconditions.checkState(volume.equals(volumeArgs.getVolume()));
       // delete the volume from the owner list
       // as well as delete the volume entry
-      try (BatchOperation batch = metadataManager.getStore()
-          .initBatchOperation()) {
-        delVolumeFromOwnerList(volume, volumeArgs.getOwnerName(), batch);
-        metadataManager.getVolumeTable().deleteWithBatch(batch, dbVolumeKey);
-        metadataManager.getStore().commitBatchOperation(batch);
+      VolumeList newVolumeList = delVolumeFromOwnerList(volume,
+          volumeArgs.getOwnerName());
+
+      if (!isRatisEnabled) {
+        deleteVolumeCommitToDB(newVolumeList,
+            volume, owner);
       }
-    } catch (RocksDBException| IOException ex) {
+      return new OmDeleteVolumeResponse(volume, owner, newVolumeList);
+    } catch (IOException ex) {
       if (!(ex instanceof OMException)) {
         LOG.error("Delete volume failed for volume:{}", volume, ex);
       }
-      if(ex instanceof RocksDBException) {
-        throw RocksDBStore.toIOException("Volume creation failed.",
-            (RocksDBException) ex);
-      } else {
-        throw (IOException) ex;
-      }
+      throw ex;
     } finally {
       metadataManager.getLock().releaseVolumeLock(volume);
       metadataManager.getLock().releaseUserLock(owner);
     }
   }
 
+  @Override
+  public void applyDeleteVolume(String volume, String owner,
+      VolumeList newVolumeList) throws IOException {
+    try {
+      deleteVolumeCommitToDB(newVolumeList, volume, owner);
+    } catch (IOException ex) {
+      LOG.error("Delete volume failed for volume:{}", volume,
+          ex);
+      throw ex;
+    }
+  }
+
+  private void deleteVolumeCommitToDB(VolumeList newVolumeList,
+      String volume, String owner) throws IOException {
+    try (BatchOperation batch = metadataManager.getStore()
+        .initBatchOperation()) {
+      String dbUserKey = metadataManager.getUserKey(owner);
 
 Review comment:
   Same. Can we pass the userKey from start to apply?
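
 A minimal sketch of what "pass the userKey from start to apply" could look
 like, assuming the result object carried from the start phase is extended to
 hold the pre-computed key. The class and method names below are illustrative
 only, not the actual HDDS-1379 patch:

   import java.io.IOException;

   final class DeleteVolumeSketch {

     /** Hypothetical stand-in for OmDeleteVolumeResponse, extended to carry
      *  the user key computed once during the start phase. */
     static final class DeleteVolumeResult {
       final String volume;
       final String owner;
       final String dbUserKey;

       DeleteVolumeResult(String volume, String owner, String dbUserKey) {
         this.volume = volume;
         this.owner = owner;
         this.dbUserKey = dbUserKey;
       }
     }

     /** Start phase: validate and derive keys, but do not write to the DB. */
     DeleteVolumeResult startDeleteVolume(String volume, String owner)
         throws IOException {
       // Stand-in for metadataManager.getUserKey(owner); computed exactly once.
       String dbUserKey = "/" + owner;
       // ... existence / emptiness / ownership checks would go here ...
       return new DeleteVolumeResult(volume, owner, dbUserKey);
     }

     /** Apply phase: commit using the key carried in the result, so the
      *  user key is not re-derived inside the commit helper. */
     void applyDeleteVolume(DeleteVolumeResult result) throws IOException {
       // batch delete of result.dbUserKey entry and the volume entry, then commit.
       System.out.println("Deleting volume " + result.volume
           + " using precomputed user key " + result.dbUserKey);
     }
   }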
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 222695)
    Time Spent: 1.5h  (was: 1h 20m)

> Convert all OM Volume related operations to HA model
> ----------------------------------------------------
>
>                 Key: HDDS-1379
>                 URL: https://issues.apache.org/jira/browse/HDDS-1379
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In this jira, we shall convert all OM Volume related operations to the OM HA 
> model, which is a 2-step process:
>  # StartTransaction, where we validate the request, check for any errors, and 
> return the response.
>  # ApplyTransaction, where the response produced for the original OM request 
> is applied to the OM DB. This step only applies the response to the OM DB.
> In this way, requests that fail validation (for example, volume not found, or 
> an unmet precondition such as a volume having to be empty before it can be 
> deleted) are rejected during startTransaction, and failed requests are never 
> written to the Raft log either.
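
A minimal illustrative sketch of this two-step split, assuming a generic
handler interface (the names TwoStepHandler, startTransaction and
applyTransaction below are placeholders, not the actual OzoneManager API):

  import java.io.IOException;

  interface TwoStepHandler<REQ, RESP> {

    // Step 1: runs before the entry is written to the Raft log. All
    // validation (volume exists, volume is empty, caller is the owner, ...)
    // happens here, so failed requests never reach the log.
    RESP startTransaction(REQ request) throws IOException;

    // Step 2: runs when the Raft entry is applied. It only writes the
    // already-validated response to the OM DB and is not expected to fail
    // on user errors.
    void applyTransaction(RESP response) throws IOException;
  }

In the deleteVolume diff above, deleteVolume(...) plays the role of
startTransaction and applyDeleteVolume(...) plays the role of
applyTransaction, with deleteVolumeCommitToDB(...) doing the actual batch
write in both the non-Ratis and the apply paths.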



