bharatviswa504 commented on a change in pull request #369: HDDS-2755. Compare transactionID and updateID of Volume operations to avoid replaying transactions
URL: https://github.com/apache/hadoop-ozone/pull/369#discussion_r358981064
 ##########
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
 ##########
 @@ -160,9 +164,19 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
            omResponse.build());
        LOG.debug("volume:{} successfully created", omVolumeArgs.getVolume());
      } else {
 -      LOG.debug("volume:{} already exists", omVolumeArgs.getVolume());
 -      throw new OMException("Volume already exists",
 -          OMException.ResultCodes.VOLUME_ALREADY_EXISTS);
 +      // Check if this transaction is a replay of ratis logs.
 +      if (isReplay(dbVolumeArgs.getUpdateID(), transactionLogIndex)) {
 +        // Replay implies the response has already been returned to
 +        // the client. So take no further action and return a dummy
 +        // OMClientResponse.
 +        LOG.debug("Replayed Transaction {} ignored. Request: {}",
 +            transactionLogIndex, createVolumeRequest);
 +        return new OMVolumeCreateResponse(createReplayOMResponse(omResponse));
 +      } else {
 +        LOG.debug("volume:{} already exists", omVolumeArgs.getVolume());
 +        throw new OMException("Volume already exists",
 +            OMException.ResultCodes.VOLUME_ALREADY_EXISTS);
 +      }

Review comment:
   Below, when an exception is caught, we can use the new OMVolumeCreateResponse constructor intended for error responses.
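   For context, a minimal sketch of how the catch block later in validateAndUpdateCache might apply that suggestion. This is illustrative only: the single-argument OMVolumeCreateResponse constructor and the createErrorOMResponse helper shown here are assumptions about the patch, not code taken from it.

       } catch (IOException ex) {
         exception = ex;
         // Hypothetical usage: build the error OMResponse once and wrap it
         // in the error-only response constructor, rather than calling the
         // full constructor with placeholder volume arguments.
         omClientResponse = new OMVolumeCreateResponse(
             createErrorOMResponse(omResponse, exception));
       }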