lordgamez commented on code in PR #1586:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1586#discussion_r1231548116


##########
extensions/aws/processors/PutS3Object.cpp:
##########
@@ -77,7 +78,31 @@ void PutS3Object::onSchedule(const std::shared_ptr<core::ProcessContext> &contex
     use_virtual_addressing_ = !*use_path_style_access;
   }
 
+  context->getProperty(MultipartThreshold.getName(), multipart_threshold_);
+  if (multipart_threshold_ > getMaxUploadSize() || multipart_threshold_ < getMinPartSize()) {
+    throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Multipart Threshold is not between the valid 5MB and 5GB range!");
+  }
+  logger_->log_debug("PutS3Object: Multipart Threshold %" PRIu64, multipart_threshold_);
+  context->getProperty(MultipartPartSize.getName(), multipart_size_);
+  if (multipart_size_ > getMaxUploadSize() || multipart_size_ < getMinPartSize()) {
+    throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Multipart Part Size is not between the valid 5MB and 5GB range!");
+  }
+  logger_->log_debug("PutS3Object: Multipart Size %" PRIu64, multipart_size_);
+
+
+  multipart_upload_ageoff_interval_ = minifi::utils::getRequiredPropertyOrThrow<core::TimePeriodValue>(*context, MultipartUploadAgeOffInterval.getName()).getMilliseconds();
+  logger_->log_debug("PutS3Object: Multipart Upload Ageoff Interval %" PRIu64 " ms", multipart_upload_ageoff_interval_.count());
+
+  multipart_upload_max_age_threshold_ = minifi::utils::getRequiredPropertyOrThrow<core::TimePeriodValue>(*context, MultipartUploadMaxAgeThreshold.getName()).getMilliseconds();
+  logger_->log_debug("PutS3Object: Multipart Upload Max Age Threshold %" PRIu64 " ms", multipart_upload_max_age_threshold_.count());
+
   fillUserMetadata(context);
+
+  std::string multipart_temp_dir;
+  context->getProperty(TemporaryDirectoryMultipartState.getName(), multipart_temp_dir);
+
+
+  s3_wrapper_.initailizeMultipartUploadStateStorage(multipart_temp_dir, getUUIDStr());

Review Comment:
   I first implemented this with the state manager, but it had a problem: the state manager only persists its state at session commit, so if the process is killed, or the upload fails and the session is rolled back, the state recorded between part uploads is lost. Even when I tried committing the state manager manually, it caused an exception at the end of the trigger, because the session could neither commit nor (on failure) roll back an already committed session. NiFi may have run into the same issue, since it also uses a separate temporary directory for this state management.
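   The approach described above, writing multipart state to a dedicated temporary directory eagerly and independently of session commit, could be sketched roughly like this. The class and method names here are hypothetical illustrations, not the actual minifi-cpp API:

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <string>
#include <utility>

// Hypothetical sketch: persist multipart upload progress to a file in a
// dedicated temporary directory immediately after each part upload, so the
// state survives a process kill or a session rollback.
class MultipartUploadStateStorage {
 public:
  explicit MultipartUploadStateStorage(std::filesystem::path dir) : dir_(std::move(dir)) {
    std::filesystem::create_directories(dir_);
  }

  // Append the part record eagerly; no dependency on a later session commit.
  void storeUploadedPart(const std::string& upload_id, uint64_t part_number, const std::string& etag) {
    std::ofstream out(stateFile(upload_id), std::ios::app);
    out << part_number << ' ' << etag << '\n';
  }

  // True if a previous (possibly interrupted) upload left state behind.
  bool hasState(const std::string& upload_id) const {
    return std::filesystem::exists(stateFile(upload_id));
  }

  // Called once the multipart upload completes (or is aged off).
  void removeState(const std::string& upload_id) {
    std::filesystem::remove(stateFile(upload_id));
  }

 private:
  std::filesystem::path stateFile(const std::string& upload_id) const {
    return dir_ / (upload_id + ".parts");
  }

  std::filesystem::path dir_;
};
```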



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
