[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1170: MINIFICPP-1618 Create the ReplaceText processor

2021-10-29 Thread GitBox


fgerlits commented on a change in pull request #1170:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1170#discussion_r739078055



##
File path: docker/test/integration/MiNiFi_integration_test_driver.py
##
@@ -104,7 +104,7 @@ def 
generate_input_port_for_remote_process_group(remote_process_group, name):
 return input_port_node
 
 def add_test_data(self, path, test_data, file_name=str(uuid.uuid4())):
-self.docker_directory_bindings.put_file_to_docker_path(self.test_id, path, file_name, test_data.encode('utf-8'))
+self.docker_directory_bindings.put_file_to_docker_path(self.test_id, path, file_name, test_data.replace('\\n', '\n').encode('utf-8'))

Review comment:
   I have separated this change into the first commit 
(1b5937003b4592c119e58ed3d73931b0692517b9), so to merge this PR after #1168, 
whoever is merging just needs to skip 
1b5937003b4592c119e58ed3d73931b0692517b9 when cherry-picking commits.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


lordgamez commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r739136640



##
File path: extensions/standard-processors/processors/DefragmentText.cpp
##
@@ -0,0 +1,337 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragmentText.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "TextFragmentUtils.h"
+#include "utils/gsl.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+const core::Relationship DefragmentText::Success("success", "Flowfiles that 
have no fragmented messages in them");
+const core::Relationship DefragmentText::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+const core::Relationship DefragmentText::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+const core::Property DefragmentText::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->isRequired(true)->build());
+
+const core::Property DefragmentText::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Where to look for the pattern.")

Review comment:
   I'm okay with this.








[GitHub] [nifi-minifi-cpp] fgerlits commented on pull request #1017: MINIFICPP-1515 - Add integration tests testing different flowfile sizes in a simple flow

2021-10-29 Thread GitBox


fgerlits commented on pull request #1017:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1017#issuecomment-954650671


   This test passes for me locally after rebasing on top of main, so I think 
it's ready for merging.
   
   In order not to increase the running time of the CI job by too much, I 
suggest making the timeout configurable and lower, e.g. 1 second for the first 3 
tests and 10 seconds for the last two.






[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1168: MINIFICPP-1632 - Implement RouteText processor

2021-10-29 Thread GitBox


martinzink commented on a change in pull request #1168:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1168#discussion_r739138508



##
File path: libminifi/include/utils/StringUtils.h
##
@@ -146,22 +160,28 @@ class StringUtils {
 
   static std::string& replaceAll(std::string& source_string, const std::string 
&from_string, const std::string &to_string);
 
-  inline static bool endsWithIgnoreCase(const std::string &value, const 
std::string & endString) {
-if (endString.size() > value.size())
+  inline static bool startsWith(const std::string_view& value, const 
std::string_view& start, bool case_sensitive = true) {
+if (start.length() > value.length()) {
   return false;
-return std::equal(endString.rbegin(), endString.rend(), value.rbegin(), 
[](unsigned char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
+}
+if (case_sensitive) {
+  return std::equal(start.begin(), start.end(), value.begin());
+}
+return std::equal(start.begin(), start.end(), value.begin(), [](unsigned 
char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
   }
 
-  inline static bool startsWith(const std::string& value, const std::string& 
start_string) {
-if (start_string.size() > value.size())
+  inline static bool endsWith(const std::string_view& value, const 
std::string_view& end, bool case_sensitive = true) {
+if (end.length() > value.length()) {
   return false;
-return std::equal(start_string.begin(), start_string.end(), value.begin());
+}
+if (case_sensitive) {
+  return std::equal(end.rbegin(), end.rend(), value.rbegin());
+}
+return std::equal(end.rbegin(), end.rend(), value.rbegin(), [](unsigned 
char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
   }
 
-  inline static bool endsWith(const std::string& value, const std::string& 
end_string) {
-if (end_string.size() > value.size())
-  return false;
-return std::equal(end_string.rbegin(), end_string.rend(), value.rbegin());
+  inline static bool endsWithIgnoreCase(const std::string_view& value, const 
std::string_view& endString) {

Review comment:
   I find it strange that we have this, but don't have startsWithIgnoreCase.

##
File path: libminifi/include/utils/ProcessorConfigUtils.h
##
@@ -38,6 +38,19 @@ std::chrono::milliseconds 
parseTimePropertyMSOrThrow(core::ProcessContext* conte
std::optional<uint64_t> getOptionalUintProperty(const core::ProcessContext& context, const std::string& property_name);
std::string parsePropertyWithAllowableValuesOrThrow(const core::ProcessContext& context, const std::string& property_name, const std::set<std::string>& allowable_values);
 
+template<typename T>
+T parseEnumProperty(const core::ProcessContext& context, const core::Property& prop) {

Review comment:
   I really like this :+1:
   Could you please add some tests to verify its behaviour?

##
File path: libminifi/include/utils/StringUtils.h
##
@@ -146,22 +160,28 @@ class StringUtils {
 
   static std::string& replaceAll(std::string& source_string, const std::string 
&from_string, const std::string &to_string);
 
-  inline static bool endsWithIgnoreCase(const std::string &value, const 
std::string & endString) {
-if (endString.size() > value.size())
+  inline static bool startsWith(const std::string_view& value, const 
std::string_view& start, bool case_sensitive = true) {

Review comment:
   I think we should also add a couple of case insensitive tests to 
TestStringUtils::startsWith and TestStringUtils::endsWith now that this is a 
feature.








[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


fgerlits commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r739169606



##
File path: extensions/standard-processors/processors/DefragmentText.cpp
##
@@ -0,0 +1,323 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragmentText.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "TextFragmentUtils.h"
+#include "utils/gsl.h"
+#include "utils/StringUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+const core::Relationship DefragmentText::Success("success", "Flowfiles that 
have been successfully defragmented");
+const core::Relationship DefragmentText::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+const core::Relationship DefragmentText::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+const core::Property DefragmentText::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->isRequired(true)->build());
+
+const core::Property DefragmentText::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Whether the pattern is located at the start or at 
the end of the messages.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+const core::Property DefragmentText::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+const core::Property DefragmentText::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragmentText::initialize() {
+  setSupportedRelationships({Success, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragmentText::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  gsl_Expects(context);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(std::chrono::milliseconds(max_buffer_age));
+  logger_->log_trace("The Buffer maximum age is configured to be %" PRIu64 
" ms", max_buffer_age);
+}
+  }
+
+  std::string max_buffer_size_str;
+  if (context->getProperty(MaxBufferSize.getName(), max_buffer_size_str)) {
+uint64_t max_buffer_size = 
core::DataSizeValue(max_buffer_size_str).getValue();
+if (max_buffer_size > 0) {
+  buffer_.setMaxSize(max_buffer_size);
+  logger_->log_trace("The Buffer maximum size is configured to be %" 
PRIu64 " B", max_buffer_size);
+}
+  }
+
+  context->getProperty(PatternLoc.getName(), pattern_location_);
+
+  std::string pattern_str;
+  if (context->getProperty(Pattern.getName(), pattern_str) && 
!pattern_str.empty()) {
+pattern_ = std::regex(pattern_str);
+logger_->log_trace("The Pattern is configured to be %s", pattern_str);
+  } else {
+throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Pattern property missing or 
invalid");
+  }
+}
+
+void DefragmentText::onTrigger(core::ProcessContext*, core::ProcessSession* 
session) {
+  gsl_Expects(session);
+  auto flowFiles = flow_file_store_.getNewFlowFiles();
+  for (auto& file : flowFiles) {
+processNextFragment(session, file);
+  }
+  std::shared_ptr<core::FlowFile> original_flow_file = session->get();
+  processNextFragment(session, original_flow_file);
+  if (buffer_.maxAgeReached() || buffer_.maxSizeReached()) {
+buffer_.flushAndReplace(session, Failure, nullptr);
+  }

Review comment:
 

[GitHub] [nifi-minifi-cpp] fgerlits commented on pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


fgerlits commented on pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#issuecomment-954685749


   These new test processors (`ReadFromFlowFileTestProcessor`, 
`WriteToFlowFileTestProcessor`) are super useful, thank you for creating them!






[GitHub] [nifi] markobean commented on pull request #5307: NIFI-8917: add profiles for excluding minifi, nifi-registry, nifi-too…

2021-10-29 Thread GitBox


markobean commented on pull request #5307:
URL: https://github.com/apache/nifi/pull/5307#issuecomment-954691680


   @greyp9 I like your recommendation better than cluttering up the root 
pom.xml. I am going to close this PR. I'll work on updating the README with 
this information. I'm not sure if I'll open a new PR against the same ticket, 
or create a new one since the solution has a fairly significant change in 
scope. Thanks!






[GitHub] [nifi] markobean closed pull request #5307: NIFI-8917: add profiles for excluding minifi, nifi-registry, nifi-too…

2021-10-29 Thread GitBox


markobean closed pull request #5307:
URL: https://github.com/apache/nifi/pull/5307


   






[jira] [Created] (NIFI-9348) Enhance PutSMB processor to accept a temporary suffix while copying

2021-10-29 Thread Gabriel Barbu (Jira)
Gabriel Barbu created NIFI-9348:
---

 Summary: Enhance PutSMB processor to accept a temporary suffix 
while copying
 Key: NIFI-9348
 URL: https://issues.apache.org/jira/browse/NIFI-9348
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.14.0
Reporter: Gabriel Barbu


In case there is a consumer on the other end of the network share where the 
PutSMB processor writes its data, a file might be picked up by the consumer 
before it is fully copied, which can lead to errors on the consumer side.

To solve this problem, we can introduce a "temporary suffix" which will be 
removed once the copy completes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-9348) Enhance PutSMB processor to accept a temporary suffix while copying

2021-10-29 Thread Gabriel Barbu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17435971#comment-17435971
 ] 

Gabriel Barbu commented on NIFI-9348:
-

The fix is already done, and I also did some refactoring of the original code, 
which also fixes NIFI-7863.

> Enhance PutSMB processor to accept a temporary suffix while copying
> ---
>
> Key: NIFI-9348
> URL: https://issues.apache.org/jira/browse/NIFI-9348
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.14.0
>Reporter: Gabriel Barbu
>Priority: Major
>
> In case there is a consumer on the other end of the network share where the 
> PutSMB processor writes the data, the file might be picked up by the consumer 
> before the file is fully copied which might lead to errors on the consumer 
> side.
>  
> To solve this problem we can introduce a "temporary suffix" which will be 
> removed upon completing the copy.





[GitHub] [nifi] bibistroc opened a new pull request #5495: [NIFI-9348] Added temporary suffix and fixed [NIFI-7863] creation of …

2021-10-29 Thread GitBox


bibistroc opened a new pull request #5495:
URL: https://github.com/apache/nifi/pull/5495


   …the directories
   
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   Enables the functionality of temporary suffix for the PutSMB processor 
(NIFI-9348) and it also fixes the problem with creating missing directories 
from NIFI-7863.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Created] (MINIFICPP-1677) Add SASL PLAIN mechanism support to Kafka processors

2021-10-29 Thread Jira
Gábor Gyimesi created MINIFICPP-1677:


 Summary: Add SASL PLAIN mechanism support to Kafka processors
 Key: MINIFICPP-1677
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1677
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Gábor Gyimesi
Assignee: Gábor Gyimesi


The PublishKafka processor currently supports Kerberos properties for 
SASL/GSSAPI configuration, which is the default SASL configuration in 
librdkafka. We should also support SASL/PLAIN configuration with username and 
password authentication. This requires adding username, password and SASL 
mechanism properties. We also need to extend the security protocol options with 
SASL_PLAINTEXT and SASL_SSL to make it explicit which protocol is used, as 
previously SASL was implied whenever Kerberos properties were set. This would 
also be on par with the NiFi implementation.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.
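
In librdkafka terms, the new properties would map onto standard client 
configuration keys along these lines (the left-hand names are the librdkafka 
keys; the bracketed values stand in for the proposed processor properties):

```
security.protocol = sasl_plaintext   # or sasl_ssl when TLS is used as well
sasl.mechanism    = PLAIN            # instead of the GSSAPI default
sasl.username     = <username property>
sasl.password     = <password property>
```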





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


szaszm commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r739164451



##
File path: extensions/standard-processors/processors/DefragmentText.cpp
##
@@ -0,0 +1,323 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragmentText.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "TextFragmentUtils.h"
+#include "utils/gsl.h"
+#include "utils/StringUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+const core::Relationship DefragmentText::Success("success", "Flowfiles that 
have been successfully defragmented");
+const core::Relationship DefragmentText::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+const core::Relationship DefragmentText::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+const core::Property DefragmentText::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->isRequired(true)->build());
+
+const core::Property DefragmentText::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Whether the pattern is located at the start or at 
the end of the messages.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+const core::Property DefragmentText::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+const core::Property DefragmentText::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragmentText::initialize() {
+  setSupportedRelationships({Success, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragmentText::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  gsl_Expects(context);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(std::chrono::milliseconds(max_buffer_age));
+  logger_->log_trace("The Buffer maximum age is configured to be %" PRIu64 
" ms", max_buffer_age);
+}
+  }
+
+  std::string max_buffer_size_str;
+  if (context->getProperty(MaxBufferSize.getName(), max_buffer_size_str)) {
+uint64_t max_buffer_size = 
core::DataSizeValue(max_buffer_size_str).getValue();
+if (max_buffer_size > 0) {
+  buffer_.setMaxSize(max_buffer_size);
+  logger_->log_trace("The Buffer maximum size is configured to be %" 
PRIu64 " B", max_buffer_size);
+}
+  }
+
+  context->getProperty(PatternLoc.getName(), pattern_location_);
+
+  std::string pattern_str;
+  if (context->getProperty(Pattern.getName(), pattern_str) && 
!pattern_str.empty()) {
+pattern_ = std::regex(pattern_str);
+logger_->log_trace("The Pattern is configured to be %s", pattern_str);
+  } else {
+throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Pattern property missing or 
invalid");
+  }
+}
+
+void DefragmentText::onTrigger(core::ProcessContext*, core::ProcessSession* 
session) {
+  gsl_Expects(session);
+  auto flowFiles = flow_file_store_.getNewFlowFiles();
+  for (auto& file : flowFiles) {
+processNextFragment(session, file);
+  }
+  std::shared_ptr<core::FlowFile> original_flow_file = session->get();
+  processNextFragment(session, original_flow_file);
+  if (buffer_.maxAgeReached() || buffer_.maxSizeReached()) {
+buffer_.flushAndReplace(session, Failure, nullptr);
+  }
+}
+
+void DefragmentTex

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


szaszm commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r739225061



##
File path: extensions/standard-processors/tests/unit/DefragmentTextTests.cpp
##
@@ -0,0 +1,273 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "TestBase.h"
+#include "WriteToFlowFileTestProcessor.h"
+#include "ReadFromFlowFileTestProcessor.h"
+#include "UpdateAttribute.h"
+#include "DefragmentText.h"
+#include "TextFragmentUtils.h"
+#include "utils/TestUtils.h"
+#include "serialization/PayloadSerializer.h"
+#include "serialization/FlowFileSerializer.h"
+#include "unit/ContentRepositoryDependentTests.h"
+
+using WriteToFlowFileTestProcessor = 
org::apache::nifi::minifi::processors::WriteToFlowFileTestProcessor;
+using ReadFromFlowFileTestProcessor = 
org::apache::nifi::minifi::processors::ReadFromFlowFileTestProcessor;
+using UpdateAttribute = org::apache::nifi::minifi::processors::UpdateAttribute;
+using DefragmentText = org::apache::nifi::minifi::processors::DefragmentText;
+
+TEST_CASE("DefragTextFlowFilesNoMultilinePatternAtStartTest", 
"[defragmenttextnomultilinepatternatstarttest]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));
+  plan->setProperty(defrag_text_flow_files, DefragmentText::Pattern.getName(), 
"<[0-9]+>");
+
+
+  write_to_flow_file->setContent("<1> Foo");
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->numberOfFlowFilesRead() == 0);
+  write_to_flow_file->setContent("<2> Bar");
+  plan->reset();
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->readFlowFileWithContent("<1> Foo"));
+  write_to_flow_file->setContent("<3> Baz");
+  plan->reset();
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->readFlowFileWithContent("<2> Bar"));
+}
+
+TEST_CASE("DefragmentTextEmptyPattern", "[defragmenttextemptypattern]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));
+  plan->setProperty(defrag_text_flow_files, DefragmentText::Pattern.getName(), 
"");
+  plan->setProperty(defrag_text_flow_files, 
DefragmentText::PatternLoc.getName(), 
toString(DefragmentText::PatternLocation::END_OF_MESSAGE));
+
+  REQUIRE_THROWS_WITH(testController.runSession(plan), "Process Schedule 
Operation: Pattern property missing or invalid");
+}
+
+TEST_CASE("DefragmentTextNoMultilinePatternAtEndTest", 
"[defragmenttextnomultilinepatternatendtest]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));

Review comment:
   What do you think about 
[SECTIO

[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1188: MINIFICPP-1651: Added DefragmentText processor

2021-10-29 Thread GitBox


fgerlits commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r739234364



##
File path: extensions/standard-processors/tests/unit/DefragmentTextTests.cpp
##
@@ -0,0 +1,273 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "TestBase.h"
+#include "WriteToFlowFileTestProcessor.h"
+#include "ReadFromFlowFileTestProcessor.h"
+#include "UpdateAttribute.h"
+#include "DefragmentText.h"
+#include "TextFragmentUtils.h"
+#include "utils/TestUtils.h"
+#include "serialization/PayloadSerializer.h"
+#include "serialization/FlowFileSerializer.h"
+#include "unit/ContentRepositoryDependentTests.h"
+
+using WriteToFlowFileTestProcessor = org::apache::nifi::minifi::processors::WriteToFlowFileTestProcessor;
+using ReadFromFlowFileTestProcessor = org::apache::nifi::minifi::processors::ReadFromFlowFileTestProcessor;
+using UpdateAttribute = org::apache::nifi::minifi::processors::UpdateAttribute;
+using DefragmentText = org::apache::nifi::minifi::processors::DefragmentText;
+
+TEST_CASE("DefragTextFlowFilesNoMultilinePatternAtStartTest", "[defragmenttextnomultilinepatternatstarttest]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));
+  plan->setProperty(defrag_text_flow_files, DefragmentText::Pattern.getName(), "<[0-9]+>");
+
+
+  write_to_flow_file->setContent("<1> Foo");
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->numberOfFlowFilesRead() == 0);
+  write_to_flow_file->setContent("<2> Bar");
+  plan->reset();
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->readFlowFileWithContent("<1> Foo"));
+  write_to_flow_file->setContent("<3> Baz");
+  plan->reset();
+  testController.runSession(plan);
+  CHECK(read_from_flow_file->readFlowFileWithContent("<2> Bar"));
+}
+
+TEST_CASE("DefragmentTextEmptyPattern", "[defragmenttextemptypattern]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));
+  plan->setProperty(defrag_text_flow_files, DefragmentText::Pattern.getName(), "");
+  plan->setProperty(defrag_text_flow_files, DefragmentText::PatternLoc.getName(), toString(DefragmentText::PatternLocation::END_OF_MESSAGE));
+
+  REQUIRE_THROWS_WITH(testController.runSession(plan), "Process Schedule Operation: Pattern property missing or invalid");
+}
+
+TEST_CASE("DefragmentTextNoMultilinePatternAtEndTest", "[defragmenttextnomultilinepatternatendtest]") {
+  TestController testController;
+  std::shared_ptr<TestPlan> plan = testController.createPlan();
+  std::shared_ptr<WriteToFlowFileTestProcessor> write_to_flow_file = std::dynamic_pointer_cast<WriteToFlowFileTestProcessor>(
+      plan->addProcessor("WriteToFlowFileTestProcessor", "write_to_flow_file"));
+  std::shared_ptr<DefragmentText> defrag_text_flow_files = std::dynamic_pointer_cast<DefragmentText>(
+      plan->addProcessor("DefragmentText", "defrag_text_flow_files", core::Relationship("success", "description"), true));
+  std::shared_ptr<ReadFromFlowFileTestProcessor> read_from_flow_file = std::dynamic_pointer_cast<ReadFromFlowFileTestProcessor>(
+      plan->addProcessor("ReadFromFlowFileTestProcessor", "read_from_flow_file", DefragmentText::Success, true));

Review comment:
   Using sections would require t

[jira] [Updated] (MINIFICPP-1677) Add SASL PLAIN mechanism support to Kafka processors

2021-10-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi updated MINIFICPP-1677:
-
Description: 
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, password, sasl 
mechanism properties.

We may also need to extend the security protocol options with SASL_PLAINTEXT 
and SASL_SSL to be more precise which protocol is used, as previously SASL was 
implicitly implied when Kerberos properties were set. This would also be on par 
with the NiFi implementation. Another option would be to prefer setting the 
security protocol depending on the Kerberos or plain username password 
properties and the configured plaintext/ssl option. The latter option would 
provide backward compatiblity.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.

  was:
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, password, sasl 
mechanism properties.

We may also need to extend the security protocol options with SASL_PLAINTEXT 
and SASL_SSL to be more precise which protocol is used, as previously SASL was 
implicitly implied when Kerberos properties were set. This would also be on par 
with the NiFi implementation. Another option would be to prefer setting the 
security protocol depending on the Kerberos or plain username password 
properties and the configured plaintext/ssl option.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.


> Add SASL PLAIN mechanism support to Kafka processors
> 
>
> Key: MINIFICPP-1677
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1677
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Major
>
> PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
> configuration which is the default SASL configuration in librdkafka. We 
> should also support SASL/PLAIN configuration with username and password 
> authentication. This requires adding these additional username, password, 
> sasl mechanism properties.
> We may also need to extend the security protocol options with SASL_PLAINTEXT 
> and SASL_SSL to be more precise which protocol is used, as previously SASL 
> was implicitly implied when Kerberos properties were set. This would also be 
> on par with the NiFi implementation. Another option would be to prefer 
> setting the security protocol depending on the Kerberos or plain username 
> password properties and the configured plaintext/ssl option. The latter 
> option would provide backward compatiblity.
> The same configuration possibilities should also be implemented in 
> ConsumeKafka processor.
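
For reference, in librdkafka terms the setup described above corresponds roughly to the following configuration keys (key names are librdkafka's own; the exact MiNiFi property names and how they map onto these keys are still to be decided):

```properties
# SASL/PLAIN, here over TLS (use SASL_PLAINTEXT for no TLS)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# placeholder credentials for illustration only
sasl.username=alice
sasl.password=alice-secret
```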



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1677) Add SASL PLAIN mechanism support to Kafka processors

2021-10-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi updated MINIFICPP-1677:
-
Description: 
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, password, sasl 
mechanism properties.

We may also need to extend the security protocol options with SASL_PLAINTEXT 
and SASL_SSL to be more precise which protocol is used, as previously SASL was 
implicitly implied when Kerberos properties were set. This would also be on par 
with the NiFi implementation. Another option would be to prefer setting the 
security protocol depending on the Kerberos or plain username password 
properties and the configured plaintext/ssl option.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.

  was:
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, passwordl, sasl 
mechanism properties. We also need to extend the security protocol options with 
SASL_PLAINTEXT and SASL_SSL to be more precise which protocol is used, as 
previously SASL was implicitly implied when Kerberos properties were set. This 
would also be on par with the NiFi implementation.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.


> Add SASL PLAIN mechanism support to Kafka processors
> 
>
> Key: MINIFICPP-1677
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1677
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Major
>
> PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
> configuration which is the default SASL configuration in librdkafka. We 
> should also support SASL/PLAIN configuration with username and password 
> authentication. This requires adding these additional username, password, 
> sasl mechanism properties.
> We may also need to extend the security protocol options with SASL_PLAINTEXT 
> and SASL_SSL to be more precise which protocol is used, as previously SASL 
> was implicitly implied when Kerberos properties were set. This would also be 
> on par with the NiFi implementation. Another option would be to prefer 
> setting the security protocol depending on the Kerberos or plain username 
> password properties and the configured plaintext/ssl option.
> The same configuration possibilities should also be implemented in 
> ConsumeKafka processor.





[jira] [Updated] (MINIFICPP-1677) Add SASL PLAIN mechanism support to Kafka processors

2021-10-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi updated MINIFICPP-1677:
-
Description: 
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, password, sasl 
mechanism properties.

We may also need to extend the security protocol options with SASL_PLAINTEXT 
and SASL_SSL to be more precise which protocol is used, as previously SASL was 
implicitly implied when Kerberos properties were set. This would also be on par 
with the NiFi implementation. Another option would be to prefer setting the 
security protocol depending on the Kerberos or plain username password 
properties and the configured plaintext/ssl option. The latter option would 
provide backward compatibility.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.

  was:
PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
configuration which is the default SASL configuration in librdkafka. We should 
also support SASL/PLAIN configuration with username and password 
authentication. This requires adding these additional username, password, sasl 
mechanism properties.

We may also need to extend the security protocol options with SASL_PLAINTEXT 
and SASL_SSL to be more precise which protocol is used, as previously SASL was 
implicitly implied when Kerberos properties were set. This would also be on par 
with the NiFi implementation. Another option would be to prefer setting the 
security protocol depending on the Kerberos or plain username password 
properties and the configured plaintext/ssl option. The latter option would 
provide backward compatiblity.

The same configuration possibilities should also be implemented in ConsumeKafka 
processor.


> Add SASL PLAIN mechanism support to Kafka processors
> 
>
> Key: MINIFICPP-1677
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1677
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Major
>
> PublishKafka processor currently supports Kerberos properties for SASL/GSSAPI 
> configuration which is the default SASL configuration in librdkafka. We 
> should also support SASL/PLAIN configuration with username and password 
> authentication. This requires adding these additional username, password, 
> sasl mechanism properties.
> We may also need to extend the security protocol options with SASL_PLAINTEXT 
> and SASL_SSL to be more precise which protocol is used, as previously SASL 
> was implicitly implied when Kerberos properties were set. This would also be 
> on par with the NiFi implementation. Another option would be to prefer 
> setting the security protocol depending on the Kerberos or plain username 
> password properties and the configured plaintext/ssl option. The latter 
> option would provide backward compatibility.
> The same configuration possibilities should also be implemented in 
> ConsumeKafka processor.





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1168: MINIFICPP-1632 - Implement RouteText processor

2021-10-29 Thread GitBox


adamdebreceni commented on a change in pull request #1168:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1168#discussion_r739256275



##
File path: libminifi/include/utils/ProcessorConfigUtils.h
##
@@ -38,6 +38,19 @@ std::chrono::milliseconds parseTimePropertyMSOrThrow(core::ProcessContext* conte
 std::optional<uint64_t> getOptionalUintProperty(const core::ProcessContext& context, const std::string& property_name);
 std::string parsePropertyWithAllowableValuesOrThrow(const core::ProcessContext& context, const std::string& property_name, const std::set<std::string>& allowable_values);
 
+template<typename T>
+T parseEnumProperty(const core::ProcessContext& context, const core::Property& prop) {

Review comment:
   added tests
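
The string-to-enum lookup that a helper like `parseEnumProperty` performs can be sketched in isolation. Everything below (`Weekday`, `parse_enum`, the throwing behaviour) is a hypothetical stand-in to illustrate the idea, not the MiNiFi API:

```cpp
#include <map>
#include <stdexcept>
#include <string>

enum class Weekday { Monday, Tuesday };  // hypothetical enum for the sketch

// Map a property string to an enum value, throwing when the value is not
// one of the allowed names (roughly what an enum-property parser does).
inline Weekday parse_enum(const std::string& value) {
  static const std::map<std::string, Weekday> mapping{
      {"Monday", Weekday::Monday}, {"Tuesday", Weekday::Tuesday}};
  auto it = mapping.find(value);
  if (it == mapping.end()) {
    throw std::invalid_argument("invalid enum value: " + value);
  }
  return it->second;
}
```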




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1168: MINIFICPP-1632 - Implement RouteText processor

2021-10-29 Thread GitBox


adamdebreceni commented on a change in pull request #1168:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1168#discussion_r739256508



##
File path: libminifi/include/utils/StringUtils.h
##
@@ -146,22 +160,28 @@ class StringUtils {
 
   static std::string& replaceAll(std::string& source_string, const std::string &from_string, const std::string &to_string);
 
-  inline static bool endsWithIgnoreCase(const std::string &value, const std::string & endString) {
-    if (endString.size() > value.size())
+  inline static bool startsWith(const std::string_view& value, const std::string_view& start, bool case_sensitive = true) {
+    if (start.length() > value.length()) {
       return false;
-    return std::equal(endString.rbegin(), endString.rend(), value.rbegin(), [](unsigned char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
+    }
+    if (case_sensitive) {
+      return std::equal(start.begin(), start.end(), value.begin());
+    }
+    return std::equal(start.begin(), start.end(), value.begin(), [](unsigned char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
   }
 
-  inline static bool startsWith(const std::string& value, const std::string& start_string) {
-    if (start_string.size() > value.size())
+  inline static bool endsWith(const std::string_view& value, const std::string_view& end, bool case_sensitive = true) {
+    if (end.length() > value.length()) {
       return false;
-    return std::equal(start_string.begin(), start_string.end(), value.begin());
+    }
+    if (case_sensitive) {
+      return std::equal(end.rbegin(), end.rend(), value.rbegin());
+    }
+    return std::equal(end.rbegin(), end.rend(), value.rbegin(), [](unsigned char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
   }
 
-  inline static bool endsWith(const std::string& value, const std::string& end_string) {
-    if (end_string.size() > value.size())
-      return false;
-    return std::equal(end_string.rbegin(), end_string.rend(), value.rbegin());
+  inline static bool endsWithIgnoreCase(const std::string_view& value, const std::string_view& endString) {

Review comment:
   removed it in favor of the more frequently used `endsWith`
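
The case-insensitive branch of the new `startsWith`/`endsWith` boils down to a `std::equal` over the reversed ranges with a `tolower`-based predicate; a standalone sketch of that technique (the `ends_with_icase` name is mine, not the MiNiFi API):

```cpp
#include <algorithm>
#include <cctype>
#include <string_view>

// Case-insensitive suffix check: compare the suffix and the tail of the
// value back-to-front, lower-casing each character before comparing.
inline bool ends_with_icase(std::string_view value, std::string_view end) {
  if (end.size() > value.size()) return false;
  return std::equal(end.rbegin(), end.rend(), value.rbegin(),
                    [](unsigned char lc, unsigned char rc) {
                      return std::tolower(lc) == std::tolower(rc);
                    });
}
```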

##
File path: libminifi/include/utils/StringUtils.h
##
@@ -146,22 +160,28 @@ class StringUtils {
 
   static std::string& replaceAll(std::string& source_string, const std::string &from_string, const std::string &to_string);
 
-  inline static bool endsWithIgnoreCase(const std::string &value, const std::string & endString) {
-    if (endString.size() > value.size())
+  inline static bool startsWith(const std::string_view& value, const std::string_view& start, bool case_sensitive = true) {

Review comment:
   added tests








[jira] [Created] (NIFI-9349) InvokeHTTP - add attribute with call duration

2021-10-29 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-9349:


 Summary: InvokeHTTP - add attribute with call duration
 Key: NIFI-9349
 URL: https://issues.apache.org/jira/browse/NIFI-9349
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard


Add an attribute to the FlowFile like "invokehttp.request.duration" to provide 
in milliseconds the response time of the remote endpoint. At this time, this 
can be retrieved by looking at the Event Duration value of the corresponding 
provenance event.





[GitHub] [nifi] mattyb149 commented on pull request #5366: NIFI-9194: Upsert for Oracle12+

2021-10-29 Thread GitBox


mattyb149 commented on pull request #5366:
URL: https://github.com/apache/nifi/pull/5366#issuecomment-954804033


   There are lots of tab characters in here, the Github Action builds will fail 
for this reason. Please replace them with spaces and run your local Maven build 
from the nifi-standard-processors module with the `-Pcontrib-check` flag. This 
will let you know if there are any Checkstyle errors and such.






[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1168: MINIFICPP-1632 - Implement RouteText processor

2021-10-29 Thread GitBox


szaszm commented on a change in pull request #1168:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1168#discussion_r739287077



##
File path: libminifi/include/core/ProcessContext.h
##
@@ -126,6 +126,23 @@ class ProcessContext : public controller::ControllerServiceLookup, public core::
   virtual bool getDynamicProperty(const Property &property, std::string &value, const std::shared_ptr<FlowFile>& /*flow_file*/) {
     return getDynamicProperty(property.getName(), value);
   }
+  bool getDynamicProperty(const Property &property, std::string &value, const std::shared_ptr<FlowFile>& flow_file, const std::map<std::string, std::string>& variables) {
+    std::map<std::string, std::optional<std::string>> original_attributes;
+    for (const auto& var : variables) {
+      original_attributes[var.first] = flow_file->getAttribute(var.first);
+      flow_file->setAttribute(var.first, var.second);
+    }

Review comment:
   Consider using structured binding.
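
The save-and-restore loop the comment refers to reads more cleanly with a structured binding. A minimal sketch of the idea over plain `std::map`s (the FlowFile attribute API is replaced here by a map; `override_attributes` is a made-up name):

```cpp
#include <map>
#include <optional>
#include <string>

using Attributes = std::map<std::string, std::string>;

// Remember each attribute's previous value (or its absence) before
// overwriting it, iterating with a structured binding instead of
// var.first / var.second.
inline std::map<std::string, std::optional<std::string>> override_attributes(
    Attributes& attributes, const Attributes& variables) {
  std::map<std::string, std::optional<std::string>> original;
  for (const auto& [name, value] : variables) {  // structured binding
    auto it = attributes.find(name);
    original[name] = (it != attributes.end())
                         ? std::optional<std::string>(it->second)
                         : std::nullopt;
    attributes[name] = value;
  }
  return original;
}
```

The saved `original` map is what a later loop would use to restore the flow file's attributes after evaluating the dynamic property.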

##
File path: cmake/BuildTests.cmake
##
@@ -111,6 +111,11 @@ FOREACH(testfile ${UNIT_TESTS})
   target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
   MATH(EXPR UNIT_TEST_COUNT "${UNIT_TEST_COUNT}+1")
   add_test(NAME "${testfilename}" COMMAND "${testfilename}" WORKING_DIRECTORY 
${TEST_DIR})
+  if (WIN32)
+target_compile_options(${testfilename} PRIVATE "/W")
+  else()
+target_compile_options(${testfilename} PRIVATE "-w")
+  endif()

Review comment:
   I don't think disabling all warnings for tests is a good idea.

##
File path: libminifi/src/utils/StringUtils.cpp
##
@@ -52,6 +52,20 @@ std::string StringUtils::trim(const std::string& s) {
   return trimRight(trimLeft(s));
 }
 
+std::string_view StringUtils::trim(std::string_view sv) {
+  auto begin = std::find_if(sv.begin(), sv.end(), [](unsigned char c) -> bool { return !isspace(c); });
+  auto end = std::find_if(sv.rbegin(), std::reverse_iterator(begin), [](unsigned char c) -> bool { return !isspace(c); }).base();
+  // c++20 iterator constructor
+  // return std::string_view(begin, end);
+  // but for now
+  // on windows std::string_view::const_iterator is not a const char*
+  return std::string_view(sv.data() + std::distance(sv.begin(), begin), std::distance(begin, end));

Review comment:
   Avoid pointer arithmetic if possible.
   ```suggestion
  return sv.substr(std::distance(sv.begin(), begin), std::distance(begin, end));
   ```
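
The suggested `substr` form can be checked in isolation; this sketch (`trim_view` is my name for it) implements the trim exactly as the suggestion does, slicing with `substr` instead of raw pointer arithmetic:

```cpp
#include <algorithm>
#include <cctype>
#include <iterator>
#include <string_view>

// Trim leading and trailing whitespace from a string_view: find the first
// and last non-space characters, then slice with substr.
inline std::string_view trim_view(std::string_view sv) {
  auto begin = std::find_if(sv.begin(), sv.end(),
                            [](unsigned char c) { return !std::isspace(c); });
  auto end = std::find_if(sv.rbegin(), std::reverse_iterator(begin),
                          [](unsigned char c) { return !std::isspace(c); })
                 .base();
  return sv.substr(std::distance(sv.begin(), begin), std::distance(begin, end));
}
```

For an all-whitespace input, `begin` is `sv.end()`, so the call becomes `sv.substr(sv.size(), 0)`, which is valid and yields an empty view.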

##
File path: extensions/standard-processors/processors/RouteText.cpp
##
@@ -0,0 +1,474 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RouteText.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#ifdef __APPLE__
+#include <experimental/functional>
+template<typename... Args>
+using boyer_moore_searcher = std::experimental::boyer_moore_searcher<Args...>;
+#else
+#include <functional>
+template<typename... Args>
+using boyer_moore_searcher = std::boyer_moore_searcher<Args...>;
+#endif
+
+#include "logging/LoggerConfiguration.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/OptionalUtils.h"
+#include "range/v3/view/transform.hpp"
+#include "range/v3/range/conversion.hpp"
+#include "range/v3/view/tail.hpp"
+#include "range/v3/view/join.hpp"
+#include "range/v3/view/cache1.hpp"
+#include "core/Resource.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+const core::Property RouteText::RoutingStrategy(
+core::PropertyBuilder::createProperty("Routing Strategy")
+->withDescription("Specifies how to determine which Relationship(s) to use 
when evaluating the segments "
+  "of incoming text against the 'Matching Strategy' and 
user-defined properties. "
+  "'Dynamic Routing' routes to all the matching dynamic 
relationships (or 'unmatched' if none matches). "
+  "'Route On All' routes to 'matched' iff all dynamic 
relationships match. "
+  "'Route On Any' routes to 'matched' iff any of the 
dynamic relationships match. ")
+->isRequired(true)
+->withDefaultValue(toString(Routing::DYNAMIC))
+->withAllowableValues(Routing::values())
+->build());
+
+const core::Property RouteText::MatchingStrategy(
+core::PropertyBuilder::createProperty("Matching Strategy")
+->withDescription("Specifies how to evaluate each segment of incoming text 
against the

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1168: MINIFICPP-1632 - Implement RouteText processor

2021-10-29 Thread GitBox


szaszm commented on a change in pull request #1168:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1168#discussion_r739271908



##
File path: cmake/BuildTests.cmake
##
@@ -111,6 +111,11 @@ FOREACH(testfile ${UNIT_TESTS})
   target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
   MATH(EXPR UNIT_TEST_COUNT "${UNIT_TEST_COUNT}+1")
   add_test(NAME "${testfilename}" COMMAND "${testfilename}" WORKING_DIRECTORY 
${TEST_DIR})
+  if (WIN32)
+target_compile_options(${testfilename} PRIVATE "/W")
+  else()
+target_compile_options(${testfilename} PRIVATE "-w")
+  endif()

Review comment:
   I don't think disabling all warnings for tests is a good idea. What was 
your goal?








[jira] [Assigned] (NIFI-9316) Sort by label should be "Update (newest)" not "Newest (update)"

2021-10-29 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-9316:
--

Assignee: Matt Burgess

> Sort by label should be "Update (newest)" not "Newest (update)"
> ---
>
> Key: NIFI-9316
> URL: https://issues.apache.org/jira/browse/NIFI-9316
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Registry
>Reporter: Andrew M. Lim
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: sort_by_update.png
>
>
> Current labels are "Newest (update) and "Oldest (update)".
> !sort_by_update.png!
> But should be "Last Updated (newest)" and "Last Updated (oldest)"





[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1203: MINIFICPP-1672 - Configurable msi

2021-10-29 Thread GitBox


szaszm closed pull request #1203:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1203


   






[jira] [Commented] (NIFI-9344) Conduct 1.15 Release

2021-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436097#comment-17436097
 ] 

ASF subversion and git services commented on NIFI-9344:
---

Commit 5f1169bd03f1dfadf23b84583dc93b12e6ff6706 in nifi's branch 
refs/heads/NIFI-9344-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=5f1169b ]

NIFI-9344-RC1 prepare for next development iteration


> Conduct 1.15 Release
> 
>
> Key: NIFI-9344
> URL: https://issues.apache.org/jira/browse/NIFI-9344
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.15.0
>
>






[jira] [Commented] (NIFI-9344) Conduct 1.15 Release

2021-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436096#comment-17436096
 ] 

ASF subversion and git services commented on NIFI-9344:
---

Commit 73278e2aa0f8673792d069dbf3407faf981adc7c in nifi's branch 
refs/heads/NIFI-9344-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=73278e2 ]

NIFI-9344-RC1 prepare release nifi-1.15.0-RC1


> Conduct 1.15 Release
> 
>
> Key: NIFI-9344
> URL: https://issues.apache.org/jira/browse/NIFI-9344
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.15.0
>
>






[jira] [Commented] (NIFI-9316) Sort by label should be "Update (newest)" not "Newest (update)"

2021-10-29 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436131#comment-17436131
 ] 

Matt Burgess commented on NIFI-9316:


I see the same terminology used in nifi-registry-web-ui in NiFi, I assume we 
need the changes in both places?

> Sort by label should be "Update (newest)" not "Newest (update)"
> ---
>
> Key: NIFI-9316
> URL: https://issues.apache.org/jira/browse/NIFI-9316
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Registry
>Reporter: Andrew M. Lim
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: sort_by_update.png
>
>
> Current labels are "Newest (update) and "Oldest (update)".
> !sort_by_update.png!
> But should be "Last Updated (newest)" and "Last Updated (oldest)"





[jira] [Issue Comment Deleted] (NIFI-9316) Sort by label should be "Update (newest)" not "Newest (update)"

2021-10-29 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-9316:
---
Comment: was deleted

(was: I see the same terminology used in nifi-registry-web-ui in NiFi, I assume 
we need the changes in both places?)

> Sort by label should be "Update (newest)" not "Newest (update)"
> ---
>
> Key: NIFI-9316
> URL: https://issues.apache.org/jira/browse/NIFI-9316
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Registry
>Reporter: Andrew M. Lim
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: sort_by_update.png
>
>
> Current labels are "Newest (update) and "Oldest (update)".
> !sort_by_update.png!
> But should be "Last Updated (newest)" and "Last Updated (oldest)"





[GitHub] [nifi] mattyb149 opened a new pull request #5496: NIFI-9316: Registry Sort by label should be 'Last Updated (newest)' not 'Newest (update)'

2021-10-29 Thread GitBox


mattyb149 opened a new pull request #5496:
URL: https://github.com/apache/nifi/pull/5496


   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Changes wording in Registry UI for clarity and consistency
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-9316) Sort by label should be "Update (newest)" not "Newest (update)"

2021-10-29 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-9316:
---
Status: Patch Available  (was: In Progress)

> Sort by label should be "Update (newest)" not "Newest (update)"
> ---
>
> Key: NIFI-9316
> URL: https://issues.apache.org/jira/browse/NIFI-9316
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Registry
>Reporter: Andrew M. Lim
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: sort_by_update.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current labels are "Newest (update) and "Oldest (update)".
> !sort_by_update.png!
> But should be "Last Updated (newest)" and "Last Updated (oldest)"





[jira] [Created] (NIFI-9350) Add NiFi Registry NarProvider

2021-10-29 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-9350:
-

 Summary: Add NiFi Registry NarProvider
 Key: NIFI-9350
 URL: https://issues.apache.org/jira/browse/NIFI-9350
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende


With the introduction of the NarProvider concept, we can provide an 
implementation that retrieves NARs from NiFi Registry.





[GitHub] [nifi] bbende opened a new pull request #5497: NIFI-9350 Add NiFi Registry NarProvider implementation

2021-10-29 Thread GitBox


bbende opened a new pull request #5497:
URL: https://github.com/apache/nifi/pull/5497


   To test this...
   
   - Start NiFi Registry
   - Create a few buckets
   - Use NiFi CLI to upload at least one NAR to a bucket in registry; you'll 
need the UUID of the bucket
 - `registry upload-bundle -b  -ebt nifi-nar -ebf `
   - Configure nifi.properties with the following
 - 
nifi.nar.library.provider.nifi-registry.implementation=org.apache.nifi.registry.extension.NiFiRegistryNarProvider
 - nifi.nar.library.provider.nifi-registry.url=http://localhost:18080
   - Start nifi and verify that the NAR from registry gets downloaded to 
./extensions
   
   In a secure setup, all bundles from authorized buckets will be retrieved, 
meaning if the NiFi server user does not have READ on a bucket in registry, 
then it won't pull those, but generally the NiFi server user always has read on 
all buckets.
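
   The download-to-`./extensions` behaviour described above can be sketched 
generically. This is an illustration only, not the PR's code: `fetch_listing` 
and `fetch_contents` are hypothetical stand-ins for the NiFi Registry REST 
calls the provider would make.

   ```python
   from pathlib import Path

   def sync_nars(fetch_listing, fetch_contents, extensions_dir):
       """Download each NAR named by fetch_listing() into extensions_dir,
       skipping files already present (NARs retrieved from authorized
       buckets land in the local extensions directory)."""
       extensions = Path(extensions_dir)
       extensions.mkdir(parents=True, exist_ok=True)
       downloaded = []
       for nar_name in fetch_listing():
           target = extensions / nar_name
           if target.exists():
               continue  # already fetched on a previous run
           target.write_bytes(fetch_contents(nar_name))
           downloaded.append(nar_name)
       return downloaded
   ```

   On a second run against the same directory, existing NARs are skipped, so 
the sync is safe to repeat at startup.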






[jira] [Updated] (NIFI-9350) Add NiFi Registry NarProvider

2021-10-29 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-9350:
--
Status: Patch Available  (was: Open)

> Add NiFi Registry NarProvider
> -
>
> Key: NIFI-9350
> URL: https://issues.apache.org/jira/browse/NIFI-9350
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> With the introduction of the NarProvider concept, we can provide an 
> implementation that retrieves NARs from NiFi Registry.





[GitHub] [nifi] mattyb149 commented on pull request #5444: NIFI-9286: Add expression language to JOLT processors and fixes the custom module implementation to use custom jars in the processor

2021-10-29 Thread GitBox


mattyb149 commented on pull request #5444:
URL: https://github.com/apache/nifi/pull/5444#issuecomment-954987092


   There's a Checkstyle error but I'll fix that on merge, going to run my tests 
one more time






[GitHub] [nifi] pgyori commented on a change in pull request #5381: NIFI-9206: Add RemoveRecordField processor and implement the ability …

2021-10-29 Thread GitBox


pgyori commented on a change in pull request #5381:
URL: https://github.com/apache/nifi/pull/5381#discussion_r739498158



##
File path: 
nifi-commons/nifi-record/src/test/java/org/apache/nifi/serialization/TestSimpleRecordSchema.java
##
@@ -93,21 +93,142 @@ public void 
testHashCodeAndEqualsWithSelfReferencingSchema() {
 }
 
 @Test
-public void testFieldsArentCheckedInEqualsIfNameAndNamespaceMatch() {
-final RecordField testField = new RecordField("test", 
RecordFieldType.STRING.getDataType());
+public void testEqualsSimpleSchema() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertTrue(schema1.equals(schema2));
+}
 
-final SimpleRecordSchema schema1 = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
-schema1.setSchemaName("name");
-schema1.setSchemaNamespace("namespace");
-schema1.setFields(Collections.singletonList(testField));
+@Test
+public void 
testEqualsSimpleSchemaEvenIfSchemaNameAndNameSpaceAreDifferent() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName1 = "schemaName1";
+final String schemaName2 = "schemaName2";
+final String namespace1 = "namespace1";
+final String namespace2 = "namespace2";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName1, namespace1);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName2, namespace2);
+
+// WHEN, THEN
+assertTrue(schema1.equals(schema2));
+}
 
-SimpleRecordSchema schema2 = Mockito.spy(new 
SimpleRecordSchema(SchemaIdentifier.EMPTY));
-schema2.setSchemaName("name");
-schema2.setSchemaNamespace("namespace");
-schema2.setFields(Collections.singletonList(testField));
+@Test
+public void testNotEqualsSimpleSchemaDifferentTypes() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField1, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertFalse(schema1.equals(schema2));
+}
+
+@Test
+public void testNotEqualsSimpleSchemaDifferentFieldNames() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final String nameOfField3 = "field3";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField3, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertFalse(schema1.equals(schema2));
+}
+
+@Test
+public void testEqualsRecursiveSchema() {
+final String field1 = "field1";
+final String field2 = "field2";
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = createRecursiveSchema(field1, 
field2, schemaName, namespace);
+final SimpleRecordSchema schema2 = createRecursiveSchema(field1, 
field2, schemaName, namespace);
 
 assertTrue(schema1.equals(schema2));
-Mockito.verify(schem

[GitHub] [nifi] pgyori commented on a change in pull request #5381: NIFI-9206: Add RemoveRecordField processor and implement the ability …

2021-10-29 Thread GitBox


pgyori commented on a change in pull request #5381:
URL: https://github.com/apache/nifi/pull/5381#discussion_r739498445



##
File path: 
nifi-commons/nifi-record/src/test/java/org/apache/nifi/serialization/TestSimpleRecordSchema.java
##
@@ -93,21 +93,142 @@ public void 
testHashCodeAndEqualsWithSelfReferencingSchema() {
 }
 
 @Test
-public void testFieldsArentCheckedInEqualsIfNameAndNamespaceMatch() {
-final RecordField testField = new RecordField("test", 
RecordFieldType.STRING.getDataType());
+public void testEqualsSimpleSchema() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertTrue(schema1.equals(schema2));
+}
 
-final SimpleRecordSchema schema1 = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
-schema1.setSchemaName("name");
-schema1.setSchemaNamespace("namespace");
-schema1.setFields(Collections.singletonList(testField));
+@Test
+public void 
testEqualsSimpleSchemaEvenIfSchemaNameAndNameSpaceAreDifferent() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName1 = "schemaName1";
+final String schemaName2 = "schemaName2";
+final String namespace1 = "namespace1";
+final String namespace2 = "namespace2";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName1, namespace1);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName2, namespace2);
+
+// WHEN, THEN
+assertTrue(schema1.equals(schema2));
+}
 
-SimpleRecordSchema schema2 = Mockito.spy(new 
SimpleRecordSchema(SchemaIdentifier.EMPTY));
-schema2.setSchemaName("name");
-schema2.setSchemaNamespace("namespace");
-schema2.setFields(Collections.singletonList(testField));
+@Test
+public void testNotEqualsSimpleSchemaDifferentTypes() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField1, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertFalse(schema1.equals(schema2));
+}
+
+@Test
+public void testNotEqualsSimpleSchemaDifferentFieldNames() {
+// GIVEN
+final String nameOfField1 = "field1";
+final String nameOfField2 = "field2";
+final String nameOfField3 = "field3";
+final DataType typeOfField1 = RecordFieldType.INT.getDataType();
+final DataType typeOfField2 = RecordFieldType.STRING.getDataType();
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = 
createSchemaWithTwoFields(nameOfField1, nameOfField2, typeOfField1, 
typeOfField2, schemaName, namespace);
+final SimpleRecordSchema schema2 = 
createSchemaWithTwoFields(nameOfField1, nameOfField3, typeOfField1, 
typeOfField2, schemaName, namespace);
+
+// WHEN, THEN
+assertFalse(schema1.equals(schema2));
+}
+
+@Test
+public void testEqualsRecursiveSchema() {
+final String field1 = "field1";
+final String field2 = "field2";
+final String schemaName = "schemaName";
+final String namespace = "namespace";
+
+final SimpleRecordSchema schema1 = createRecursiveSchema(field1, 
field2, schemaName, namespace);
+final SimpleRecordSchema schema2 = createRecursiveSchema(field1, 
field2, schemaName, namespace);
 
 assertTrue(schema1.equals(schema2));
-Mockito.verify(schem

[jira] [Created] (NIFI-9351) Scripting NAR includes Test Libraries

2021-10-29 Thread David Handermann (Jira)
David Handermann created NIFI-9351:
--

 Summary: Scripting NAR includes Test Libraries
 Key: NIFI-9351
 URL: https://issues.apache.org/jira/browse/NIFI-9351
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: David Handermann
Assignee: David Handermann


The {{nifi-scripting-nar}} bundles several JUnit and TestNG libraries as a 
result of depending on {{groovy-all}}.  These dependencies should be excluded 
in the Maven configuration to avoid including unnecessary libraries in 
{{nifi-scripting-nar}}.
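
The exclusion described above would take roughly this shape in the module's 
POM. This is a hedged sketch: the actual Groovy coordinates, version handling, 
and the full set of excluded test artifacts in the PR may differ.

```xml
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <type>pom</type>
    <exclusions>
        <!-- keep JUnit/TestNG test libraries out of the runtime NAR -->
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```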





[GitHub] [nifi] exceptionfactory opened a new pull request #5498: NIFI-9351 Exclude test dependencies from nifi-scripting-nar

2021-10-29 Thread GitBox


exceptionfactory opened a new pull request #5498:
URL: https://github.com/apache/nifi/pull/5498


    Description of PR
   
   NIFI-9351 Excludes JUnit and TestNG dependencies from the `groovy-all` 
dependency declaration in `nifi-scripting-nar`. This change avoids including 
unnecessary test dependencies during runtime processing.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Updated] (NIFI-9351) Scripting NAR includes Test Libraries

2021-10-29 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-9351:
---
Affects Version/s: 1.10.0
   1.11.0
   1.12.0
   1.13.0
   1.14.0
   Status: Patch Available  (was: Open)

> Scripting NAR includes Test Libraries
> -
>
> Key: NIFI-9351
> URL: https://issues.apache.org/jira/browse/NIFI-9351
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.14.0, 1.13.0, 1.12.0, 1.11.0, 1.10.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{nifi-scripting-nar}} bundles several JUnit and TestNG libraries as a 
> result of depending on {{groovy-all}}.  These dependencies should be excluded 
> in the Maven configuration to avoid including unnecessary libraries in 
> {{nifi-scripting-nar}}.





[GitHub] [nifi] turcsanyip commented on a change in pull request #5486: NIFI-9338: Add Azure Blob processors using Azure Blob Storage client …

2021-10-29 Thread GitBox


turcsanyip commented on a change in pull request #5486:
URL: https://github.com/apache/nifi/pull/5486#discussion_r739537444



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/DeleteAzureBlobStorage_v12.java
##
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.storage;
+
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.models.DeleteSnapshotsOptionType;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.azure.AbstractAzureBlobProcessor_v12;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@Tags({ "azure", "microsoft", "cloud", "storage", "blob" })
+@SeeAlso({ ListAzureBlobStorage_v12.class, FetchAzureBlobStorage_v12.class, 
PutAzureBlobStorage_v12.class})
+@CapabilityDescription("Deletes the specified blob from Azure Blob Storage. 
The processor uses Azure Blob Storage client library v12.")
+@InputRequirement(Requirement.INPUT_REQUIRED)
+public class DeleteAzureBlobStorage_v12 extends AbstractAzureBlobProcessor_v12 
{
+
+public static final AllowableValue DELETE_SNAPSHOTS_NONE = new 
AllowableValue("NONE", "None", "Delete the blob only.");
+
+public static final AllowableValue DELETE_SNAPSHOTS_ALSO = new 
AllowableValue(DeleteSnapshotsOptionType.INCLUDE.name(), "Include Snapshots", 
"Delete the blob and its snapshots.");
+
+public static final AllowableValue DELETE_SNAPSHOTS_ONLY = new 
AllowableValue(DeleteSnapshotsOptionType.ONLY.name(), "Delete Snapshots Only", 
"Delete only the blob's snapshots.");
+
+public static final PropertyDescriptor DELETE_SNAPSHOTS_OPTION = new 
PropertyDescriptor.Builder()
+.name("delete-snapshots-option")
+.displayName("Delete Snapshots Option")
+.description("Specifies the snapshot deletion options to be used 
when deleting a blob.")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.allowableValues(DELETE_SNAPSHOTS_NONE, DELETE_SNAPSHOTS_ALSO, 
DELETE_SNAPSHOTS_ONLY)
+.defaultValue(DELETE_SNAPSHOTS_NONE.getValue())
+.required(true)
+.build();
+
+private static final List<PropertyDescriptor> PROPERTIES = 
Collections.unmodifiableList(Arrays.asList(
+STORAGE_CREDENTIALS_SERVICE,
+AzureStorageUtils.CONTAINER,
+BLOB_NAME,
+DELETE_SNAPSHOTS_OPTION
+));
+
+@Override
+public List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return PROPERTIES;
+}
+
+@Override
+public void onTrigger(ProcessContext context, ProcessSession session) 
throws ProcessException {
+FlowFile flowFile = session.get();
+if (flowFile == null) {
+return;
+}
+
+String containerName = 
context.getProperty(AzureStorageUtils.CONTAINER).evaluateAttributeExpressions(flowFile).getValue();
+String blobName = 
context.getProperty(BLOB_NAME).evaluateAttributeExpressions(flowFile).getValue();
+String deleteSnapshotsOption = 
context.getProperty(DELETE_SNAPSHOTS_OPTION).getValue();
+
+long startNanos = System.nanoTime();
+try {
+BlobServiceClient storageClient = getStorageClient();
+BlobContainerClient containerClien

[GitHub] [nifi] turcsanyip commented on a change in pull request #5486: NIFI-9338: Add Azure Blob processors using Azure Blob Storage client …

2021-10-29 Thread GitBox


turcsanyip commented on a change in pull request #5486:
URL: https://github.com/apache/nifi/pull/5486#discussion_r739537848



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageCredentialsControllerService_v12.java
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.services.azure.storage;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Provides credentials details for Azure Blob processors
+ *
+ * @see AbstractControllerService
+ */
+@Tags({"azure", "microsoft", "cloud", "storage", "blob", "credentials"})
+@CapabilityDescription("Provides credentials for Azure Blob processors using 
Azure Blob Storage client library v12.")
+public class AzureStorageCredentialsControllerService_v12 extends 
AbstractControllerService implements AzureStorageCredentialsService_v12 {
+
+public static final String DEFAULT_ENDPOINT_SUFFIX = 
"blob.core.windows.net";

Review comment:
   Added AzureServiceEndpoints to provide endpoint suffix constants and 
methods to get endpoint urls.
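The `AzureServiceEndpoints` class mentioned in the comment is not shown in this thread; a minimal sketch of what such a utility could look like (class shape, method names, and the URL format are assumptions based on the surrounding code, not the PR's actual implementation):

```java
// Hypothetical sketch of an endpoint-suffix utility for Azure Blob Storage.
public final class AzureServiceEndpoints {
    private static final String DEFAULT_BLOB_ENDPOINT_SUFFIX = "blob.core.windows.net";

    private AzureServiceEndpoints() {
        // static utility, no instances
    }

    public static String getDefaultBlobEndpointSuffix() {
        return DEFAULT_BLOB_ENDPOINT_SUFFIX;
    }

    // Builds the account endpoint URL from the account name and a suffix,
    // matching the "https://<account>.<suffix>" form used elsewhere in the PR.
    public static String getBlobEndpoint(String accountName, String endpointSuffix) {
        return String.format("https://%s.%s", accountName, endpointSuffix);
    }

    public static void main(String[] args) {
        System.out.println(getBlobEndpoint("myaccount", getDefaultBlobEndpointSuffix()));
        // prints https://myaccount.blob.core.windows.net
    }
}
```

Centralizing the suffix constants this way keeps the default in one place while still allowing overrides for Azure Stack or non-public regions.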




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on a change in pull request #5486: NIFI-9338: Add Azure Blob processors using Azure Blob Storage client …

2021-10-29 Thread GitBox


turcsanyip commented on a change in pull request #5486:
URL: https://github.com/apache/nifi/pull/5486#discussion_r739538440



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageCredentialsControllerService_v12.java
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.services.azure.storage;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Provides credentials details for Azure Blob processors
+ *
+ * @see AbstractControllerService
+ */
+@Tags({"azure", "microsoft", "cloud", "storage", "blob", "credentials"})
+@CapabilityDescription("Provides credentials for Azure Blob processors using 
Azure Blob Storage client library v12.")
+public class AzureStorageCredentialsControllerService_v12 extends 
AbstractControllerService implements AzureStorageCredentialsService_v12 {
+
+public static final String DEFAULT_ENDPOINT_SUFFIX = 
"blob.core.windows.net";
+
+public static final PropertyDescriptor ACCOUNT_NAME = new 
PropertyDescriptor.Builder()
+.fromPropertyDescriptor(AzureStorageUtils.ACCOUNT_NAME)
+.description(AzureStorageUtils.ACCOUNT_NAME_BASE_DESCRIPTION)
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.NONE)
+.build();
+
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.fromPropertyDescriptor(AzureStorageUtils.ENDPOINT_SUFFIX)
+.displayName("Endpoint Suffix")
+.description("Storage accounts in public Azure always use a common 
FQDN suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumstances (like Azure Stack or non-public Azure regions).")
+.required(true)
+.defaultValue(DEFAULT_ENDPOINT_SUFFIX)
+.expressionLanguageSupported(ExpressionLanguageScope.NONE)
+.build();
+
+public static final PropertyDescriptor CREDENTIALS_TYPE = new 
PropertyDescriptor.Builder()
+.name("credentials-type")
+.displayName("Credentials Type")
+.description("Credentials type to be used for authenticating to 
Azure")
+.required(true)
+.allowableValues(AzureStorageCredentialsType.getAllowableValues())
+.defaultValue(AzureStorageCredentialsType.ACCOUNT_KEY.name())

Review comment:
   I would vote for SAS Token and have set it. Managed Identity is a more 
specific case when NiFi is running on an Azure VM.
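The `AzureStorageCredentialsType` enum backing `getAllowableValues()` above is not shown in the thread. A hypothetical sketch, inferring the four credential types from the SDK imports in the PR (`StorageSharedKeyCredential`, `AzureSasCredential`, `ManagedIdentityCredentialBuilder`, `ClientSecretCredentialBuilder`) and returning plain names for brevity; the real enum's values, labels, and return type may differ:

```java
import java.util.Arrays;

public enum AzureStorageCredentialsType {
    ACCOUNT_KEY("Account Key"),
    SAS_TOKEN("SAS Token"),
    MANAGED_IDENTITY("Managed Identity"),
    SERVICE_PRINCIPAL("Service Principal");

    private final String displayName;

    AzureStorageCredentialsType(String displayName) {
        this.displayName = displayName;
    }

    public String getDisplayName() {
        return displayName;
    }

    // The actual NiFi code likely returns AllowableValue[]; plain enum
    // names are used here to keep the sketch self-contained.
    public static String[] getAllowableValues() {
        return Arrays.stream(values()).map(Enum::name).toArray(String[]::new);
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", getAllowableValues()));
    }
}
```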




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on a change in pull request #5486: NIFI-9338: Add Azure Blob processors using Azure Blob Storage client …

2021-10-29 Thread GitBox


turcsanyip commented on a change in pull request #5486:
URL: https://github.com/apache/nifi/pull/5486#discussion_r739539239



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/azure/storage/AbstractAzureBlobStorage_v12IT.java
##
@@ -0,0 +1,137 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.storage;
+
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.BlobServiceClientBuilder;
+import com.azure.storage.blob.models.BlobType;
+import com.azure.storage.common.StorageSharedKeyCredential;
+import org.apache.nifi.processors.azure.AbstractAzureBlobProcessor_v12;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+import org.apache.nifi.processors.azure.storage.utils.BlobAttributes;
+import 
org.apache.nifi.services.azure.storage.AzureStorageCredentialsControllerService_v12;
+import 
org.apache.nifi.services.azure.storage.AzureStorageCredentialsService_v12;
+import org.apache.nifi.util.MockFlowFile;
+import org.junit.After;
+import org.junit.Before;
+
+import java.io.ByteArrayInputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+
+public abstract class AbstractAzureBlobStorage_v12IT extends 
AbstractAzureStorageIT {
+
+protected static final String BLOB_NAME = "blob1";
+protected static final byte[] BLOB_DATA = "0123456789".getBytes();
+
+protected static final String EL_CONTAINER_NAME = "az.containername";
+protected static final String EL_BLOB_NAME = "az.blobname";
+
+protected static final byte[] EMPTY_CONTENT = new byte[0];
+
+private static final String TEST_CONTAINER_NAME_PREFIX = 
"nifi-test-container";
+
+private BlobServiceClient storageClient;
+private BlobContainerClient containerClient;
+private String containerName;
+
+@Override
+protected void setUpCredentials() throws Exception {
+String serviceId = "credentials-service";
+AzureStorageCredentialsService_v12 service = new 
AzureStorageCredentialsControllerService_v12();
+runner.addControllerService(serviceId, service);
+runner.setProperty(service, 
AzureStorageCredentialsControllerService_v12.ACCOUNT_NAME, getAccountName());
+runner.setProperty(service, 
AzureStorageCredentialsControllerService_v12.ACCOUNT_KEY, getAccountKey());
+runner.enableControllerService(service);
+
+
runner.setProperty(AbstractAzureBlobProcessor_v12.STORAGE_CREDENTIALS_SERVICE, 
serviceId);
+}
+
+@Before
+public void setUpAzureBlobStorage_v12IT() {
+containerName = generateContainerName();
+
+runner.setProperty(AzureStorageUtils.CONTAINER, containerName);
+
+storageClient = createStorageClient();
+containerClient = storageClient.createBlobContainer(containerName);
+}
+
+@After
+public void tearDownAzureBlobStorage_v12IT() {
+containerClient.delete();
+}
+
+protected String generateContainerName() {
+return String.format("%s-%s", TEST_CONTAINER_NAME_PREFIX, 
UUID.randomUUID());
+}
+
+protected BlobServiceClient getStorageClient() {
+return storageClient;
+}
+
+protected BlobContainerClient getContainerClient() {
+return containerClient;
+}
+
+protected String getContainerName() {
+return containerName;
+}
+
+private BlobServiceClient createStorageClient() {
+return new BlobServiceClientBuilder()
+.endpoint("https://"; + getAccountName() + 
".blob.core.windows.net")

Review comment:
   The "endpointSuffix" property can now be configured in azure-credentials.properties for the IT tests. If not specified, the default endpoint will be used.
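The fallback behavior described here can be sketched as follows. The property key "endpointSuffix" comes from the comment; the class name, account names, and the alternate suffix are made up for illustration:

```java
import java.util.Properties;

public class EndpointResolver {
    static final String DEFAULT_ENDPOINT_SUFFIX = "blob.core.windows.net";

    // Falls back to the default suffix when "endpointSuffix" is absent
    // from the credentials properties, as described for the IT tests.
    static String resolveEndpoint(Properties config, String accountName) {
        String suffix = config.getProperty("endpointSuffix", DEFAULT_ENDPOINT_SUFFIX);
        return "https://" + accountName + "." + suffix;
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        System.out.println(resolveEndpoint(config, "testacct"));
        // prints https://testacct.blob.core.windows.net

        config.setProperty("endpointSuffix", "blob.core.usgovcloudapi.net");
        System.out.println(resolveEndpoint(config, "testacct"));
        // prints https://testacct.blob.core.usgovcloudapi.net
    }
}
```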




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #5498: NIFI-9351 Exclude test dependencies from nifi-scripting-nar

2021-10-29 Thread GitBox


mattyb149 commented on pull request #5498:
URL: https://github.com/apache/nifi/pull/5498#issuecomment-955058044


   +1 LGTM. Ran `mvn clean install -Pcontrib-check`, then `dependency:tree` on both bundles, and verified there are no test dependencies other than in `test` scope. Thanks for the fix and the quick turnaround! Merging to main.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 closed pull request #5498: NIFI-9351 Exclude test dependencies from nifi-scripting-nar

2021-10-29 Thread GitBox


mattyb149 closed pull request #5498:
URL: https://github.com/apache/nifi/pull/5498


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-9351) Scripting NAR includes Test Libraries

2021-10-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436175#comment-17436175
 ] 

ASF subversion and git services commented on NIFI-9351:
---

Commit 1c4ee93e687c2396ca07c22ee97bd11af78f68c9 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1c4ee93 ]

NIFI-9351 Excluded test dependencies from nifi-scripting-nar

NIFI-9351 Excluded Groovy test dependencies

- Updated nifi-scripting-nar
- Updated nifi-groovyx-nar

Signed-off-by: Matthew Burgess 

This closes #5498


> Scripting NAR includes Test Libraries
> -
>
> Key: NIFI-9351
> URL: https://issues.apache.org/jira/browse/NIFI-9351
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{nifi-scripting-nar}} bundles several JUnit and TestNG libraries as a 
> result of depending on {{groovy-all}}.  These dependencies should be excluded 
> in the Maven configuration to avoid including unnecessary libraries in 
> {{nifi-scripting-nar}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9351) Scripting NAR includes Test Libraries

2021-10-29 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-9351:
---
Fix Version/s: 1.15.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Scripting NAR includes Test Libraries
> -
>
> Key: NIFI-9351
> URL: https://issues.apache.org/jira/browse/NIFI-9351
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.15.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The {{nifi-scripting-nar}} bundles several JUnit and TestNG libraries as a 
> result of depending on {{groovy-all}}.  These dependencies should be excluded 
> in the Maven configuration to avoid including unnecessary libraries in 
> {{nifi-scripting-nar}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] turcsanyip commented on a change in pull request #5486: NIFI-9338: Add Azure Blob processors using Azure Blob Storage client …

2021-10-29 Thread GitBox


turcsanyip commented on a change in pull request #5486:
URL: https://github.com/apache/nifi/pull/5486#discussion_r739545995



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureBlobProcessor_v12.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure;
+
+import com.azure.core.credential.AzureSasCredential;
+import com.azure.identity.ClientSecretCredentialBuilder;
+import com.azure.identity.ManagedIdentityCredentialBuilder;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.BlobServiceClientBuilder;
+import com.azure.storage.blob.models.BlobProperties;
+import com.azure.storage.common.StorageSharedKeyCredential;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.context.PropertyContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import 
org.apache.nifi.services.azure.storage.AzureStorageCredentialsDetails_v12;
+import 
org.apache.nifi.services.azure.storage.AzureStorageCredentialsService_v12;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_BLOBNAME;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_BLOBTYPE;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_CONTAINER;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_ETAG;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_LANG;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_LENGTH;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_MIME_TYPE;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_PRIMARY_URI;
+import static 
org.apache.nifi.processors.azure.storage.utils.BlobAttributes.ATTR_NAME_TIMESTAMP;
+
+public abstract class AbstractAzureBlobProcessor_v12 extends AbstractProcessor 
{
+
+public static final PropertyDescriptor STORAGE_CREDENTIALS_SERVICE = new 
PropertyDescriptor.Builder()
+.name("storage-credentials-service")
+.displayName("Storage Credentials")
+.description("Controller Service used to obtain Azure Blob Storage 
Credentials.")
+
.identifiesControllerService(AzureStorageCredentialsService_v12.class)
+.required(true)
+.build();
+
+public static final PropertyDescriptor BLOB_NAME = new 
PropertyDescriptor.Builder()
+.name("blob-name")
+.displayName("Blob Name")
+.description("The full name of the blob")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("All successfully processed FlowFiles are routed to 
this relationship")
+.build();
+public static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Unsuccessful operations will be transferred to the 
failure relationship.")
+.build();
+
+private static final Set&lt;Relationship&gt; RELATIONSHIPS = Collections.unmodifiableSet(new HashSet<>(Arrays.asList(
+REL_SUCCESS,
+REL_FAILURE
+)));
+
+private BlobServiceClient storageClient;
+
+@Override
+public Set&lt;Relationship&gt; getRelationshi

[jira] [Updated] (NIFI-7322) Add SignContentPGP and VerifyContentPGP Processors

2021-10-29 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-7322:
---
Issue Type: New Feature  (was: Improvement)

> Add SignContentPGP and VerifyContentPGP Processors
> --
>
> Key: NIFI-7322
> URL: https://issues.apache.org/jira/browse/NIFI-7322
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions, Security
>Reporter: David Margolis
>Assignee: David Handermann
>Priority: Major
>  Labels: encryption, pgp, signing
> Fix For: 1.15.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Users have requested the capability to 
> [sign|https://www.gnupg.org/gph/en/manual/r606.html] content directly with 
> pgp in addition to storing the signature in an attribute 
> (SignContentAttributePGP). There should be options to 
> [clearsign|https://www.gnupg.org/gph/en/manual/r684.html] and 
> [armor|https://www.gnupg.org/gph/en/manual/r1290.html] the content. There 
> should be an option to produce the 
> [detached|https://www.gnupg.org/gph/en/manual/r622.html] signature as it's 
> own flowfile.
> Pairing with this processor, users have requested the capability to 
> [verify|https://www.gnupg.org/gph/en/manual/r697.html] signed content with 
> pgp in addition to verifying the signature in an attribute 
> (VerifyContentAttributePGP). There should be options to verify clearsigned 
> and armored content also.
> Finally, the DecryptContentPGP processor should be able to 
> [decrypt|https://www.gnupg.org/gph/en/manual/r669.html] the signed content, 
> so that just the unsigned content remains.
> These processors should use the PGPKeyMaterialService.
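The actual processors would use BouncyCastle's PGP API via the PGPKeyMaterialService; as a rough, JDK-only analogue (explicitly not the processor code), detached signing means producing a signature stored apart from the content, then verifying it against the original bytes:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class DetachedSignatureDemo {
    // Signs the content with a freshly generated RSA key, keeps the
    // signature detached from the content (like the detached-signature
    // FlowFile described in the ticket), and verifies it.
    static boolean signAndVerify(byte[] content) throws Exception {
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(2048);
        KeyPair keyPair = keyPairGenerator.generateKeyPair();

        // Produce the detached signature over the content.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(content);
        byte[] detachedSignature = signer.sign();

        // Verify the detached signature against the original content.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(content);
        return verifier.verify(detachedSignature);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("flowfile content".getBytes("UTF-8")));
        // prints true
    }
}
```

PGP adds key management, clearsigning, and ASCII armor on top of this basic sign/verify flow, which is why the feature is scoped to dedicated processors and a key material service.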



--
This message was sent by Atlassian Jira
(v8.3.4#803005)