[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-03-27 Thread via GitHub


szaszm commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1150109878


##
.github/workflows/ci.yml:
##


Review Comment:
   I think it would be better to run the 4 parallel jobs on the same VM, to 
save cloud resources. With this change, we save time by only having to wait 30 
minutes instead of 2 hours, but the ASF still pays for 2 hours' worth of 
executor time.
   Maybe we could play with buffering the output to avoid the interleaving of 
tests, or leave them interleaved and tag each line with a thread number.
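   The line-tagging idea could look roughly like this (a sketch only; `tagLines` is a hypothetical helper, not part of the CI scripts): prefix every line of a job's output with its job number so interleaved logs stay attributable.

```cpp
#include <sstream>
#include <string>

// Hypothetical helper: prefix every line of a job's output with its job
// number, so logs from parallel jobs remain attributable after interleaving.
std::string tagLines(int job_id, const std::string& output) {
  std::istringstream in{output};
  std::ostringstream out;
  std::string line;
  while (std::getline(in, line)) {
    out << "[job " << job_id << "] " << line << '\n';
  }
  return out.str();
}
```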



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1540: MINIFICPP-2082 Move RocksDB stats to RepositoryMetrics

2023-03-27 Thread via GitHub


szaszm commented on code in PR #1540:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1540#discussion_r1150095747


##
extensions/rocksdb-repos/RocksDbRepository.cpp:
##
@@ -20,22 +20,22 @@ using namespace std::literals::chrono_literals;
 
 namespace org::apache::nifi::minifi::core::repository {
 
-void RocksDbRepository::printStats() {
+std::optional<RocksDbStats> RocksDbRepository::getRocksDbStats() const {
+  RocksDbStats stats;
   auto opendb = db_->open();
   if (!opendb) {
-return;
+return stats;
   }
-  std::string key_count;
-  opendb->GetProperty("rocksdb.estimate-num-keys", &key_count);
 
   std::string table_readers;
   opendb->GetProperty("rocksdb.estimate-table-readers-mem", &table_readers);
+  stats.table_readers_size = std::stoull(table_readers);

Review Comment:
   Are the possible exceptions thrown by `std::stoull` properly handled?
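   For context, one defensive shape (a sketch; `parseUnsigned` is a hypothetical name, not minifi code) that keeps both exceptions `std::stoull` can throw from escaping:

```cpp
#include <cstdint>
#include <optional>
#include <stdexcept>
#include <string>

// Hypothetical helper: parse a RocksDB property string into a number,
// returning std::nullopt instead of letting std::invalid_argument or
// std::out_of_range escape from std::stoull.
std::optional<uint64_t> parseUnsigned(const std::string& text) {
  try {
    size_t pos = 0;
    const uint64_t value = std::stoull(text, &pos);
    if (pos != text.size()) {
      return std::nullopt;  // trailing non-digit characters
    }
    return value;
  } catch (const std::invalid_argument&) {
    return std::nullopt;  // not a number at all
  } catch (const std::out_of_range&) {
    return std::nullopt;  // does not fit in unsigned long long
  }
}
```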



##
libminifi/include/core/state/nodes/RepositoryMetricsSourceStore.h:
##


Review Comment:
   What's the purpose of extracting the functionality from RepositoryMetrics to 
this new class?



##
extensions/rocksdb-repos/RocksDbRepository.cpp:
##
@@ -20,22 +20,22 @@ using namespace std::literals::chrono_literals;
 
 namespace org::apache::nifi::minifi::core::repository {
 
-void RocksDbRepository::printStats() {
+std::optional<RocksDbStats> RocksDbRepository::getRocksDbStats() const {
+  RocksDbStats stats;
   auto opendb = db_->open();
   if (!opendb) {
-return;
+return stats;

Review Comment:
   The return type is optional. Didn't you mean to return an empty optional 
here, instead of a default-constructed RocksDbStats object?
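   A minimal sketch of the suggested shape (with `RocksDbStats` reduced to a stub so the example is self-contained):

```cpp
#include <cstdint>
#include <optional>

// Stub stand-in for the real RocksDbStats; only here to make the sketch compile.
struct RocksDbStats {
  uint64_t table_readers_size = 0;
};

// Returning std::nullopt signals "the database could not be opened", which is
// distinct from "opened fine, but every stat happens to be zero".
std::optional<RocksDbStats> getRocksDbStats(bool db_opened) {
  if (!db_opened) {
    return std::nullopt;  // empty optional, not a default-constructed object
  }
  RocksDbStats stats;
  stats.table_readers_size = 128;  // placeholder value for the sketch
  return stats;
}
```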






[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1541: MINIFICPP-2084 Fix flaky Reverse DNS timeout test

2023-03-27 Thread via GitHub


szaszm commented on code in PR #1541:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1541#discussion_r1150091534


##
extensions/expression-language/tests/ExpressionLanguageTests.cpp:
##
@@ -1342,13 +1342,18 @@ TEST_CASE("Reverse DNS lookup with valid timeout parameter", "[ExpressionLanguag
 
   SECTION("Should timeout") {
 auto reverse_lookup_expr_0ms = expression::compile("${reverseDnsLookup(${ip_addr}, 0)}");
-REQUIRE_NOTHROW(reverse_lookup_expr_0ms(expression::Parameters{flow_file_a}).asString() == "8.8.8.8");
+std::string reverse_lookup_result = "dns.google";
+// Occasionally it doesn't time out even with 0ms timeout because it finishes before the timeout-thread starts
+for (auto number_of_tries = 1; number_of_tries <= 5 && reverse_lookup_result == "dns.google"; ++number_of_tries) {
+  reverse_lookup_result = reverse_lookup_expr_0ms(expression::Parameters{flow_file_a}).asString();
+}

Review Comment:
   If the timeout doesn't happen deterministically, then I'd rather not test it 
at all in a unit test. I also wouldn't want to short-circuit 0ms to always time 
out if it wouldn't normally happen, since that would degrade the functionality.






[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1543: MINIFICPP-2074 Fix time-period/integer validated properties during lo…

2023-03-27 Thread via GitHub


szaszm commented on code in PR #1543:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1543#discussion_r1150080892


##
minifi_main/MiNiFiMain.cpp:
##
@@ -263,6 +263,7 @@ int main(int argc, char **argv) {
 configure->setHome(minifiHome);
 configure->loadConfigureFile(DEFAULT_NIFI_PROPERTIES_FILE);
 
+configure->commitChanges();

Review Comment:
   Can we skip this commit, and still have the changed/mapped properties appear 
as part of the manifest?
   
   It feels wrong to write the config right after reading it, and we will do 
the mapping of the values anyway while reading. I think persisting them in a 
batch with the first normal persist is enough, and we can rely on the 
in-memory mapping of integers to milliseconds.
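   The in-memory mapping could look roughly like this (a sketch under assumed names; `getTimePeriod` is illustrative, not the actual Properties code): a bare integer value is interpreted as milliseconds at read time, so nothing has to be written back to disk.

```cpp
#include <chrono>
#include <optional>
#include <string>

// Hypothetical: interpret a bare-integer property value as milliseconds when
// reading it, without rewriting the config file.
std::optional<std::chrono::milliseconds> getTimePeriod(const std::string& raw) {
  if (raw.empty()) {
    return std::nullopt;
  }
  for (char c : raw) {
    if (c < '0' || c > '9') {
      return std::nullopt;  // has a unit suffix or other text; parse elsewhere
    }
  }
  return std::chrono::milliseconds{std::stoll(raw)};
}
```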



##
libminifi/src/properties/Properties.cpp:
##
@@ -87,13 +128,17 @@ void Properties::loadConfigureFile(const std::filesystem::path& configuration_fi
 return;
   }
   properties_.clear();
+  dirty_ = false;
   for (const auto& line : PropertiesFile{file}) {
+auto key = line.getKey();
 auto persisted_value = line.getValue();
 auto value = utils::StringUtils::replaceEnvironmentVariables(persisted_value);
-properties_[line.getKey()] = {persisted_value, value, false};
+bool need_to_persist_new_value = false;
+formatConfigurationProperty(key, persisted_value, value, need_to_persist_new_value);
+dirty_ |= need_to_persist_new_value;

Review Comment:
   This operator does bitwise OR, not logical OR. It happens to work out fine, 
due to the memory representation of booleans, but I'd prefer using logical 
operators with booleans.
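   In code, the logical form reads as follows (a trivial illustration; `markDirty` is a made-up name, not from the PR):

```cpp
// Illustrative only: the logical-OR form suggested above. With bool operands,
// `dirty |= flag` and `dirty = dirty || flag` produce the same value; the
// logical form states the intent explicitly and short-circuits.
bool markDirty(bool dirty, bool need_to_persist_new_value) {
  return dirty || need_to_persist_new_value;
}
```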



##
libminifi/src/properties/Properties.cpp:
##
@@ -62,6 +63,46 @@ int Properties::getInt(const std::string &key, int default_value) const {
   return it != properties_.end() ? std::stoi(it->second.active_value) : default_value;
 }
 
+namespace {
+void ensureTimePeriodValidatedPropertyHasExplicitUnit(const core::PropertyValidator* const validator, std::string& persisted_value, std::string& value, bool& need_to_persist_new_value) {
+  if (validator != core::StandardValidators::get().TIME_PERIOD_VALIDATOR.get())
+return;
+  if (value.empty() || !std::all_of(value.begin(), value.end(), ::isdigit))
+return;
+
+  value += " ms";
+  persisted_value = value;
+  need_to_persist_new_value = true;
+}
+
+bool integerValidatedProperty(const core::PropertyValidator* const validator) {
+  return validator == core::StandardValidators::get().INTEGER_VALIDATOR.get()
+  || validator == core::StandardValidators::get().UNSIGNED_INT_VALIDATOR.get()
+  || validator == core::StandardValidators::get().LONG_VALIDATOR.get()
+  || validator == core::StandardValidators::get().UNSIGNED_LONG_VALIDATOR.get();
+}
+
+void ensureIntegerValidatedPropertyHasNoUnit(const core::PropertyValidator* const validator, std::string& persisted_value, std::string& value, bool& need_to_persist_new_value) {
+  if (!integerValidatedProperty(validator))
+return;
+
+  if (auto parsed_time = utils::timeutils::StringToDuration(value)) {
+value = fmt::format("{}", parsed_time->count());
+persisted_value = value;
+need_to_persist_new_value = true;
+  }
+}
+
+void formatConfigurationProperty(std::string_view key, std::string& persisted_value, std::string& value, bool& need_to_persist_new_value) {
+  auto configuration_property = Configuration::CONFIGURATION_PROPERTIES.find(key);
+  if (configuration_property == Configuration::CONFIGURATION_PROPERTIES.end())
+return;
+
+  ensureTimePeriodValidatedPropertyHasExplicitUnit(configuration_property->second, persisted_value, value, need_to_persist_new_value);
+  ensureIntegerValidatedPropertyHasNoUnit(configuration_property->second, persisted_value, value, need_to_persist_new_value);
+}
+}  // namespace
+

Review Comment:
   Could you extend the comment above `loadConfigureFile` to specify the kind 
of mapping that's happening here during load, and the motivation? Just to avoid 
confusing future readers of the code.






[jira] [Updated] (NIFI-11349) Upgrade HBase to 2.5.3

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11349:

Fix Version/s: 1.latest
   2.latest
   Status: Patch Available  (was: Open)

> Upgrade HBase to 2.5.3
> --
>
> Key: NIFI-11349
> URL: https://issues.apache.org/jira/browse/NIFI-11349
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: dependency-upgrade
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Apache HBase [2.5.3|https://github.com/apache/hbase/releases/tag/rel%2F2.5.3] 
> includes a large number of improvements over version 2.2.2 specified in 
> {{nifi-hbase2_client-service}}. Recent versions of HBase 2 include a variant 
> that depends on Hadoop 3 instead of Hadoop 2, which aligns with current 
> Hadoop dependencies in NiFi.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] exceptionfactory opened a new pull request, #7091: NIFI-11349 Upgrade HBase from 2.2.2 to 2.5.3 with Hadoop 3

2023-03-27 Thread via GitHub


exceptionfactory opened a new pull request, #7091:
URL: https://github.com/apache/nifi/pull/7091

   # Summary
   
   [NIFI-11349](https://issues.apache.org/jira/browse/NIFI-11349) Upgrades 
HBase 2 dependencies from 2.2.2 to 
[2.5.3](https://github.com/apache/hbase/releases/tag/rel%2F2.5.3) with the 
Hadoop 3 variant, aligning with other components that also depend on Hadoop 3. 
Changes also incorporated excluding [reload4j](https://reload4j.qos.ch/), which 
is not necessary due to existing Log4j 1 to SLF4J bridge libraries in the NiFi 
framework.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-11349) Upgrade HBase to 2.5.3

2023-03-27 Thread David Handermann (Jira)
David Handermann created NIFI-11349:
---

 Summary: Upgrade HBase to 2.5.3
 Key: NIFI-11349
 URL: https://issues.apache.org/jira/browse/NIFI-11349
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: David Handermann
Assignee: David Handermann


Apache HBase [2.5.3|https://github.com/apache/hbase/releases/tag/rel%2F2.5.3] 
includes a large number of improvements over version 2.2.2 specified in 
{{nifi-hbase2_client-service}}. Recent versions of HBase 2 include a variant 
that depends on Hadoop 3 instead of Hadoop 2, which aligns with current Hadoop 
dependencies in NiFi.





[jira] [Updated] (NIFI-11348) Upgrade JRuby to 9.4.2.0

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11348:

Status: Patch Available  (was: Open)

> Upgrade JRuby to 9.4.2.0
> 
>
> Key: NIFI-11348
> URL: https://issues.apache.org/jira/browse/NIFI-11348
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JRuby [9.4.2.0|https://github.com/jruby/jruby/releases/tag/9.4.2.0] 
> incorporates support for Ruby 3.1 and includes a number of other bug fixes 
> and transitive dependency upgrades.





[GitHub] [nifi] exceptionfactory opened a new pull request, #7090: NIFI-11348 Upgrade JRuby from 9.3.9.0 to 9.4.2.0

2023-03-27 Thread via GitHub


exceptionfactory opened a new pull request, #7090:
URL: https://github.com/apache/nifi/pull/7090

   # Summary
   
   [NIFI-11348](https://issues.apache.org/jira/browse/NIFI-11348) Upgrades 
Scripting bundle dependency on JRuby from 9.3.9.0 to 
[9.4.2.0](https://github.com/jruby/jruby/releases/tag/9.4.2.0). JRuby 
[9.4.0.0](https://github.com/jruby/jruby/releases/tag/9.4.0.0) introduced 
support for Ruby 3.1, and additional changes since that version have included a 
number of bug fixes and transitive dependency upgrades.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-11348) Upgrade JRuby to 9.4.2.0

2023-03-27 Thread David Handermann (Jira)
David Handermann created NIFI-11348:
---

 Summary: Upgrade JRuby to 9.4.2.0
 Key: NIFI-11348
 URL: https://issues.apache.org/jira/browse/NIFI-11348
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: David Handermann
Assignee: David Handermann
 Fix For: 1.latest, 2.latest


JRuby [9.4.2.0|https://github.com/jruby/jruby/releases/tag/9.4.2.0] 
incorporates support for Ruby 3.1 and includes a number of other bug fixes and 
transitive dependency upgrades.





[jira] [Updated] (NIFI-11345) TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds

2023-03-27 Thread Nandor Soma Abonyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Soma Abonyi updated NIFI-11345:
--
Fix Version/s: 2.0.0
   1.21.0
   (was: 1.latest)
   (was: 2.latest)
   Resolution: Resolved
   Status: Resolved  (was: Patch Available)

> TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds
> -
>
> Key: NIFI-11345
> URL: https://issues.apache.org/jira/browse/NIFI-11345
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{TestPutIcebergWithHiveCatalog}} runs multiple parameterized test 
> methods that exercise three supported formats: Avro, ORC, and Parquet. Each 
> test run is expensive, taking around 5 seconds, resulting in the entire class 
> taking over 60 seconds to complete under optimal circumstances. Instead of 
> running each method with all three formats, each method should be limited to 
> one format to reduce the overall runtime.





[jira] [Commented] (NIFI-11345) TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705736#comment-17705736
 ] 

ASF subversion and git services commented on NIFI-11345:


Commit 588e0d74abcd8197a891864d91911c3c4c462cf0 in nifi's branch 
refs/heads/support/nifi-1.x from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=588e0d74ab ]

NIFI-11345 Adjusted Iceberg test to avoid expensive duplicative runs

This closes #7086

Signed-off-by: Nandor Soma Abonyi 


> TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds
> -
>
> Key: NIFI-11345
> URL: https://issues.apache.org/jira/browse/NIFI-11345
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{TestPutIcebergWithHiveCatalog}} runs multiple parameterized test 
> methods that exercise three supported formats: Avro, ORC, and Parquet. Each 
> test run is expensive, taking around 5 seconds, resulting in the entire class 
> taking over 60 seconds to complete under optimal circumstances. Instead of 
> running each method with all three formats, each method should be limited to 
> one format to reduce the overall runtime.





[jira] [Commented] (NIFI-11345) TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705732#comment-17705732
 ] 

ASF subversion and git services commented on NIFI-11345:


Commit 623bcfd500edfe9ec0b7608f0aa88e736f1bcc3f in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=623bcfd500 ]

NIFI-11345 Adjusted Iceberg test to avoid expensive duplicative runs

This closes #7086

Signed-off-by: Nandor Soma Abonyi 


> TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds
> -
>
> Key: NIFI-11345
> URL: https://issues.apache.org/jira/browse/NIFI-11345
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{TestPutIcebergWithHiveCatalog}} runs multiple parameterized test 
> methods that exercise three supported formats: Avro, ORC, and Parquet. Each 
> test run is expensive, taking around 5 seconds, resulting in the entire class 
> taking over 60 seconds to complete under optimal circumstances. Instead of 
> running each method with all three formats, each method should be limited to 
> one format to reduce the overall runtime.





[GitHub] [nifi] asfgit closed pull request #7086: NIFI-11345 Adjust Iceberg test to avoid expensive duplicative runs

2023-03-27 Thread via GitHub


asfgit closed pull request #7086: NIFI-11345 Adjust Iceberg test to avoid 
expensive duplicative runs
URL: https://github.com/apache/nifi/pull/7086





[jira] [Updated] (NIFI-11347) Upgrade OWASP Dependency Check to 8.2.1

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11347:

Fix Version/s: 1.latest
   2.latest
   Status: Patch Available  (was: Open)

> Upgrade OWASP Dependency Check to 8.2.1
> ---
>
> Key: NIFI-11347
> URL: https://issues.apache.org/jira/browse/NIFI-11347
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OWASP Dependency Check 
> [8.2.1|https://github.com/jeremylong/DependencyCheck/releases/tag/v8.2.1] 
> corrects a number of false positives related to JSON libraries.





[GitHub] [nifi] exceptionfactory opened a new pull request, #7089: NIFI-11347 Upgrade OWASP Dependency Check from 8.0.2 to 8.2.1

2023-03-27 Thread via GitHub


exceptionfactory opened a new pull request, #7089:
URL: https://github.com/apache/nifi/pull/7089

   # Summary
   
   [NIFI-11347](https://issues.apache.org/jira/browse/NIFI-11347) Upgrades the 
OWASP Dependency Check Plugin from 8.0.2 to 8.2.1.
   
   Changes include updating the vulnerability suppression configuration, 
excluding Apache Ivy and Groovy from several modules, and upgrading Apache Solr 
dependencies for Ranger from 8.6.3 to 8.11.1.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-11347) Upgrade OWASP Dependency Check to 8.2.1

2023-03-27 Thread David Handermann (Jira)
David Handermann created NIFI-11347:
---

 Summary: Upgrade OWASP Dependency Check to 8.2.1
 Key: NIFI-11347
 URL: https://issues.apache.org/jira/browse/NIFI-11347
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Reporter: David Handermann
Assignee: David Handermann


OWASP Dependency Check 
[8.2.1|https://github.com/jeremylong/DependencyCheck/releases/tag/v8.2.1] 
corrects a number of false positives related to JSON libraries.





[jira] [Updated] (NIFI-11346) Upgrade Parquet to 1.12.3

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11346:

Fix Version/s: 1.latest
   2.latest
   Status: Patch Available  (was: Open)

> Upgrade Parquet to 1.12.3
> -
>
> Key: NIFI-11346
> URL: https://issues.apache.org/jira/browse/NIFI-11346
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: dependency-upgrade
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Several extension components depend on Apache Parquet for formatting input 
> and output files. Some versions of Parquet older than 1.12.2 and 1.11.2 are 
> vulnerable to input validation resource exhaustion as described in 
> CVE-2021-41561.





[GitHub] [nifi] exceptionfactory opened a new pull request, #7088: NIFI-11346 Upgrade Parquet from 1.12.0 to 1.12.3

2023-03-27 Thread via GitHub


exceptionfactory opened a new pull request, #7088:
URL: https://github.com/apache/nifi/pull/7088

   # Summary
   
   [NIFI-11346](https://issues.apache.org/jira/browse/NIFI-11346) Upgrades 
Apache Parquet dependencies from 1.12.0 to 1.12.3 in `nifi-parquet-processors` 
and also upgrades Parquet transitive dependencies from version 1.10.2 in Hive 
modules. The upgrade mitigates potential vulnerabilities in Parquet input 
validation that could lead to resource exhaustion as described in 
CVE-2021-41561.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Updated] (NIFI-11333) Disable removing components unless all nodes connected

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11333:

Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Disable removing components unless all nodes connected
> --
>
> Key: NIFI-11333
> URL: https://issues.apache.org/jira/browse/NIFI-11333
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0, 1.21.0, 2.latest
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In 1.16 we allowed users to start updating flows while nodes are disconnected. 
> This has been greatly helpful. However, it can lead to a problem: when a user 
> removes a connection and there's data queued on a disconnected node, that 
> disconnected node can no longer rejoin the cluster. Instead, it remains 
> disconnected; and if the node is shut down, it cannot be restarted without 
> manually changing nifi.properties to change it from a clustered node to a 
> standalone node, then restarting and bleeding the data out, shutting down, 
> manually updating properties to make it a clustered node again, and restarting.
> This is painful. Instead, we should simply disallow the removal of any 
> component unless all nodes in the cluster are connected. Components can still 
> be added, started, stopped, and disabled. Just not removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11333) Disable removing components unless all nodes connected

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705700#comment-17705700
 ] 

ASF subversion and git services commented on NIFI-11333:


Commit 94ae926c42bf8d09422216b9bc86664e04d92fe6 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=94ae926c42 ]

NIFI-11333: Do not allow components to be removed while a node is disconnected

This closes #7085

Signed-off-by: David Handermann 


> Disable removing components unless all nodes connected
> --
>
> Key: NIFI-11333
> URL: https://issues.apache.org/jira/browse/NIFI-11333
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.21.0, 2.latest
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In 1.16 we allowed users to start updating flows while nodes are disconnected. 
> This has been greatly helpful. However, it can lead to a problem: when a user 
> removes a connection and there's data queued on a disconnected node, that 
> disconnected node can no longer rejoin the cluster. Instead, it remains 
> disconnected; and if the node is shut down, it cannot be restarted without 
> manually changing nifi.properties to change it from a clustered node to a 
> standalone node, then restarting, bleeding the data out, shutting down, 
> manually updating properties to make it a clustered node again, and 
> restarting.
> This is painful. Instead, we should simply disallow the removal of any 
> component unless all nodes in the cluster are connected. Components can still 
> be added, started, stopped, and disabled. Just not removed.





[jira] [Commented] (NIFI-11333) Disable removing components unless all nodes connected

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705701#comment-17705701
 ] 

ASF subversion and git services commented on NIFI-11333:


Commit d2fdff7b93cb63879b67d5c98932bde6d424c09a in nifi's branch 
refs/heads/support/nifi-1.x from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=d2fdff7b93 ]

NIFI-11333: Do not allow components to be removed while a node is disconnected

This closes #7085

Signed-off-by: David Handermann 


> Disable removing components unless all nodes connected
> --
>
> Key: NIFI-11333
> URL: https://issues.apache.org/jira/browse/NIFI-11333
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.21.0, 2.latest
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In 1.16 we allowed users to start updating flows while nodes are disconnected. 
> This has been greatly helpful. However, it can lead to a problem: when a user 
> removes a connection and there's data queued on a disconnected node, that 
> disconnected node can no longer rejoin the cluster. Instead, it remains 
> disconnected; and if the node is shut down, it cannot be restarted without 
> manually changing nifi.properties to change it from a clustered node to a 
> standalone node, then restarting, bleeding the data out, shutting down, 
> manually updating properties to make it a clustered node again, and 
> restarting.
> This is painful. Instead, we should simply disallow the removal of any 
> component unless all nodes in the cluster are connected. Components can still 
> be added, started, stopped, and disabled. Just not removed.





[GitHub] [nifi] exceptionfactory closed pull request #7085: NIFI-11333: Do not allow components to be removed while a node is dis…

2023-03-27 Thread via GitHub


exceptionfactory closed pull request #7085: NIFI-11333: Do not allow components 
to be removed while a node is dis…
URL: https://github.com/apache/nifi/pull/7085





[jira] [Created] (NIFI-11346) Upgrade Parquet to 1.12.3

2023-03-27 Thread David Handermann (Jira)
David Handermann created NIFI-11346:
---

 Summary: Upgrade Parquet to 1.12.3
 Key: NIFI-11346
 URL: https://issues.apache.org/jira/browse/NIFI-11346
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: David Handermann
Assignee: David Handermann


Several extension components depend on Apache Parquet for formatting input and 
output files. Some versions of Parquet older than 1.12.2 and 1.11.2 are 
vulnerable to input validation resource exhaustion as described in 
CVE-2021-41561.





[jira] [Updated] (NIFI-11341) ListenUDPRecord produces invalid FlowFiles due to Content Repository issues

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11341:

Summary: ListenUDPRecord produces invalid FlowFiles due to Content 
Repository issues  (was: ListenUDPRecord truncating data)

> ListenUDPRecord produces invalid FlowFiles due to Content Repository issues
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> Scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However I'm seeing roughly 1 bulletin every few seconds, which 
> indicates a small proportion of messages are getting truncated.





[jira] [Resolved] (NIFI-11232) FlowFileAccessException using ContentClaimInputStream

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-11232.
-
  Assignee: Christian Wahl
Resolution: Fixed

> FlowFileAccessException using ContentClaimInputStream
> -
>
> Key: NIFI-11232
> URL: https://issues.apache.org/jira/browse/NIFI-11232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Christian Wahl
>Assignee: Christian Wahl
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: TestContentClaimInputStream.java
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> NIFI-10888 introduced a BufferedInputStream inside of the 
> ContentClaimInputStream to speed up rewinding in small flow files (<1MB).
> Under some circumstances it can happen in reset that the delegate stream is 
> closed and a new delegate stream is created, but the bufferedIn is not 
> recreated with the new delegate.
> During the next read this leads to a situation where it tries to read from 
> bufferedIn and bufferedIn in turn tries to read from the old and closed 
> delegate stream causing an IOException or FlowFileAccessException.
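The failure mode described above can be illustrated with a minimal sketch. The class and method names below are illustrative only, not the actual NiFi implementation: if a reset recreates the delegate stream but keeps the old BufferedInputStream, later reads are served from the stale buffer.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Minimal sketch (illustrative names, not the actual NiFi classes) of the
// failure mode: reset recreates the delegate stream, but the old
// BufferedInputStream still wraps the previous delegate.
class RewindableStream {
    private InputStream delegate = newDelegate();
    private BufferedInputStream bufferedIn = new BufferedInputStream(delegate);

    // Stands in for re-opening the underlying content from its start offset.
    private static InputStream newDelegate() {
        return new ByteArrayInputStream(new byte[] {1, 2, 3});
    }

    // Buggy variant: the delegate is replaced, but bufferedIn keeps serving
    // bytes it already buffered from the old delegate.
    void resetBuggy() {
        closeQuietly(delegate);
        delegate = newDelegate();
        // BUG: bufferedIn is not recreated here
    }

    // Fixed variant: the buffer is recreated together with the delegate.
    void resetFixed() {
        closeQuietly(delegate);
        delegate = newDelegate();
        bufferedIn = new BufferedInputStream(delegate);
    }

    int read() {
        try {
            return bufferedIn.read();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static void closeQuietly(InputStream in) {
        try {
            in.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

With a delegate that fails reads after close, the buggy variant would surface as the IOException/FlowFileAccessException described above; with an in-memory delegate it shows up as reads continuing from the stale position instead of rewinding.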





[jira] [Resolved] (NIFI-11341) ListenUDPRecord truncating data

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-11341.
-
  Assignee: Mark Payne
Resolution: Fixed

> ListenUDPRecord truncating data
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> Scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However I'm seeing roughly 1 bulletin every few seconds, which 
> indicates a small proportion of messages are getting truncated.





[jira] [Commented] (NIFI-11232) FlowFileAccessException using ContentClaimInputStream

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705680#comment-17705680
 ] 

ASF subversion and git services commented on NIFI-11232:


Commit 06b00931308c2f3ee6d433950828e09089bc9100 in nifi's branch 
refs/heads/support/nifi-1.x from Christian Wahl
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=06b0093130 ]

NIFI-11232 Fixed buffer handling in ContentClaimInputStream

This closes #6996

Signed-off-by: David Handermann 


> FlowFileAccessException using ContentClaimInputStream
> -
>
> Key: NIFI-11232
> URL: https://issues.apache.org/jira/browse/NIFI-11232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Christian Wahl
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: TestContentClaimInputStream.java
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> NIFI-10888 introduced a BufferedInputStream inside of the 
> ContentClaimInputStream to speed up rewinding in small flow files (<1MB).
> Under some circumstances it can happen in reset that the delegate stream is 
> closed and a new delegate stream is created, but the bufferedIn is not 
> recreated with the new delegate.
> During the next read this leads to a situation where it tries to read from 
> bufferedIn and bufferedIn in turn tries to read from the old and closed 
> delegate stream causing an IOException or FlowFileAccessException.





[jira] [Commented] (NIFI-11341) ListenUDPRecord truncating data

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705681#comment-17705681
 ] 

ASF subversion and git services commented on NIFI-11341:


Commit 474e12466750d377b7d4868c9ee5fe203f0b8f5a in nifi's branch 
refs/heads/support/nifi-1.x from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=474e124667 ]

NIFI-11341 Fixed OutputStream.close() handling for Content Claims

Fixed issue in StandardContentClaimWriteCache in which inner OutputStream class 
did not have an idempotent close() method; as a result, the stream could be 
written to while already in use for another active FlowFile; fixed bug in 
ContentClaimInputStream in which skip() method ignored its own 
BufferedInputStream - this was discovered because it was causing failures in 
StandardProcessSessionIT; fixed bug in StandardProcessSessionIT in which the 
length of StandardContentClaim was being doubled because the OutputStream was 
setting the claim length but that is already handled at a lower level.

This closes #7087

Signed-off-by: David Handermann 
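The idempotent close() mentioned in the commit message can be sketched as follows. This is a general illustration with made-up names, not the actual StandardContentClaimWriteCache code: a guard flag makes repeated close() calls harmless, so a stream that is reused for another active FlowFile is not closed out from under it.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Illustrative sketch of an idempotent close(): after the first call, further
// close() calls are no-ops, so a repeated close() cannot affect a stream that
// has since been handed to another writer.
class IdempotentCloseOutputStream extends FilterOutputStream {
    private boolean closed;
    int closeCount; // exposed only so the demo below can observe it

    IdempotentCloseOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void close() {
        if (closed) {
            return; // already closed: do nothing
        }
        closed = true;
        closeCount++;
        try {
            super.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```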


> ListenUDPRecord truncating data
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> Scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However I'm seeing roughly 1 bulletin every few seconds, which 
> indicates a small proportion of messages are getting truncated.





[jira] [Commented] (NIFI-11341) ListenUDPRecord truncating data

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705679#comment-17705679
 ] 

ASF subversion and git services commented on NIFI-11341:


Commit 969fc50778bda82965fa60d02efa85537a6edc56 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=969fc50778 ]

NIFI-11341 Fixed OutputStream.close() handling for Content Claims

Fixed issue in StandardContentClaimWriteCache in which inner OutputStream class 
did not have an idempotent close() method; as a result, the stream could be 
written to while already in use for another active FlowFile; fixed bug in 
ContentClaimInputStream in which skip() method ignored its own 
BufferedInputStream - this was discovered because it was causing failures in 
StandardProcessSessionIT; fixed bug in StandardProcessSessionIT in which the 
length of StandardContentClaim was being doubled because the OutputStream was 
setting the claim length but that is already handled at a lower level.

This closes #7087

Signed-off-by: David Handermann 


> ListenUDPRecord truncating data
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> Scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However I'm seeing roughly 1 bulletin every few seconds, which 
> indicates a small proportion of messages are getting truncated.





[jira] [Commented] (NIFI-11232) FlowFileAccessException using ContentClaimInputStream

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705678#comment-17705678
 ] 

ASF subversion and git services commented on NIFI-11232:


Commit 6bd893da16a3db4a65feb40cb70ed3894d147b5e in nifi's branch 
refs/heads/main from Christian Wahl
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=6bd893da16 ]

NIFI-11232 Fixed buffer handling in ContentClaimInputStream

This closes #6996

Signed-off-by: David Handermann 


> FlowFileAccessException using ContentClaimInputStream
> -
>
> Key: NIFI-11232
> URL: https://issues.apache.org/jira/browse/NIFI-11232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Christian Wahl
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: TestContentClaimInputStream.java
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> NIFI-10888 introduced a BufferedInputStream inside of the 
> ContentClaimInputStream to speed up rewinding in small flow files (<1MB).
> Under some circumstances it can happen in reset that the delegate stream is 
> closed and a new delegate stream is created, but the bufferedIn is not 
> recreated with the new delegate.
> During the next read this leads to a situation where it tries to read from 
> bufferedIn and bufferedIn in turn tries to read from the old and closed 
> delegate stream causing an IOException or FlowFileAccessException.





[GitHub] [nifi] exceptionfactory closed pull request #7087: NIFI-11341: Fixed issue in StandardContentClaimWriteCache in which in…

2023-03-27 Thread via GitHub


exceptionfactory closed pull request #7087: NIFI-11341: Fixed issue in 
StandardContentClaimWriteCache in which in…
URL: https://github.com/apache/nifi/pull/7087





[GitHub] [nifi] exceptionfactory closed pull request #6996: NIFI-11232 Fix buffer handling in ContentClaimInputStream

2023-03-27 Thread via GitHub


exceptionfactory closed pull request #6996: NIFI-11232 Fix buffer handling in 
ContentClaimInputStream
URL: https://github.com/apache/nifi/pull/6996





[GitHub] [nifi] exceptionfactory commented on pull request #6996: NIFI-11232 Fix buffer handling in ContentClaimInputStream

2023-03-27 Thread via GitHub


exceptionfactory commented on PR #6996:
URL: https://github.com/apache/nifi/pull/6996#issuecomment-1485851273

   Thanks for the confirmation @markap14!
   
   Thanks again for the analysis and solution @Chrzi, nice work! As my 
suggestions were related to style and formatting, I pushed a commit to the 
branch implementing the changes and plan on merging.





[GitHub] [nifi] exceptionfactory commented on pull request #7013: NIFI-4890 Refactor OIDC with support for Refresh Tokens

2023-03-27 Thread via GitHub


exceptionfactory commented on PR #7013:
URL: https://github.com/apache/nifi/pull/7013#issuecomment-1485828462

   Thanks for the feedback and testing @mtien-apache and @mcgilman! I pushed 
one more update correcting some spelling and naming issues. 





[GitHub] [nifi] markap14 commented on pull request #6996: NIFI-11232 Fix buffer handling in ContentClaimInputStream

2023-03-27 Thread via GitHub


markap14 commented on PR #6996:
URL: https://github.com/apache/nifi/pull/6996#issuecomment-1485823880

   Thanks for the fix @Chrzi !





[GitHub] [nifi] markap14 commented on pull request #6996: NIFI-11232 Fix buffer handling in ContentClaimInputStream

2023-03-27 Thread via GitHub


markap14 commented on PR #6996:
URL: https://github.com/apache/nifi/pull/6996#issuecomment-1485821692

   Aside from the deviations from the standard style that we tend to use in 
NiFi, which @exceptionfactory noted, I am a +1





[jira] [Commented] (NIFI-10982) Update org.springframework_spring-web to 6.0.0

2023-03-27 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705644#comment-17705644
 ] 

David Handermann commented on NIFI-10982:
-

Thanks for the reply [~philiplee]. For reference, Apache NiFi is not subject to 
CVE-2016-127 because it does not use the HttpInvokerServiceExporter 
mentioned in [Spring issue 
24434|https://github.com/spring-projects/spring-framework/issues/24434].

> Update org.springframework_spring-web to 6.0.0
> --
>
> Key: NIFI-10982
> URL: https://issues.apache.org/jira/browse/NIFI-10982
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.19.1
>Reporter: Phil Lee
>Priority: Major
>
> Update org.springframework_spring-web from 5.3.24 to 6.0.0.  This will 
> remediate [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127]
> Twistlock scan reported this as critical severity vulnerability in NiFi 
> Toolkit (which is included in NiFi version 1.19.1).
> Impacted versions: <6.0.0
> Discovered: 2 days ago
> Published: more than 2 years ago
> Pivotal Spring Framework through 5.3.16 suffers from a potential remote code 
> execution (RCE) issue if used for Java deserialization of untrusted data. 
> Depending on how the library is implemented within a product, this issue may 
> or may not occur, and authentication may be required. NOTE: the vendor's 
> position is that untrusted data is not an intended use case. The product's 
> behavior will not be changed because some users rely on deserialization of 
> trusted data.





[GitHub] [nifi] markap14 opened a new pull request, #7087: NIFI-11341: Fixed issue in StandardContentClaimWriteCache in which in…

2023-03-27 Thread via GitHub


markap14 opened a new pull request, #7087:
URL: https://github.com/apache/nifi/pull/7087

   …ner OutputStream class did not have an idempotent close() method; as a 
result, the stream could be written to while already in use for another active 
FlowFile; fixed bug in ContentClaimInputStream in which skip() method ignored 
its own BufferedInputStream - this was discovered because it was causing 
failures in StandardProcessSessionIT; fixed bug in StandardProcessSessionIT in 
which the length of StandardContentClaim was being doubled because the 
OutputStream was setting the claim length but that is already handled at a 
lower level.
   
   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6996: NIFI-11232 Fix buffer handling in ContentClaimInputStream

2023-03-27 Thread via GitHub


exceptionfactory commented on code in PR #6996:
URL: https://github.com/apache/nifi/pull/6996#discussion_r1149511090


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -81,14 +81,13 @@ public long getCurrentOffset() {
 
 @Override
 public int read() throws IOException {
-int value = -1;
+int value;

Review Comment:
   With this change, it looks like `value` can be marked `final`:
   ```suggestion
   final int value;
   ```



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -117,11 +115,10 @@ public int read(final byte[] b) throws IOException {
 
 @Override
 public int read(final byte[] b, final int off, final int len) throws 
IOException {
-int count = -1;
+int count;

Review Comment:
   ```suggestion
   final int count;
   ```



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -158,26 +155,47 @@ public boolean markSupported() {
 return true;
 }
 
+/**
+ * Marks the current position. Can be returned to with {@code reset()}.
+ *
+ * @see ContentClaimInputStream#reset()
+ * @param readlimit   hint on how much data should be buffered.
+ */
 @Override
 public void mark(final int readlimit) {
 markOffset = currentOffset;
 markReadLimit = readlimit;
-if (bufferedIn != null) {
-bufferedIn.mark(readlimit);
+if (bufferedIn == null) {
+try {
+bufferedIn = new BufferedInputStream(getDelegate());
+} catch (IOException ex) {
+throw new RuntimeException("Failed to read content claim!", 
ex);

Review Comment:
   Instead of throwing a `RuntimeException`, it looks like an 
`UncheckedIOException` would be more appropriate.
   
   As a general rule, exclamation points should be avoided in error messages, 
so see the following suggested wording:
   ```suggestion
   } catch (IOException e) {
   throw new UncheckedIOException("Failed to read repository 
Content Claim", e);
   ```
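
   For illustration, here is a minimal, self-contained sketch of why `UncheckedIOException` fits this spot: `InputStream.mark(int)` does not declare `IOException`, so a checked failure while lazily opening the delegate must be wrapped unchecked. The `openDelegate()` method and the failure flag are hypothetical stand-ins, not the real `ContentClaimInputStream` API.

   ```java
   import java.io.BufferedInputStream;
   import java.io.IOException;
   import java.io.InputStream;
   import java.io.UncheckedIOException;

   // Illustrative only: a stream that lazily creates its buffer on mark(),
   // wrapping any checked IOException because mark(int) cannot declare one.
   class LazyMarkExample extends InputStream {
       private final boolean failOnOpen; // hypothetical knob to simulate a failed open
       private BufferedInputStream bufferedIn;

       LazyMarkExample(final boolean failOnOpen) {
           this.failOnOpen = failOnOpen;
       }

       private InputStream openDelegate() throws IOException {
           if (failOnOpen) {
               throw new IOException("simulated repository failure");
           }
           return InputStream.nullInputStream();
       }

       @Override
       public void mark(final int readlimit) {
           if (bufferedIn == null) {
               try {
                   bufferedIn = new BufferedInputStream(openDelegate());
               } catch (final IOException e) {
                   // mark(int) declares no checked exceptions, so wrap unchecked
                   throw new UncheckedIOException("Failed to read repository Content Claim", e);
               }
           }
           bufferedIn.mark(readlimit);
       }

       @Override
       public boolean markSupported() {
           return true;
       }

       @Override
       public int read() throws IOException {
           return bufferedIn == null ? -1 : bufferedIn.read();
       }
   }
   ```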



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -200,6 +218,10 @@ public void reset() throws IOException {
 
 @Override
 public void close() throws IOException {
+if  (bufferedIn != null) {

Review Comment:
   Minor spacing issue:
   ```suggestion
   if (bufferedIn != null) {
   ```



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -215,15 +237,6 @@ private void formDelegate() throws IOException {
 delegate = new 
PerformanceTrackingInputStream(contentRepository.read(contentClaim), 
performanceTracker);
 StreamUtils.skip(delegate, claimOffset);
 currentOffset = claimOffset;
-
-if (markReadLimit > 0) {
-final int limitLeft = (int) (markReadLimit - currentOffset);
-if (limitLeft > 0) {
-final InputStream limitedIn = new 
LimitedInputStream(delegate, limitLeft);
-bufferedIn = new BufferedInputStream(limitedIn);
-bufferedIn.mark(limitLeft);
-}
-}

Review Comment:
   For clarification, is this removed so that the buffer is only created when 
calling `mark()`?



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/io/ContentClaimInputStream.java:
##
@@ -158,26 +155,47 @@ public boolean markSupported() {
 return true;
 }
 
+/**
+ * Marks the current position. Can be returned to with {@code reset()}.
+ *
+ * @see ContentClaimInputStream#reset()
+ * @param readlimit   hint on how much data should be buffered.
+ */
 @Override
 public void mark(final int readlimit) {
 markOffset = currentOffset;
 markReadLimit = readlimit;
-if (bufferedIn != null) {
-bufferedIn.mark(readlimit);
+if (bufferedIn == null) {
+try {
+bufferedIn = new BufferedInputStream(getDelegate());
+} catch (IOException ex) {
+throw new RuntimeException("Failed to read content claim!", 
ex);
+}
 }
+
+bufferedIn.mark(readlimit);
 }
 
+/**
+ * Resets to the last marked position.
+ *
+ * @see C

[jira] [Updated] (NIFI-11266) PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive

2023-03-27 Thread Nandor Soma Abonyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Soma Abonyi updated NIFI-11266:
--
Resolution: Resolved
Status: Resolved  (was: Patch Available)

> PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive
> 
>
> Key: NIFI-11266
> URL: https://issues.apache.org/jira/browse/NIFI-11266
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Baptiste Moisson
>Assignee: Zsihovszki Krisztina
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> It seems that the Google Drive Put, Fetch and List processors are not able to 
> perform actions on a SharedDrive. 
> Regarding the Google Drive API 
> ([https://developers.google.com/drive/api/v3/reference/files/list?apix_params=%7B%22includeTeamDriveItems%22%3Afalse%2C%22supportsTeamDrives%22%3Afalse%7D])
> , it seems that options like corpora, driveId, includeItemsFromAllDrives, 
> supportsAllDrives should be used to perform SharedDrive access. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11266) PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive

2023-03-27 Thread Nandor Soma Abonyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Soma Abonyi updated NIFI-11266:
--
Fix Version/s: 2.0.0
   1.21.0
   (was: 1.latest)
   (was: 2.latest)

> PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive
> 
>
> Key: NIFI-11266
> URL: https://issues.apache.org/jira/browse/NIFI-11266
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Baptiste Moisson
>Assignee: Zsihovszki Krisztina
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> It seems that the Google Drive Put, Fetch and List processors are not able to 
> perform actions on a SharedDrive. 
> Regarding the Google Drive API 
> ([https://developers.google.com/drive/api/v3/reference/files/list?apix_params=%7B%22includeTeamDriveItems%22%3Afalse%2C%22supportsTeamDrives%22%3Afalse%7D])
> , it seems that options like corpora, driveId, includeItemsFromAllDrives, 
> supportsAllDrives should be used to perform SharedDrive access. 





[jira] [Commented] (NIFI-11266) PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705638#comment-17705638
 ] 

ASF subversion and git services commented on NIFI-11266:


Commit 0b33ad8053b3bc33ea7c6b5b4d719781bcf2225f in nifi's branch 
refs/heads/support/nifi-1.x from krisztina-zsihovszki
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0b33ad8053 ]

NIFI-11266 PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a 
SharedDrive

This closes #7058

Reviewed-by: Mark Bathori 

Signed-off-by: Nandor Soma Abonyi 


> PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive
> 
>
> Key: NIFI-11266
> URL: https://issues.apache.org/jira/browse/NIFI-11266
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Baptiste Moisson
>Assignee: Zsihovszki Krisztina
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> It seems that the Google Drive Put, Fetch and List processors are not able to 
> perform actions on a SharedDrive. 
> Regarding the Google Drive API 
> ([https://developers.google.com/drive/api/v3/reference/files/list?apix_params=%7B%22includeTeamDriveItems%22%3Afalse%2C%22supportsTeamDrives%22%3Afalse%7D])
> , it seems that options like corpora, driveId, includeItemsFromAllDrives, 
> supportsAllDrives should be used to perform SharedDrive access. 





[jira] [Commented] (NIFI-11266) PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive

2023-03-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705469#comment-17705469
 ] 

ASF subversion and git services commented on NIFI-11266:


Commit fe2721786cdff9a3bdff59c6f318ac286887595a in nifi's branch 
refs/heads/main from krisztina-zsihovszki
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fe2721786c ]

NIFI-11266 PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a 
SharedDrive

This closes #7058

Reviewed-by: Mark Bathori 

Signed-off-by: Nandor Soma Abonyi 


> PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't access a SharedDrive
> 
>
> Key: NIFI-11266
> URL: https://issues.apache.org/jira/browse/NIFI-11266
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Baptiste Moisson
>Assignee: Zsihovszki Krisztina
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It seems that the Google Drive Put, Fetch and List processors are not able to 
> perform actions on a SharedDrive. 
> Regarding the Google Drive API 
> ([https://developers.google.com/drive/api/v3/reference/files/list?apix_params=%7B%22includeTeamDriveItems%22%3Afalse%2C%22supportsTeamDrives%22%3Afalse%7D])
> , it seems that options like corpora, driveId, includeItemsFromAllDrives, 
> supportsAllDrives should be used to perform SharedDrive access. 





[GitHub] [nifi] asfgit closed pull request #7058: NIFI-11266 PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't ac…

2023-03-27 Thread via GitHub


asfgit closed pull request #7058: NIFI-11266 PutGoogleDrive, ListGoogleDrive, 
FetchGoogleDrive can't ac…
URL: https://github.com/apache/nifi/pull/7058





[jira] [Commented] (NIFI-10982) Update org.springframework_spring-web to 6.0.0

2023-03-27 Thread Phil Lee (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705458#comment-17705458
 ] 

Phil Lee commented on NIFI-10982:
-

Thanks. I will pass that info to the cybersecurity people. We filled out a waiver 
for this [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127] since it 
triggered Twistlock gating for the NiFi docker build, and another team asked me 
to reach out to you.

> Update org.springframework_spring-web to 6.0.0
> --
>
> Key: NIFI-10982
> URL: https://issues.apache.org/jira/browse/NIFI-10982
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.19.1
>Reporter: Phil Lee
>Priority: Major
>
> Update org.springframework_spring-web from 5.3.24 to 6.0.0.  This will 
> remediate [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127]
> Twistlock scan reported this as critical severity vulnerability in NiFi 
> Toolkit (which is included in NiFi version 1.19.1).
> Impacted versions: <6.0.0
> Discovered: 2 days ago
> Published: more than 2 years ago
> Pivotal Spring Framework through 5.3.16 suffers from a potential remote code 
> execution (RCE) issue if used for Java deserialization of untrusted data. 
> Depending on how the library is implemented within a product, this issue may 
> or not occur, and authentication may be required. NOTE: the vendor's 
> position is that untrusted data is not an intended use case. The product's 
> behavior will not be changed because some users rely on deserialization of 
> trusted data.





[jira] [Commented] (NIFI-10982) Update org.springframework_spring-web to 6.0.0

2023-03-27 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705456#comment-17705456
 ] 

David Handermann commented on NIFI-10982:
-

Spring 6 requires Java 17 as a minimum version, so at this time it is not 
targeted for inclusion in NiFi 2.0, given that Java 11 is targeted as the 
minimum version.

> Update org.springframework_spring-web to 6.0.0
> --
>
> Key: NIFI-10982
> URL: https://issues.apache.org/jira/browse/NIFI-10982
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.19.1
>Reporter: Phil Lee
>Priority: Major
>
> Update org.springframework_spring-web from 5.3.24 to 6.0.0.  This will 
> remediate [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127]
> Twistlock scan reported this as critical severity vulnerability in NiFi 
> Toolkit (which is included in NiFi version 1.19.1).
> Impacted versions: <6.0.0
> Discovered: 2 days ago
> Published: more than 2 years ago
> Pivotal Spring Framework through 5.3.16 suffers from a potential remote code 
> execution (RCE) issue if used for Java deserialization of untrusted data. 
> Depending on how the library is implemented within a product, this issue may 
> or not occur, and authentication may be required. NOTE: the vendor's 
> position is that untrusted data is not an intended use case. The product's 
> behavior will not be changed because some users rely on deserialization of 
> trusted data.





[jira] [Commented] (NIFI-10982) Update org.springframework_spring-web to 6.0.0

2023-03-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705455#comment-17705455
 ] 

Joe Witt commented on NIFI-10982:
-

[~philiplee] If our only objection to moving to a base of Spring 6.latest was 
needing Java 11, then you can expect it will happen in the 2.x line. Whether it 
shows up in the 2.0.0 release itself will be a function of when we think that 
release is ready and whether someone has done the work. I don't think we're 
treating this as a blocker for NiFi 2.0 at this point, but generally speaking, 
we've shown that we take all such major dependencies pretty seriously.

> Update org.springframework_spring-web to 6.0.0
> --
>
> Key: NIFI-10982
> URL: https://issues.apache.org/jira/browse/NIFI-10982
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.19.1
>Reporter: Phil Lee
>Priority: Major
>
> Update org.springframework_spring-web from 5.3.24 to 6.0.0.  This will 
> remediate [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127]
> Twistlock scan reported this as critical severity vulnerability in NiFi 
> Toolkit (which is included in NiFi version 1.19.1).
> Impacted versions: <6.0.0
> Discovered: 2 days ago
> Published: more than 2 years ago
> Pivotal Spring Framework through 5.3.16 suffers from a potential remote code 
> execution (RCE) issue if used for Java deserialization of untrusted data. 
> Depending on how the library is implemented within a product, this issue may 
> or not occur, and authentication may be required. NOTE: the vendor's 
> position is that untrusted data is not an intended use case. The product's 
> behavior will not be changed because some users rely on deserialization of 
> trusted data.





[jira] [Commented] (NIFI-10982) Update org.springframework_spring-web to 6.0.0

2023-03-27 Thread Phil Lee (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705454#comment-17705454
 ] 

Phil Lee commented on NIFI-10982:
-

So when NiFi 2.0 gets released, can I expect NiFi will move to Spring 6?

> Update org.springframework_spring-web to 6.0.0
> --
>
> Key: NIFI-10982
> URL: https://issues.apache.org/jira/browse/NIFI-10982
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.19.1
>Reporter: Phil Lee
>Priority: Major
>
> Update org.springframework_spring-web from 5.3.24 to 6.0.0.  This will 
> remediate [CVE-2016-127|https://nvd.nist.gov/vuln/detail/CVE-2016-127]
> Twistlock scan reported this as critical severity vulnerability in NiFi 
> Toolkit (which is included in NiFi version 1.19.1).
> Impacted versions: <6.0.0
> Discovered: 2 days ago
> Published: more than 2 years ago
> Pivotal Spring Framework through 5.3.16 suffers from a potential remote code 
> execution (RCE) issue if used for Java deserialization of untrusted data. 
> Depending on how the library is implemented within a product, this issue may 
> or not occur, and authentication may be required. NOTE: the vendor's 
> position is that untrusted data is not an intended use case. The product's 
> behavior will not be changed because some users rely on deserialization of 
> trusted data.





[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7085: NIFI-11333: Do not allow components to be removed while a node is dis…

2023-03-27 Thread via GitHub


exceptionfactory commented on code in PR #7085:
URL: https://github.com/apache/nifi/pull/7085#discussion_r1149531073


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java:
##
@@ -638,21 +668,37 @@ private boolean isMutableRequest(final String method) {
 }
 }
 
-private boolean isDeleteConnection(final String method, final String 
uriPath) {
+private boolean isDeleteComponent(final String method, final String 
uriPath) {
 if (!HttpMethod.DELETE.equalsIgnoreCase(method)) {
 return false;
 }
 
-final boolean isConnectionUri = 
ConnectionEndpointMerger.CONNECTION_URI_PATTERN.matcher(uriPath).matches();
-return isConnectionUri;
+// Check if the URI indicates that a component should be deleted.
+// We cannot simply make our decision based on the fact that the 
request is a DELETE request.
+// This is because we do need to allow deletion of asynchronous 
requests, such as updating parameters, querying provenance, etc.
+// which create a request, poll until the request completes, and then 
deletes it. Additionally, we want to allow terminating
+// Processors, which is done by issuing a request to DELETE 
/processors//threads

Review Comment:
   On further consideration, this approach seems the most maintainable, since 
it is rare to add new types of components to the framework.
   
   After reviewing, it looks like Labels, Parameter Providers, and Flow 
Registry Clients should also be added to the list of components to check.






[jira] [Commented] (NIFI-11331) InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive

2023-03-27 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705437#comment-17705437
 ] 

David Handermann commented on NIFI-11331:
-

Thanks for the clarification [~v1d3o], that is helpful, and makes sense given 
the other constraints. I agree that any potential implementation of NIFI-9894 
should consider handling sensitive values as well, based on your description.

> InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive
> -
>
> Key: NIFI-11331
> URL: https://issues.apache.org/jira/browse/NIFI-11331
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Vince Lombardo
>Priority: Major
>
> This request is for adding a property in the InvokeHTTP processor that can be 
> marked as sensitive.
> The use case for this is that some APIs require username and password 
> credentials to be sent as part of the body. Since the only current way to 
> populate the body is through the flowfile, this means that there is no way to 
> have the values be treated as sensitive by NiFi.
> I envision a new property, Body Content, with the default being that if no 
> value is set, then it uses the flowfile as the processor currently does. If 
> possible, then this property will be allowed to optionally be made sensitive. 
> Not sure if that is possible to make a built in property optionally 
> sensitive. Otherwise there may need to be two properties, one for body 
> content that is sensitive and a plain body content that can have EL in it. 
> Either can be set independently, but if they are both set, they are appended 
> together. Lastly a third property would be a dropdown that lets you indicate 
> whether those values are used instead of or append to the flowfile. So that 
> dropdown is only considered when there is data within either of the body 
> contents.
> I am aware of the fact that there are other related requests that have been 
> closed in favor of issue NIFI-9894, but I created this issue separately 
> because I believe the whole sensitivity issue is a large need and I did not 
> see any of the other issues address that.  So this issue could be 
> consolidated into NIFI-9894, with hopefully a final solution that can capture 
> the needs from this issue allong with the others.
> Actually, the only reason I included having both a non-sensitive and 
> sensitive property is to help with those needs of the other issues. If for 
> some reason, NIFI-9894 cannot be done because of the stated problem of 
> potential memory consumption issues, my need is really only for having a 
> Sensitive Body Content attribute that, if populated, is used instead of the 
> flowfile. Once I am able to log in using that, the rest of my uses for 
> InvokeHTTP are met by the current implementation of the processor.





[jira] [Updated] (NIFI-11345) TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds

2023-03-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11345:

Fix Version/s: 1.latest
   2.latest
   Status: Patch Available  (was: Open)

> TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds
> -
>
> Key: NIFI-11345
> URL: https://issues.apache.org/jira/browse/NIFI-11345
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{TestPutIcebergWithHiveCatalog}} runs multiple parameterized test 
> methods that exercise three supported formats: Avro, ORC, and Parquet. Each 
> test run is expensive, taking around 5 seconds, resulting in the entire class 
> taking over 60 seconds to complete under optimal circumstances. Instead of 
> running each method with all three formats, each method should be limited to 
> one format to reduce the overall runtime.





[jira] [Commented] (NIFI-11331) InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive

2023-03-27 Thread Vince Lombardo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705434#comment-17705434
 ] 

Vince Lombardo commented on NIFI-11331:
---

Yes, I do not mean that the flowfile would be sensitive. (I was trying to make 
it as flexible as possible in case someone needed the non-sensitive flowfile 
content to also be used in conjunction with the sensitive attribute. However, I 
myself do not need that functionality, so to keep things simpler you can 
disregard anything above about concatenating the flowfile and just concentrate 
on the sensitive-attribute part of the request.)

So, using an attribute that can be marked sensitive to fill the body would do 
what I need. In that regard, if NIFI-9894 helps fulfill that, then yes, this 
would depend upon NIFI-9894.

> InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive
> -
>
> Key: NIFI-11331
> URL: https://issues.apache.org/jira/browse/NIFI-11331
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Vince Lombardo
>Priority: Major
>
> This request is for adding a property in the InvokeHTTP processor that can be 
> marked as sensitive.
> The use case for this is that some APIs require username and password 
> credentials to be sent as part of the body. Since the only current way to 
> populate the body is through the flowfile, this means that there is no way to 
> have the values be treated as sensitive by NiFi.
> I envision a new property, Body Content, with the default being that if no 
> value is set, then it uses the flowfile as the processor currently does. If 
> possible, then this property will be allowed to optionally be made sensitive. 
> Not sure if that is possible to make a built in property optionally 
> sensitive. Otherwise there may need to be two properties, one for body 
> content that is sensitive and a plain body content that can have EL in it. 
> Either can be set independently, but if they are both set, they are appended 
> together. Lastly a third property would be a dropdown that lets you indicate 
> whether those values are used instead of or append to the flowfile. So that 
> dropdown is only considered when there is data within either of the body 
> contents.
> I am aware of the fact that there are other related requests that have been 
> closed in favor of issue NIFI-9894, but I created this issue separately 
> because I believe the whole sensitivity issue is a large need and I did not 
> see any of the other issues address that.  So this issue could be 
> consolidated into NIFI-9894, with hopefully a final solution that can capture 
> the needs from this issue along with the others.
> Actually, the only reason I included having both a non-sensitive and 
> sensitive property is to help with those needs of the other issues. If for 
> some reason, NIFI-9894 cannot be done because of the stated problem of 
> potential memory consumption issues, my need is really only for having a 
> Sensitive Body Content attribute that, if populated, is used instead of the 
> flowfile. Once I am able to log in using that, the rest of my uses for 
> InvokeHTTP are met by the current implementation of the processor.





[GitHub] [nifi] exceptionfactory opened a new pull request, #7086: NIFI-11345 Adjust Iceberg test to avoid expensive duplicative runs

2023-03-27 Thread via GitHub


exceptionfactory opened a new pull request, #7086:
URL: https://github.com/apache/nifi/pull/7086

   # Summary
   
   [NIFI-11345](https://issues.apache.org/jira/browse/NIFI-11345) Adjusts 
`TestPutIcebergWithHiveCatalog` to reduce the total running time from over 60 
seconds to around 20 seconds. The Iceberg module is one of the slowest 
extension modules for unit tests due to this test class. Instead of running the 
same test method for all three supported formats (Avro, ORC, and Parquet), the 
changes use a single format for a single method. This ensures the same basic 
test behavior while exercising each format across different test methods.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Updated] (NIFI-11232) FlowFileAccessException using ContentClaimInputStream

2023-03-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-11232:

Fix Version/s: 2.0.0
   1.21.0

> FlowFileAccessException using ContentClaimInputStream
> -
>
> Key: NIFI-11232
> URL: https://issues.apache.org/jira/browse/NIFI-11232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Christian Wahl
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: TestContentClaimInputStream.java
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> NIFI-10888 introduced a BufferedInputStream inside of the 
> ContentClaimInputStream to speed up rewinding in small flow files (<1MB).
> Under some circumstances it can happen in reset that the delegate stream is 
> closed and a new delegate stream is created, but the bufferedIn is not 
> recreated with the new delegate.
> During the next read this leads to a situation where it tries to read from 
> bufferedIn and bufferedIn in turn tries to read from the old and closed 
> delegate stream causing an IOException or FlowFileAccessException.
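
The failure mode described can be reproduced outside NiFi with a short, 
self-contained sketch (the stream class below is a stand-in for a repository 
stream, not NiFi code): reading through a retained BufferedInputStream after 
its underlying delegate was closed fails immediately.

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StaleBufferDemo {
    // Minimal stream that rejects reads once closed, unlike ByteArrayInputStream,
    // whose close() is a no-op.
    static final class ClosableStream extends InputStream {
        private boolean closed;
        private int pos;
        private final byte[] data = {1, 2, 3};

        @Override
        public int read() throws IOException {
            if (closed) {
                throw new IOException("Stream is closed");
            }
            return pos < data.length ? data[pos++] & 0xFF : -1;
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    // Simulates the bug: the delegate is closed and replaced, but the buffered
    // wrapper around the old delegate is kept and read from.
    public static boolean readFailsWithStaleBuffer() {
        final ClosableStream delegate = new ClosableStream();
        final BufferedInputStream bufferedIn = new BufferedInputStream(delegate);
        delegate.close(); // e.g. reset() re-created the delegate without re-wrapping
        try {
            bufferedIn.read(); // buffer is empty, so this hits the closed delegate
            return false;
        } catch (final IOException e) {
            return true;
        }
    }

    public static void main(final String[] args) {
        System.out.println(readFailsWithStaleBuffer()); // prints: true
    }
}
```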





[jira] [Created] (NIFI-11345) TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds

2023-03-27 Thread David Handermann (Jira)
David Handermann created NIFI-11345:
---

 Summary: TestPutIcebergWithHiveCatalog Runs Longer than 60 Seconds
 Key: NIFI-11345
 URL: https://issues.apache.org/jira/browse/NIFI-11345
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.20.0
Reporter: David Handermann
Assignee: David Handermann


The {{TestPutIcebergWithHiveCatalog}} runs multiple parameterized test methods 
that exercise three supported formats: Avro, ORC, and Parquet. Each test run is 
expensive, taking around 5 seconds, resulting in the entire class taking over 
60 seconds to complete under optimal circumstances. Instead of running each 
method with all three formats, each method should be limited to one format to 
reduce the overall runtime.
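
The reshuffling described above can be sketched abstractly: instead of every 
method running every format, each method pins one format while the suite as a 
whole still covers all of them. The names below are illustrative, not the real 
test class.

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the one-format-per-method idea: 3 methods x 1 format instead of
// 3 methods x 3 formats, cutting runtime to roughly a third.
public class FormatCoverage {
    enum FileFormat { AVRO, ORC, PARQUET }

    // Each stand-in "test method" exercises a single format.
    static FileFormat partitionTestFormat()  { return FileFormat.AVRO; }
    static FileFormat snapshotTestFormat()   { return FileFormat.ORC; }
    static FileFormat validationTestFormat() { return FileFormat.PARQUET; }

    // Together the methods still cover every supported format once.
    static boolean allFormatsCovered() {
        final Set<FileFormat> covered = EnumSet.of(
                partitionTestFormat(), snapshotTestFormat(), validationTestFormat());
        return covered.equals(EnumSet.allOf(FileFormat.class));
    }
}
```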





[jira] [Commented] (NIFI-11341) ListenUDPRecord truncating data

2023-03-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705425#comment-17705425
 ] 

Joe Witt commented on NIFI-11341:
-

Talked with [~markap14]. He is working on this one; it is related to a change 
made in https://issues.apache.org/jira/browse/NIFI-10887

> ListenUDPRecord truncating data
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> The scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However, I'm seeing roughly one bulletin every few seconds, which 
> indicates that a small proportion of messages is getting truncated.





[jira] [Updated] (NIFI-11341) ListenUDPRecord truncating data

2023-03-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-11341:

Fix Version/s: 2.0.0
   1.21.0

> ListenUDPRecord truncating data
> ---
>
> Key: NIFI-11341
> URL: https://issues.apache.org/jira/browse/NIFI-11341
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: Peter Kimberley
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
> Attachments: NiFi_Flow.json, image.png
>
>
> In our environment, we use {{ListenUDPRecord}} to collect Syslog messages. 
> This processor is followed by a {{PartitionRecord}} processor that populates 
> an attribute for routing. In release {*}1.19.0{*}, this flow worked without 
> issue. In *1.20.0* though, I am seeing intermittent message truncation in 
> {{{}PartitionRecord{}}}, with bulletin messages like the following appearing 
> regularly:
> {noformat}
> PartitionRecord[id=03ea67a7-0b9c-1c9f--8d7e5185] Failed to partition 
> FlowFile[filename=ca9c3e11-9365-4ff9-9499-29522fc0cab7]: 
> com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' 
> (code 44)): expected a value
> at [Source: (org.apache.nifi.stream.io.NonCloseableInputStream); line: 1, 
> column: 381]{noformat}
>  
> An example message (note the absence of a Syslog header):
> {noformat}
> itor] [Unit test] Alarm check cfg warning threshold=75 critical threshold=85 
> warning alarm <...>{noformat}
> {{ListenUDPRecord}} properties are attached.
> h3. Reproduction
> The attached minimal flow illustrates this setup.
>  
> To reproduce this issue, generate improperly-formatted syslog and send to 
> {{ListenUDPRecord}}.
>  
> In my environment, I have two syslog sources feeding this test cluster. 
> The scenario is as follows:
>  # First source (compliant Syslog format) feeds in.
>  # Flow is OK - no bulletins.
>  # Activate second source, which is of an invalid Syslog format and flows to 
> the {{parse.failure}} relationship of {{ListenUDPRecord}}. This is expected - 
> I deal with this gracefully.
>  # Bulletins start firing in {{PartitionRecord}} and the first source starts 
> getting truncated randomly.
> Overall, the majority of messages from the well-formed source make it 
> through. However, I'm seeing roughly one bulletin every few seconds, which 
> indicates that a small proportion of messages is getting truncated.





[jira] [Commented] (NIFI-11331) InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive

2023-03-27 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705423#comment-17705423
 ] 

David Handermann commented on NIFI-11331:
-

Thanks for describing the issue, [~v1d3o]. Can you provide a bit more detail on 
the expected behavior? What should "sensitive" mean in the context of request 
body content for InvokeHTTP?

The FlowFile content is never considered sensitive at a framework level. 
Permission to view FlowFile content is controlled through configurable policies.

This request does make sense in the context of NIFI-9894, if support were 
implemented to configure the request body from attributes. In that case, being 
able to reference sensitive parameters in the request body formatting could 
support such a use case. From that perspective, is this particular improvement 
dependent on implementing NIFI-9894?

> InvokeHTTP: Add a property for the HTTP Body that can be marked sensitive
> -
>
> Key: NIFI-11331
> URL: https://issues.apache.org/jira/browse/NIFI-11331
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.20.0
>Reporter: Vince Lombardo
>Priority: Major
>
> This request is for adding a property in the InvokeHTTP processor that can be 
> marked as sensitive.
> The use case for this is that some APIs require username and password 
> credentials to be sent as part of the body. Since the only current way to 
> populate the body is through the flowfile, this means that there is no way to 
> have the values be treated as sensitive by NiFi.
> I envision a new property, Body Content, with the default being that if no 
> value is set, the processor uses the flowfile as it currently does. If 
> possible, this property could optionally be marked sensitive. I am not sure 
> whether a built-in property can be made optionally sensitive. Otherwise there 
> may need to be two properties, one for body content that is sensitive and a 
> plain body content that can have EL in it. 
> Either can be set independently, but if they are both set, they are appended 
> together. Lastly a third property would be a dropdown that lets you indicate 
> whether those values are used instead of or append to the flowfile. So that 
> dropdown is only considered when there is data within either of the body 
> contents.
> I am aware of the fact that there are other related requests that have been 
> closed in favor of issue NIFI-9894, but I created this issue separately 
> because I believe the whole sensitivity issue is a large need and I did not 
> see any of the other issues address that.  So this issue could be 
> consolidated into NIFI-9894, with hopefully a final solution that can capture 
> the needs from this issue along with the others.
> Actually, the only reason I included having both a non-sensitive and 
> sensitive property is to help with the needs of the other issues. If for 
> some reason, NIFI-9894 cannot be done because of the stated problem of 
> potential memory consumption issues, my need is really only for having a 
> Sensitive Body Content attribute that, if populated, is used instead of the 
> flowfile. Once I am able to log in using that, the rest of my uses for 
> InvokeHTTP are met by the current implementation of the processor.





[jira] [Updated] (NIFI-11189) Restarting NiFi after failing to upgrade flow can cause NiFi to fail startup

2023-03-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-11189:

Status: Resolved  (was: Closed)

> Restarting NiFi after failing to upgrade flow can cause NiFi to fail startup
> 
>
> Key: NIFI-11189
> URL: https://issues.apache.org/jira/browse/NIFI-11189
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When a user updates a group that is under Version Control from one version to 
> another, sometimes the destination of a connection needs to point to a new 
> component: the current destination gets deleted while the new component hasn't 
> yet been created. To handle this, we create a temporary funnel and set the 
> connection’s destination to that funnel.
>  
> Once all components are created, we then move the connections to their 
> intended destination. Then delete the temporary funnel.
>  
> We have an issue, however: if for some reason we fail to complete the flow 
> upgrade, that funnel may remain, and on restart we do not handle the case 
> where this temporary funnel already exists in the flow definition very well.
> This can result in an error such as:
> {code:java}
> 2023-02-15 16:40:03,347 WARN [main] org.eclipse.jetty.webapp.WebAppContext 
> Failed startup of context 
> o.e.j.w.WebAppContext@345af277{nifi-api,/nifi-api,file:///opt/nifi-1.18.0.2.1.5.1001-1/work/jetty/nifi-web-api-1.18.0.2.1.5.1001-1.war/webapp/,UNAVAILABLE}{./work/nar/extensions/nifi-server-nar-1.18.0.2.1.5.1001-1.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.18.0.2.1.5.1001-1.war}
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalArgumentException: Connection has a destination with 
> identifier c594bee4-b49e-34a3-8795-732d890df61f but no component could be 
> found in the Process Group with a corresponding identifier
>         at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:454)
>         at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:205)
>         at 
> org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:42)
>         at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1525)
>         at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104)
>         at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:837)
>         at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:558)
>         at 
> org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:67)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073)
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:765)
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
>         at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
>         at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916)
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
>         at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
>         at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
>         at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>         at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
>         at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
>         at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
>         at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>         at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
>         at 
> org.eclipse.jetty.server.handler.Abs

[jira] [Commented] (NIFI-11343) Improve the flexibility and compatibility of OIDC integration

2023-03-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705417#comment-17705417
 ] 

Joe Witt commented on NIFI-11343:
-

Removed fix versions. They can be set once a determination is made on 
when/where they'll land.

> Improve the flexibility and compatibility of OIDC integration
> -
>
> Key: NIFI-11343
> URL: https://issues.apache.org/jira/browse/NIFI-11343
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Security
>Affects Versions: 1.20.0
> Environment: JDK: 11
> Browser: Chrome / Firefox / Edge
> Configuration of NiFi: OIDC with AWS Cognito
>Reporter: Hung Nguyen Thuan
>Priority: Minor
> Attachments: Superset_OIDC.png
>
>
> There are some OIDC providers that do not support OIDC RP-Initiated Logout, 
> such as AWS Cognito. Therefore, when I try to integrate AWS Cognito with NiFi, 
> the login function works well but the logout function does not. It would be 
> nice if Apache NiFi could provide a way to configure OIDC more flexibly, 
> compatible with more OIDC providers. For example, the Apache Superset 
> configuration (or Flask App Builder) allows entering OIDC configuration as in 
> the attached image. Users can define authorize/request/refresh/logout URLs if 
> they are not returned from 
> {code:java}
> https:///.well-known/openid-configuration{code}
> Alternatively, NiFi could add new properties to configure logout/refresh token URLs.





[jira] [Updated] (NIFI-11333) Disable removing components unless all nodes connected

2023-03-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-11333:

Fix Version/s: 1.21.0
   (was: 1.latest)

> Disable removing components unless all nodes connected
> --
>
> Key: NIFI-11333
> URL: https://issues.apache.org/jira/browse/NIFI-11333
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.21.0, 2.latest
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 1.16 we allowed users to start updating flows while nodes are disconnected. 
> This has been greatly helpful. However, it can lead to a problem: when a user 
> removes a connection and there's data queued on a disconnected node, that 
> disconnected node can no longer rejoin the cluster. Instead, it remains 
> disconnected; and if the node is shut down, it cannot be restarted without 
> manually changing nifi.properties to turn it from a clustered node into a 
> standalone node, restarting, bleeding the data out, shutting down, manually 
> updating the properties to make it a clustered node again, and restarting.
> This is painful. Instead, we should simply disallow the removal of any 
> component unless all nodes in the cluster are connected. Components can still 
> be added, started, stopped, and disabled. Just not removed.





[jira] [Updated] (NIFI-11343) Improve the flexibility and compatibility of OIDC integration

2023-03-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-11343:

Fix Version/s: (was: 1.21.0)
   (was: 2.latest)

> Improve the flexibility and compatibility of OIDC integration
> -
>
> Key: NIFI-11343
> URL: https://issues.apache.org/jira/browse/NIFI-11343
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Security
>Affects Versions: 1.20.0
> Environment: JDK: 11
> Browser: Chrome / Firefox / Edge
> Configuration of NiFi: OIDC with AWS Cognito
>Reporter: Hung Nguyen Thuan
>Priority: Minor
> Attachments: Superset_OIDC.png
>
>
> There are some OIDC providers that do not support OIDC RP-Initiated Logout, 
> such as AWS Cognito. Therefore, when I try to integrate AWS Cognito with NiFi, 
> the login function works well but the logout function does not. It would be 
> nice if Apache NiFi could provide a way to configure OIDC more flexibly, 
> compatible with more OIDC providers. For example, the Apache Superset 
> configuration (or Flask App Builder) allows entering OIDC configuration as in 
> the attached image. Users can define authorize/request/refresh/logout URLs if 
> they are not returned from 
> {code:java}
> https:///.well-known/openid-configuration{code}
> Alternatively, NiFi could add new properties to configure logout/refresh token URLs.





[GitHub] [nifi] nandorsoma commented on a diff in pull request #7058: NIFI-11266 PutGoogleDrive, ListGoogleDrive, FetchGoogleDrive can't ac…

2023-03-27 Thread via GitHub


nandorsoma commented on code in PR #7058:
URL: https://github.com/apache/nifi/pull/7058#discussion_r1149424157


##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/drive/PutGoogleDrive.java:
##
@@ -187,7 +187,7 @@ public class PutGoogleDrive extends AbstractProcessor 
implements GoogleDriveTrai
 REL_FAILURE
 )));
 
-public static final String MULTIPART_UPLOAD_URL = 
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart";
+public static final String MULTIPART_UPLOAD_URL = 
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&supportsAllDrives=true";

Review Comment:
   It doesn't seem right that we need to include this flag while 
`.setSupportsAllDrives(true)` is already set on the `driveRequest`. Could you 
open a ticket for the Drive API team to investigate it?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-11333) Disable removing components unless all nodes connected

2023-03-27 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-11333:
--
Assignee: Mark Payne
  Status: Patch Available  (was: Open)

> Disable removing components unless all nodes connected
> --
>
> Key: NIFI-11333
> URL: https://issues.apache.org/jira/browse/NIFI-11333
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 1.16 we allowed users to start updating flows while nodes are disconnected. 
> This has been greatly helpful. However, it can lead to a problem: when a user 
> removes a connection and there's data queued on a disconnected node, that 
> disconnected node can no longer rejoin the cluster. Instead, it remains 
> disconnected; and if the node is shut down, it cannot be restarted without 
> manually changing nifi.properties to turn it from a clustered node into a 
> standalone node, restarting, bleeding the data out, shutting down, manually 
> updating the properties to make it a clustered node again, and restarting.
> This is painful. Instead, we should simply disallow the removal of any 
> component unless all nodes in the cluster are connected. Components can still 
> be added, started, stopped, and disabled. Just not removed.





[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1503: MINIFICPP-2039 Dust off minificontroller

2023-03-27 Thread via GitHub


fgerlits commented on code in PR #1503:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1503#discussion_r1149234480


##
encrypt-config/tests/ConfigFileEncryptorTests.cpp:
##
@@ -77,7 +77,7 @@ TEST_CASE("ConfigFileEncryptor can encrypt the sensitive 
properties", "[encrypt-
 uint32_t num_properties_encrypted = 
encryptSensitivePropertiesInFile(test_file, KEY);
 
 REQUIRE(num_properties_encrypted == 1);
-REQUIRE(test_file.size() == 107);
+REQUIRE(test_file.size() == 110);

Review Comment:
   never mind, I have found the 3 new settings in `minifi.properties`






[jira] [Created] (NIFI-11344) Fips support for MiNiFi Java

2023-03-27 Thread Ferenc Erdei (Jira)
Ferenc Erdei created NIFI-11344:
---

 Summary: Fips support for MiNiFi Java
 Key: NIFI-11344
 URL: https://issues.apache.org/jira/browse/NIFI-11344
 Project: Apache NiFi
  Issue Type: Task
  Components: MiNiFi
Reporter: Ferenc Erdei


Add Fips support for MiNiFi Java





[jira] [Assigned] (NIFI-11344) Fips support for MiNiFi Java

2023-03-27 Thread Ferenc Erdei (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Erdei reassigned NIFI-11344:
---

Assignee: Ferenc Erdei

> Fips support for MiNiFi Java
> 
>
> Key: NIFI-11344
> URL: https://issues.apache.org/jira/browse/NIFI-11344
> Project: Apache NiFi
>  Issue Type: Task
>  Components: MiNiFi
>Reporter: Ferenc Erdei
>Assignee: Ferenc Erdei
>Priority: Minor
>  Labels: minifi-java
>
> Add Fips support for MiNiFi Java





[jira] [Updated] (NIFI-11327) Add Export/Import All - NiFi CLI - NiFi Registry

2023-03-27 Thread Timea Barna (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timea Barna updated NIFI-11327:
---
Description: 
In NiFi Toolkit, in the CLI, we currently have the following commands available:

registry list-buckets

registry list-flows

registry list-flow-versions

registry export-flow-version

We should have a command registry export-all-flows that does the following:

List all the buckets; for each bucket, list all flows; for each flow, list all 
versions and export each version. All files should land in a target directory 
provided as an argument of the command.

We also currently have the following commands:

registry create-bucket

registry create-flow

registry import-flow-version

We should have a command registry import-all-flows that does the following:

It takes a directory as input (the one created by the export-all-flows 
command), and goes through the files to create the corresponding buckets, flows, 
and flow versions.
The original author, bucket id, and flow id need to be kept.

Use cases:
* use case 1: NiFi 1 -> connecting to NiFi Registry 1, "re-initialising" an 
existing NiFi Registry, the NiFi Registry does not change, only its 
configuration.
- export everything
- change the NR flow definition repo backend from local FS to database (for 
example), or from git to database, etc
- import everything
the existing NiFi instance should not see any change

* use case 2: NiFi 1 -> Registry 1, NiFi 2 -> Registry 2, Disaster Recovery 
kind of thing between two different sites.
- export everything from NR1
- import everything into NR2
- If the import into NR2 is adding new versions, then NiFi 2 should be able to 
update an existing PG to a newer version of the flow.

  was:
In NiFi Toolkit, in the CLI, we currently have the following commands available:

registry list-buckets

registry list-flows

registry list-flow-versions

registry export-flow-version

We should have a command registry export-all-flows that does the following:

List all the buckets; for each bucket, list all flows; for each flow, list all 
versions and export each version. All files should land in a target directory 
provided as an argument of the command.

We also currently have the following commands:

registry create-bucket

registry create-flow

registry import-flow-version

We should have a command registry import-all-flows that does the following:

It takes a directory as input (the one created by the export-all-flows 
command), and goes through the files to create the corresponding buckets, flows, 
and flow versions.
The original author needs to be kept.


> Add Export/Import All - NiFi CLI - NiFi Registry
> 
>
> Key: NIFI-11327
> URL: https://issues.apache.org/jira/browse/NIFI-11327
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Timea Barna
>Assignee: Timea Barna
>Priority: Major
>
> In NiFi Toolkit, in the CLI, we currently have the following commands 
> available:
> registry list-buckets
> registry list-flows
> registry list-flow-versions
> registry export-flow-version
> We should have a command registry export-all-flows that does the following:
> List all the buckets; for each bucket, list all flows; for each flow, list 
> all versions and export each version. All files should land in a target 
> directory provided as an argument of the command.
> We also currently have the following commands:
> registry create-bucket
> registry create-flow
> registry import-flow-version
> We should have a command registry import-all-flows that does the following:
> It takes a directory as input (the one created by the export-all-flows 
> command), and goes through the files to create the corresponding buckets, 
> flows, and flow versions.
> The original author, bucket id, and flow id need to be kept.
> Use cases:
> * use case 1: NiFi 1 -> connecting to NiFi Registry 1, "re-initialising" an 
> existing NiFi Registry, the NiFi Registry does not change, only its 
> configuration.
> - export everything
> - change the NR flow definition repo backend from local FS to database (for 
> example), or from git to database, etc
> - import everything
> the existing NiFi instance should not see any change
> * use case 2: NiFi 1 -> Registry 1, NiFi 2 -> Registry 2, Disaster Recovery 
> kind of thing between two different sites.
> - export everything from NR1
> - import everything into NR2
> - If the import into NR2 is adding new versions, then NiFi 2 should be able 
> to update an existing PG to a newer version of the flow.
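
The import side of this can be sketched as a planning step (the directory layout `<root>/<bucket>/<flow>/<version>.json` is an assumption for illustration, not the actual export format): walk the export directory bucket by bucket and flow by flow, importing versions in ascending order so the version history is preserved.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ImportPlanner {
    // Assumed layout: <root>/<bucket>/<flow>/<version>.json -- a sketch of
    // the ordering logic only, not the real export-all-flows file format.
    static List<String> plan(Path root) throws IOException {
        List<String> steps = new ArrayList<>();
        try (Stream<Path> buckets = Files.list(root)) {
            for (Path bucket : buckets.sorted().toList()) {
                steps.add("create-bucket " + bucket.getFileName());
                try (Stream<Path> flows = Files.list(bucket)) {
                    for (Path flow : flows.sorted().toList()) {
                        steps.add("create-flow " + flow.getFileName());
                        try (Stream<Path> versions = Files.list(flow)) {
                            // Lexicographic sort is a simplification; real
                            // version numbers would need numeric ordering.
                            for (Path version : versions.sorted().toList()) {
                                steps.add("import-flow-version " + version.getFileName());
                            }
                        }
                    }
                }
            }
        }
        return steps;
    }
}
```

Each step would map to the corresponding existing CLI command; preserving the original bucket id, flow id, and author would additionally require carrying those fields through the exported files.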


