[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1166271912


##
libminifi/include/core/PropertyValidation.h:
##
@@ -87,22 +83,23 @@ class ValidationResult {
 
 class PropertyValidator {
  public:
-  PropertyValidator(std::string name) // NOLINT
-  : name_(std::move(name)) {
+  explicit constexpr PropertyValidator(std::string_view name)

Review Comment:
   Thanks for raising this!  I have removed the stored `string_view`s in 
2dd29246eb4a6899b916e80268d48b553f491cce.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7173: NIFI-11439 - correct provenance reporting parameter

2023-04-13 Thread via GitHub


exceptionfactory commented on code in PR #7173:
URL: https://github.com/apache/nifi/pull/7173#discussion_r1166213682


##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/storage/FetchGCSObject.java:
##
@@ -273,7 +275,15 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 
 final long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - 
startNanos);
 getLogger().info("Successfully retrieved GCS Object for {} in {} 
millis; routing to success", new Object[]{flowFile, millis});
-session.getProvenanceReporter().fetch(flowFile, "https://" + bucketName + ".storage.googleapis.com/" + key, millis);
+
+String transitUri;
+try {
+final URL url = new URL(storage.getOptions().getHost());
+transitUri = String.format("%s://%s.%s/%s", url.getProtocol(), 
bucketName, url.getHost(), key);
+} catch (MalformedURLException e) {
+transitUri = e.getClass().getSimpleName();
+}

Review Comment:
   Using `URI.create()` avoids the checked `MalformedURLException`, and since 
the Storage API URL must be valid for the request to succeed, I recommend the 
following approach:
   ```suggestion
   final URI storageApiUri = URI.create(storage.getOptions().getHost());
   final String transitUri = String.format("%s://%s.%s/%s", storageApiUri.getScheme(), bucketName, storageApiUri.getHost(), key);
   ```
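   A quick way to see the difference being suggested: `URI.create()` throws an 
unchecked `IllegalArgumentException` rather than the checked 
`MalformedURLException`, so no try/catch is needed when the configured host is 
known to be valid. Below is a minimal standalone sketch of the suggested 
transit-URI construction; the host, bucket, and key values are illustrative 
only, not taken from the processor code.

   ```java
   import java.net.URI;

   public class TransitUriDemo {
       // Mirrors the suggested approach: derive scheme and host from the
       // configured Storage API URL instead of hard-coding "https://...".
       static String buildTransitUri(String storageApiHost, String bucketName, String key) {
           final URI storageApiUri = URI.create(storageApiHost); // unchecked, no try/catch needed
           return String.format("%s://%s.%s/%s",
                   storageApiUri.getScheme(), bucketName, storageApiUri.getHost(), key);
       }

       public static void main(String[] args) {
           // Default public endpoint; a custom endpoint override flows through the same code.
           System.out.println(buildTransitUri("https://storage.googleapis.com", "my-bucket", "path/object.txt"));
           // prints https://my-bucket.storage.googleapis.com/path/object.txt
       }
   }
   ```

   Note that `URI.getHost()` does not include a port, so a custom endpoint with 
an explicit port would need extra handling.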



##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/storage/PutGCSObject.java:
##
@@ -542,9 +544,15 @@ public void process(InputStream rawIn) throws IOException {
 }
 session.transfer(flowFile, REL_SUCCESS);
 final long millis = 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
-final String url = "https://" + bucket + ".storage.googleapis.com/" + key;
 
-session.getProvenanceReporter().send(flowFile, url, millis);
+String transitUri;
+try {
+final URL url = new URL(storage.getOptions().getHost());
+transitUri = String.format("%s://%s.%s/%s", url.getProtocol(), 
bucket, url.getHost(), key);
+} catch (MalformedURLException e) {
+transitUri = e.getClass().getSimpleName();
+}

Review Comment:
   It seems like the implementation could be moved to a protected 
`getTransitUri()` method in `AbstractGCSProcessor`, or the change could be 
implemented in both classes.
   ```suggestion
   final URI storageApiUri = URI.create(storage.getOptions().getHost());
   final String transitUri = String.format("%s://%s.%s/%s", storageApiUri.getScheme(), bucket, storageApiUri.getHost(), key);
   ```
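   The shared-helper refactor mentioned above could look roughly like the 
sketch below. The class and method names here are hypothetical illustrations, 
not the actual `AbstractGCSProcessor` API.

   ```java
   import java.net.URI;

   // Hypothetical sketch of hoisting the transit-URI logic into a common base
   // class so FetchGCSObject and PutGCSObject do not duplicate it.
   abstract class GcsTransitUriSupport {
       protected String getTransitUri(String storageApiHost, String bucketName, String key) {
           final URI uri = URI.create(storageApiHost);
           return String.format("%s://%s.%s/%s", uri.getScheme(), bucketName, uri.getHost(), key);
       }
   }

   class FetchSketch extends GcsTransitUriSupport {
       String transitUriFor(String host, String bucket, String key) {
           return getTransitUri(host, bucket, key); // same call-site shape in both processors
       }
   }
   ```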






[GitHub] [nifi] greyp9 opened a new pull request, #7173: NIFI-11439 - correct provenance reporting parameter

2023-04-13 Thread via GitHub


greyp9 opened a new pull request, #7173:
URL: https://github.com/apache/nifi/pull/7173

   In the previous PR, the adjustment of the URL supplied to the provenance 
reporting subsystem was missed.
   
   # Summary
   
   [NIFI-11439](https://issues.apache.org/jira/browse/NIFI-11439)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [x] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 11
 - [x] JDK 17
   
   ### Licensing
   
   - [x] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [x] Documentation formatting appears as expected in rendered files
   





[jira] [Updated] (NIFI-11449) add autocommit property to PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Summary: add autocommit property to PutDatabaseRecord processor  (was: add 
autocommit property to control commit in PutDatabaseRecord processor)

> add autocommit property to PutDatabaseRecord processor
> --
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver or Dremio-JDBC-Driver to write 
> to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> _*Improving this processor will allow Nifi to be the main tool to ingest data 
> into these new Technologies. So we don't have to deal with another tool to do 
> so.*_
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver or Dremio JDBC driver to write 
to an Iceberg catalog, it disables the autocommit feature. This leads to errors 
such as "{*}Catalog only supports writes using autocommit: iceberg{*}".

To fix this issue, an autocommit property needs to be added to the processor so 
that the feature can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be widely used with a bigger range of databases.

_*Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool to 
do so.*_

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino-JDBC-Driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added in the processor to 
be enabled/disabled.
enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the process to be widely used with bigger range of databases.

_*Improving this processor will allow Nifi to be the main tool to ingest data 
into these new Technologies. So we don't have to deal with another tool to do 
so.*_

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver or Dremio-JDBC-Driver to write 
> to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> _*Improving this processor will allow Nifi to be the main tool to ingest data 
> into these new Technologies. So we don't have to deal with another tool to do 
> so.*_
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad





[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino-JDBC-Driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added in the processor to 
be enabled/disabled.
enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the process to be widely used with bigger range of databases.

_*Improving this processor will allow Nifi to be the main tool to ingest data 
into these new Technologies. So we don't have to deal with another tool to do 
so.*_

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino-JDBC-Driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added in the processor to 
be enabled/disabled.
enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the process to be widely used with bigger range of databases.

_*Improving this process will allow Nifi to be the main tool to ingest data 
into these new Technologies. So we don't have to deal with another way to 
ingest data.*_

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> _*Improving this processor will allow Nifi to be the main tool to ingest data 
> into these new Technologies. So we don't have to deal with another tool to do 
> so.*_
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad





[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino-JDBC-Driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added in the processor to 
be enabled/disabled.
enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the process to be widely used with bigger range of databases.

_*Improving this process will allow Nifi to be the main tool to ingest data 
into these new Technologies. So we don't have to deal with another way to 
ingest data.*_

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino-JDBC-Driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added in the processor to 
be enabled/disabled.
enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the process to be widely used with bigger range of databases.

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> _*Improving this process will allow Nifi to be the main tool to ingest data 
> into these new Technologies. So we don't have to deal with another way to 
> ingest data.*_
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad





[jira] [Commented] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712091#comment-17712091
 ] 

Abdelrahim Ahmad commented on NIFI-11449:
-

Improving this process will allow Nifi to be the main tool to ingest data into 
these new Technologies.
So we don't have to deal with another way to ingest data.

> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad





[jira] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


[ https://issues.apache.org/jira/browse/NIFI-11449 ]


Abdelrahim Ahmad deleted comment on NIFI-11449:
-

was (Author: abdelrahimk):
Improving this process will allow Nifi to be the main tool to ingest data into 
these new Technologies.
So we don't have to deal with another way to ingest data.

> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added in the processor 
> to be enabled/disabled.
> enabling auto-commit in the Nifi PutDatabaseRecord processor is important for 
> Deltalake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the process to be widely used with bigger range of databases.
> P.S.: using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad





[jira] [Created] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)
Abdelrahim Ahmad created NIFI-11449:
---

 Summary: add autocommit property to control commit in 
PutDatabaseRecord processor
 Key: NIFI-11449
 URL: https://issues.apache.org/jira/browse/NIFI-11449
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.21.0
 Environment: Any Nifi Deployment
Reporter: Abdelrahim Ahmad


The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, an autocommit property needs to be added to the processor so 
that the feature can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be widely used with a bigger range of databases.

P.S.: using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad
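
The request boils down to exposing JDBC auto-commit as a processor property. A 
tiny, hypothetical sketch of how such a property value might map onto 
`Connection.setAutoCommit` follows; the property semantics and default shown 
here are assumptions for illustration, not the actual PutDatabaseRecord API.

```java
// Hypothetical property-to-autocommit mapping; the real processor code differs.
public class AutoCommitPropertySketch {
    // Assumed default: true when the property is unset, matching what
    // Trino's Iceberg catalog requires for writes.
    static boolean resolveAutoCommit(String propertyValue) {
        return propertyValue == null || Boolean.parseBoolean(propertyValue);
    }

    public static void main(String[] args) {
        // In the processor, the resolved value would be applied to the JDBC
        // connection before executing per-record INSERTs, e.g.:
        //   connection.setAutoCommit(resolveAutoCommit(propertyValue));
        // instead of unconditionally disabling auto-commit.
        System.out.println(resolveAutoCommit(null));    // prints true
        System.out.println(resolveAutoCommit("false")); // prints false
    }
}
```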





[GitHub] [nifi] exceptionfactory commented on pull request #7170: NIFI-11440: Speed up Hive Metastore based unit tests

2023-04-13 Thread via GitHub


exceptionfactory commented on PR #7170:
URL: https://github.com/apache/nifi/pull/7170#issuecomment-1507581057

   > I would advise not to waste any time bothering with these tests on 
Windows. I'd just detect running on windows and ignore them.
   
   I concur; the tests are already disabled on Windows, so promoting the 
annotation from the method level to the class level seems like the best path 
forward.





[GitHub] [nifi] joewitt commented on pull request #7170: NIFI-11440: Speed up Hive Metastore based unit tests

2023-04-13 Thread via GitHub


joewitt commented on PR #7170:
URL: https://github.com/apache/nifi/pull/7170#issuecomment-1507577479

   I would advise not to waste any time bothering with these tests on Windows. 
I'd just detect running on Windows and ignore them.





[GitHub] [nifi] abdelrahim-ahmad commented on pull request #5554: NIFI-8605 Adding a new property to enable/disable auto committing

2023-04-13 Thread via GitHub


abdelrahim-ahmad commented on PR #5554:
URL: https://github.com/apache/nifi/pull/5554#issuecomment-1507537242

   Hi All,
   I would appreciate your kind help if anyone knows how to enable autocommit 
in the PutDatabaseRecord processor in NiFi. I have tried many ways, but it is 
still not working. I know that PutSQL would work, but I cannot use it, as it is 
not optimized for inserting small files into an Iceberg table in Trino.
   
   ```
   Caused by: io.trino.spi.TrinoException: Catalog only supports writes using autocommit: iceberg
   ```
   
   Thanks and best regards
   Abdelrahim Ahmad





[GitHub] [nifi] Lehel44 commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-04-13 Thread via GitHub


Lehel44 commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1165964062


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -560,6 +566,27 @@ private SalesforceSchemaHolder 
getConvertedSalesforceSchema(String sObject, Stri
 }
 }
 
+private void handleError(ProcessSession session, FlowFile originalFlowFile, AtomicBoolean isOriginalTransferred, List<FlowFile> outgoingFlowFiles,
+ Exception e, String errorMessage) {
+if (originalFlowFile != null) {
+session.transfer(originalFlowFile, REL_FAILURE);
+isOriginalTransferred.set(true);
+}
+getLogger().error(errorMessage, e);
+session.remove(outgoingFlowFiles);
+outgoingFlowFiles.clear();
+}
+
+private StateMap getState(ProcessContext context) {
+StateMap state;
+try {
+state = context.getStateManager().getState(Scope.CLUSTER);

Review Comment:
   I agree but there's a different JIRA for that.






[GitHub] [nifi] Lehel44 commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-04-13 Thread via GitHub


Lehel44 commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1165955781


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -102,11 +107,11 @@
 @CapabilityDescription("Retrieves records from a Salesforce sObject. Users can 
add arbitrary filter conditions by setting the 'Custom WHERE Condition' 
property."
 + " The processor can also run a custom query, although record 
processing is not supported in that case."
 + " Supports incremental retrieval: users can define a field in the 
'Age Field' property that will be used to determine when the record was 
created."
-+ " When this property is set the processor will retrieve new records. 
It's also possible to define an initial cutoff value for the age, filtering out 
all older records"
++ " When this property is set the processor will retrieve new records. 
Incremental loading and record-based processing are only supported in 
property-based queries."

Review Comment:
   This is mentioned in the processor description here "Incremental loading and 
record-based processing are only supported in property-based queries"






[jira] [Commented] (NIFI-11439) Add Storage API URL to GCS Processors

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712040#comment-17712040
 ] 

ASF subversion and git services commented on NIFI-11439:


Commit cd685671c8981114d5215513991df170e556062b in nifi's branch 
refs/heads/main from Paul Grey
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=cd685671c8 ]

NIFI-11439 Corrected Checkstyle violation on GCS Property

Signed-off-by: David Handermann 


> Add Storage API URL to GCS Processors
> -
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
> Fix For: 2.0.0, 1.22.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[jira] [Commented] (NIFI-11439) Add Storage API URL to GCS Processors

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712041#comment-17712041
 ] 

ASF subversion and git services commented on NIFI-11439:


Commit 98a0926680ea05a92b37fc9024d57fafdb6f65bb in nifi's branch 
refs/heads/support/nifi-1.x from Paul Grey
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=98a0926680 ]

NIFI-11439 Corrected Checkstyle violation on GCS Property

Signed-off-by: David Handermann 
(cherry picked from commit cd685671c8981114d5215513991df170e556062b)


> Add Storage API URL to GCS Processors
> -
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
> Fix For: 2.0.0, 1.22.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[jira] [Updated] (NIFI-11439) GCS processors: add ability to configure custom GCS client endpoint

2023-04-13 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11439:

Fix Version/s: 2.0.0
   1.22.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> GCS processors: add ability to configure custom GCS client endpoint
> ---
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
> Fix For: 2.0.0, 1.22.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[jira] [Updated] (NIFI-11439) Add Storage API URL to GCS Processors

2023-04-13 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11439:

Summary: Add Storage API URL to GCS Processors  (was: GCS processors: add 
ability to configure custom GCS client endpoint)

> Add Storage API URL to GCS Processors
> -
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
> Fix For: 2.0.0, 1.22.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[jira] [Commented] (NIFI-11439) GCS processors: add ability to configure custom GCS client endpoint

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712031#comment-17712031
 ] 

ASF subversion and git services commented on NIFI-11439:


Commit 7afe09ad5a72dbb567ac1752d9eb1344d644c9b1 in nifi's branch 
refs/heads/support/nifi-1.x from Paul Grey
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7afe09ad5a ]

NIFI-11439 Added Storage API URL property to GCS Processors

- Included Host Header override with Storage API URL based on Google Private 
Service Connect documentation

This closes #7172

Signed-off-by: David Handermann 
(cherry picked from commit ee24df2830b1d880428f487081c0ea92b0ca0ca1)


> GCS processors: add ability to configure custom GCS client endpoint
> ---
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[jira] [Commented] (NIFI-11439) GCS processors: add ability to configure custom GCS client endpoint

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712028#comment-17712028
 ] 

ASF subversion and git services commented on NIFI-11439:


Commit ee24df2830b1d880428f487081c0ea92b0ca0ca1 in nifi's branch 
refs/heads/main from Paul Grey
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ee24df2830 ]

NIFI-11439 Added Storage API URL property to GCS Processors

- Included Host Header override with Storage API URL based on Google Private 
Service Connect documentation

This closes #7172

Signed-off-by: David Handermann 


> GCS processors: add ability to configure custom GCS client endpoint
> ---
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java





[GitHub] [nifi] exceptionfactory closed pull request #7172: NIFI-11439 - GCS processors; expose config of custom client endpoint

2023-04-13 Thread via GitHub


exceptionfactory closed pull request #7172: NIFI-11439 - GCS processors; expose 
config of custom client endpoint
URL: https://github.com/apache/nifi/pull/7172





[GitHub] [nifi] Lehel44 commented on pull request #7082: NIFI-11067: Clear property history when set sensitive

2023-04-13 Thread via GitHub


Lehel44 commented on PR #7082:
URL: https://github.com/apache/nifi/pull/7082#issuecomment-1507456370

   Thanks for the review @exceptionfactory. I added the change to 
ReportingTasks and ControllerServices and did some runtime testing.





[GitHub] [nifi] markap14 commented on a diff in pull request #7003: NIFI-11241: Initial implementation of Python-based Processor API with…

2023-04-13 Thread via GitHub


markap14 commented on code in PR #7003:
URL: https://github.com/apache/nifi/pull/7003#discussion_r1165569083


##
nifi-nar-bundles/nifi-py4j-bundle/nifi-python-framework/src/main/python/framework/ExtensionManager.py:
##
@@ -0,0 +1,531 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import importlib
+import sys
+import importlib.util  # Note requires Python 3.4+
+import inspect
+import logging
+import subprocess
+import ast
+import pkgutil
+from pathlib import Path
+
+logger = logging.getLogger("org.apache.nifi.py4j.ExtensionManager")
+
+# A simple wrapper class to encompass a processor type and its version
+class ExtensionId:
+    def __init__(self, classname=None, version=None):
+        self.classname = classname
+        self.version = version
+
+    def __hash__(self):
+        return hash((self.classname, self.version))
+
+    def __eq__(self, other):
+        return (self.classname, self.version) == (other.classname, other.version)
+
+
+class ExtensionDetails:
+    class Java:
+        implements = ['org.apache.nifi.python.PythonProcessorDetails']
+
+    def __init__(self, gateway, type, version='Unknown', dependencies=None, source_location=None, package_name=None, description=None, tags=None):
+        self.gateway = gateway
+        if dependencies is None:
+            dependencies = []
+        if tags is None:
+            tags = []
+
+        self.type = type
+        self.version = version
+        self.dependencies = dependencies
+        self.source_location = source_location
+        self.package_name = package_name
+        self.description = description
+        self.tags = tags
+
+    def getProcessorType(self):
+        return self.type
+
+    def getProcessorVersion(self):
+        return self.version
+
+    def getSourceLocation(self):
+        return self.source_location
+
+    def getPyPiPackageName(self):
+        return self.package_name
+
+    def getDependencies(self):
+        list = self.gateway.jvm.java.util.ArrayList()
+        for dep in self.dependencies:
+            list.add(dep)
+
+        return list
+
+    def getCapabilityDescription(self):
+        return self.description
+
+    def getTags(self):
+        list = self.gateway.jvm.java.util.ArrayList()
+        for tag in self.tags:
+            list.add(tag)
+
+        return list
+
+
+
+
+class ExtensionManager:
+    """
+    ExtensionManager is responsible for discovery of extension types and the lifecycle management of those extension types.
+    Discovery of extension types includes finding what extension types are available
+    (e.g., which Processor types exist on the system), as well as information about those extension types, such as
+    the extension's documentation (tags and capability description).
+
+    Lifecycle management includes determining the third-party dependencies that an extension has and ensuring that those
+    third-party dependencies have been imported.
+    """
+
+    processorInterfaces = ['org.apache.nifi.python.processor.FlowFileTransform', 'org.apache.nifi.python.processor.RecordTransform']
+    processor_details = {}
+    processor_class_by_name = {}
+    module_files_by_extension_type = {}
+    dependency_directories = {}
+
+    def __init__(self, gateway):
+        self.gateway = gateway
+
+    def getProcessorTypes(self):
+        """
+        :return: a list of Processor types that have been discovered by the #discoverExtensions method
+        """
+        return self.processor_details.values()
+
+    def getProcessorClass(self, type, version, work_dir):
+        """
+        Returns the Python class that can be used to instantiate a processor of the given type.
+        Additionally, it ensures that the required third-party dependencies are on the system path in order to ensure that
+        the necessary libraries are available to the Processor so that it can be instantiated and used.
+
+        :param type: the type of Processor
+        :param version: the version of the Processor
+        :param work_dir: the working directory for extensions
+        :return: the Python class that can be used to instantiate a Processor of the given type and version
+
+        :raises ValueError: if there 
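The `ExtensionId` wrapper earlier in this diff defines `__hash__` and `__eq__` so that (classname, version) pairs can key dictionaries such as `processor_details`. A minimal standalone sketch of why that matters (the cache name below is illustrative, not from the PR):

```python
# Re-creation of the ExtensionId wrapper from the diff above, showing why
# __hash__ and __eq__ are needed: two instances describing the same
# (classname, version) pair must resolve to the same cache entry.
class ExtensionId:
    def __init__(self, classname=None, version=None):
        self.classname = classname
        self.version = version

    def __hash__(self):
        return hash((self.classname, self.version))

    def __eq__(self, other):
        return (self.classname, self.version) == (other.classname, other.version)


processor_cache = {}  # hypothetical cache keyed by ExtensionId
processor_cache[ExtensionId("MyProcessor", "0.0.1")] = "details"

# A freshly constructed, equal key finds the existing entry
assert processor_cache[ExtensionId("MyProcessor", "0.0.1")] == "details"
```

Without both methods, each `ExtensionId` instance would hash by identity and every lookup with a new instance would miss the cache.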

[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7124: NIFI-11385 Expose JMX metrics from NiFi JVM

2023-04-13 Thread via GitHub


exceptionfactory commented on code in PR #7124:
URL: https://github.com/apache/nifi/pull/7124#discussion_r1165799800


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/JmxMetricsResource.java:
##
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.api;
+
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiOperation;
+import io.swagger.annotations.ApiParam;
+import io.swagger.annotations.ApiResponse;
+import io.swagger.annotations.ApiResponses;
+import io.swagger.annotations.Authorization;
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.RequestAction;
+import org.apache.nifi.authorization.resource.Authorizable;
+import org.apache.nifi.authorization.user.NiFiUserUtils;
+import org.apache.nifi.web.NiFiServiceFacade;
+import org.apache.nifi.web.api.metrics.jmx.JmxMetricsCollector;
+import org.apache.nifi.web.api.metrics.jmx.JmxMetricsFilter;
+import org.apache.nifi.web.api.metrics.jmx.JmxMetricsResult;
+import org.apache.nifi.web.api.metrics.jmx.JmxMetricsResultConverter;
+import org.apache.nifi.web.api.metrics.jmx.JmxMetricsWriter;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.StreamingOutput;
+import java.util.Collection;
+
+/**
+ * RESTful endpoint for JMX metrics.
+ */
+@Path("/jmx-metrics")
+@Api(
+        value = "/jmx-metrics",
+        description = "Endpoint for accessing the JMX metrics."
+)
+public class JmxMetricsResource extends ApplicationResource {
+    private static final String JMX_METRICS_NIFI_PROPERTY = "nifi.jmx.metrics.blacklisting.filter";
+    private NiFiServiceFacade serviceFacade;
+    private Authorizer authorizer;
+
+    /**
+     * Retrieves the JMX metrics.
+     *
+     * @return A jmxMetricsResult list.
+     */
+    @GET
+    @Consumes(MediaType.WILDCARD)
+    @Produces(MediaType.WILDCARD)

Review Comment:
   This should be changed from `WILDCARD` to `APPLICATION_JSON`.
   ```suggestion
   @Produces(MediaType.APPLICATION_JSON)
   ```



##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/metrics/jmx/JmxMetricsWriter.java:
##
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.api.metrics.jmx;
+
+import com.fasterxml.jackson.databind.MapperFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.json.JsonMapper;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Collection;
+
+public class JmxMetricsWriter {
+    private final static ObjectMapper MAPPER = JsonMapper.builder()
+            .disable(MapperFeature.CAN_OVERRIDE_ACCESS_MODIFIERS)
+            .build();
+    private final JmxMetricsFilter metricsFilter;
+
+    public JmxMetricsWriter(final JmxMetricsFilter metricsFilter) {
+        this.metricsFilter = metricsFilter;
+    }
+
+    public void write(final OutputStream outputStream, final Collection<JmxMetricsResult> results) {
+        final Collection<JmxMetricsResult> filteredResults = metricsFilter.filter(results);
+
+        try {
+            MAPPER.writerWithDefaultPrettyPrinter().writeValue(outputStream, 
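The writer above filters the collected metric results and then pretty-prints the survivors to the response stream. A rough Python analogue of that filter-then-serialize flow (the regex-based name filter below is an assumption modeled on the `nifi.jmx.metrics.blacklisting.filter` property, not the actual `JmxMetricsFilter` implementation):

```python
import io
import json
import re

def write_metrics(stream, results, blocklist_pattern=None):
    # Drop metrics whose bean name matches the blocklist pattern, then
    # pretty-print the remainder as JSON -- mirroring the filter-then-write
    # sequence in JmxMetricsWriter.write()
    if blocklist_pattern:
        matcher = re.compile(blocklist_pattern)
        results = [r for r in results if not matcher.search(r["beanName"])]
    stream.write(json.dumps(results, indent=2))
    return results

out = io.StringIO()
kept = write_metrics(
    out,
    [{"beanName": "java.lang:type=Memory", "value": 42},
     {"beanName": "internal.secret:type=Token", "value": 1}],
    blocklist_pattern=r"internal\.secret",
)
# Only the non-blocklisted metric remains in the serialized output
assert len(kept) == 1 and kept[0]["beanName"] == "java.lang:type=Memory"
```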

[jira] [Resolved] (NIFI-11233) adjust Github runner configuration to use HTTP connection pooling

2023-04-13 Thread Paul Grey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Grey resolved NIFI-11233.
--
Resolution: Duplicate

> adjust Github runner configuration to use HTTP connection pooling
> -
>
> Key: NIFI-11233
> URL: https://issues.apache.org/jira/browse/NIFI-11233
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
>
> Multiple recent Github automation build failures indicate a problem 
> downloading a POM for a dependency.  Investigate adjustments to Github CI 
> configuration to alleviate this issue.





[jira] [Updated] (NIFI-11437) Improve EncryptContentPGP Content Type Detection

2023-04-13 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11437:

Fix Version/s: 2.0.0
   1.22.0
   (was: 1.latest)
   (was: 2.latest)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Improve EncryptContentPGP Content Type Detection
> 
>
> Key: NIFI-11437
> URL: https://issues.apache.org/jira/browse/NIFI-11437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions, Security
>Affects Versions: 1.15.0, 1.21.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.22.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{EncryptContentPGP}} Processor reads the initial bytes of incoming files 
> to determine whether a file is an OpenPGP message. This initial read is 
> necessary to support flows with {{SignContentPGP}} creating signed OpenPGP 
> messages prior to encryption.
> The current implementation uses an InputStreamCallback with 
> ProcessSession.read() to run content type detection. Instead of a separate read 
> callback, the StreamCallback for ProcessSession.write() can be modified to 
> use a buffer with {{PushbackInputStream}}. This will avoid reading the 
> initial bytes multiple times.
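The single-read approach described above can be sketched as: buffer the leading bytes, inspect them for an OpenPGP packet marker, then re-prepend the buffer so the write callback sees the full content exactly once. The marker check and helper name below are illustrative only; the real detection logic lives in EncryptContentPGP.

```python
import io

def detect_then_process(stream, header_size=8):
    # Read the leading bytes once for content-type detection ...
    head = stream.read(header_size)
    # ... hypothetical check: OpenPGP packet headers set the high bit of the first octet
    looks_like_pgp = bool(head) and (head[0] & 0x80) != 0
    # ... then push the buffered bytes back in front of the remaining stream,
    # so downstream processing reads the content only once
    combined = io.BytesIO(head + stream.read())
    return looks_like_pgp, combined.read()

is_pgp, content = detect_then_process(io.BytesIO(b"\xc3\x04rest-of-message"))
assert is_pgp and content == b"\xc3\x04rest-of-message"
```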





[jira] [Commented] (NIFI-11437) Improve EncryptContentPGP Content Type Detection

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711973#comment-17711973
 ] 

ASF subversion and git services commented on NIFI-11437:


Commit 61b5c1a7f517abc25b0297d6b13952560777bb59 in nifi's branch 
refs/heads/support/nifi-1.x from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=61b5c1a7f5 ]

NIFI-11437 Switched to StreamUtils.fillBuffer() for buffer, Improved 
EncryptContentPGP Content Type Detection

This closes #7166
Signed-off-by: Paul Grey 
(cherry picked from commit bc5f00a6671c0f3fd9d6c8599d196f414ac47fd9)


> Improve EncryptContentPGP Content Type Detection
> 
>
> Key: NIFI-11437
> URL: https://issues.apache.org/jira/browse/NIFI-11437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions, Security
>Affects Versions: 1.15.0, 1.21.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{EncryptContentPGP}} Processor reads the initial bytes of incoming files 
> to determine whether a file is an OpenPGP message. This initial read is 
> necessary to support flows with {{SignContentPGP}} creating signed OpenPGP 
> messages prior to encryption.
> The current implementation uses an InputStreamCallback with 
> ProcessSession.read() to run content type detection. Instead of a separate read 
> callback, the StreamCallback for ProcessSession.write() can be modified to 
> use a buffer with {{PushbackInputStream}}. This will avoid reading the 
> initial bytes multiple times.





[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request, #1557: MINIFICPP-2099 Only run tests requiring test processors if they are available

2023-04-13 Thread via GitHub


lordgamez opened a new pull request, #1557:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1557

   https://issues.apache.org/jira/browse/MINIFICPP-2099
   
   -
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   





[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7172: NIFI-11439 - GCS processors; expose config of custom client endpoint

2023-04-13 Thread via GitHub


exceptionfactory commented on code in PR #7172:
URL: https://github.com/apache/nifi/pull/7172#discussion_r1165758118


##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/storage/DeleteGCSObject.java:
##
@@ -81,6 +81,7 @@ public class DeleteGCSObject extends AbstractGCSProcessor {
 public List getSupportedPropertyDescriptors() {
 final List descriptors = new ArrayList<>();
 descriptors.addAll(super.getSupportedPropertyDescriptors());
+descriptors.add(STORAGE_API_HOST);

Review Comment:
   Instead of adding the Property Descriptor in individual components, it looks 
like it could be added after the Proxy Configuration Service in the 
AbstractGCSProcessor.



##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/storage/AbstractGCSProcessor.java:
##
@@ -141,6 +142,10 @@ protected StorageOptions getServiceOptions(ProcessContext 
context, GoogleCredent
 storageOptionsBuilder.setProjectId(projectId);
 }
 
+if (storageApiHost != null && !storageApiHost.isEmpty()) {
+storageOptionsBuilder.setHost(storageApiHost);

Review Comment:
   According to the [Private Service Connect for Google 
APIs](https://codelabs.developers.google.com/cloudnet-psc#12) documentation, 
the HTTP `Host` header should also be overridden to send `www.googleapis.com`.



##
nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/AbstractGCPProcessor.java:
##
@@ -106,6 +106,16 @@
 .sensitive(true)
 .build();
 
+// https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java
+public static final PropertyDescriptor STORAGE_API_HOST = new PropertyDescriptor
+        .Builder().name("storage-api-host")
+        .displayName("Storage API Host")
+        .description("Cloud Storage client libraries manage request endpoints automatically. Optionally, you can set the request endpoint manually.")

Review Comment:
   Recommend adjusting the description to include some additional details on 
the default setting and expected custom behavior.
   ```suggestion
   .description("Overrides the default storage URL. Configuring an 
alternative Storage API Host URL also overrides the HTTP Host header on 
requests as described in the Google documentation for Private Service 
Connections.")
   ```






[jira] [Created] (MINIFICPP-2099) Only run docker tests requiring test processors if they are available

2023-04-13 Thread Jira
Gábor Gyimesi created MINIFICPP-2099:


 Summary: Only run docker tests requiring test processors if they 
are available
 Key: MINIFICPP-2099
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2099
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Gábor Gyimesi
Assignee: Gábor Gyimesi


In the core_functionality.feature file there is a test scenario "Processors are 
destructed when agent is stopped" that requires the test processors extension to be 
built. This extension is not always available in the docker image. We should check 
whether the cmake flag enabling the test processors is set, similarly to other 
extensions, and only run the test case if it is set to ON.
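The gating described here amounts to a predicate over the image's enabled cmake options. A minimal sketch, assuming a flag name like `ENABLE_TEST_PROCESSORS` (the exact flag and how the test framework exposes it are assumptions):

```python
def should_run_test_processor_scenarios(cmake_options):
    # Run the "Processors are destructed when agent is stopped" scenario
    # only when the test-processors extension was compiled into the image
    return cmake_options.get("ENABLE_TEST_PROCESSORS", "OFF") == "ON"

assert should_run_test_processor_scenarios({"ENABLE_TEST_PROCESSORS": "ON"})
assert not should_run_test_processor_scenarios({})
```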





[jira] [Commented] (NIFI-11437) Improve EncryptContentPGP Content Type Detection

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711969#comment-17711969
 ] 

ASF subversion and git services commented on NIFI-11437:


Commit bc5f00a6671c0f3fd9d6c8599d196f414ac47fd9 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=bc5f00a667 ]

NIFI-11437 Switched to StreamUtils.fillBuffer() for buffer, Improved 
EncryptContentPGP Content Type Detection

This closes #7166
Signed-off-by: Paul Grey 


> Improve EncryptContentPGP Content Type Detection
> 
>
> Key: NIFI-11437
> URL: https://issues.apache.org/jira/browse/NIFI-11437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions, Security
>Affects Versions: 1.15.0, 1.21.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{EncryptContentPGP}} Processor reads the initial bytes of incoming files 
> to determine whether a file is an OpenPGP message. This initial read is 
> necessary to support flows with {{SignContentPGP}} creating signed OpenPGP 
> messages prior to encryption.
> The current implementation uses an InputStreamCallback with 
> ProcessSession.read() to run content type detection. Instead of a separate read 
> callback, the StreamCallback for ProcessSession.write() can be modified to 
> use a buffer with {{PushbackInputStream}}. This will avoid reading the 
> initial bytes multiple times.





[GitHub] [nifi] greyp9 closed pull request #7166: NIFI-11437 Improve EncryptContentPGP Content Type Detection

2023-04-13 Thread via GitHub


greyp9 closed pull request #7166: NIFI-11437 Improve EncryptContentPGP Content 
Type Detection
URL: https://github.com/apache/nifi/pull/7166





[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711965#comment-17711965
 ] 

macdoor615 commented on NIFI-11409:
---

[~exceptionfactory] Thank you for your suggestion. Resolving the hostname to 
different IPs on the internal and external networks may be the only feasible 
solution at present.

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, macdoor network topology.png, 
> 截屏2023-04-08 12.40.30.png, 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 
> 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/; "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:225)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:190)
>         at 
> org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:354)
>         at 
> org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267)
>         at 
> 

[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1165733292


##
libminifi/include/core/PropertyValidation.h:
##
@@ -87,22 +83,23 @@ class ValidationResult {
 
 class PropertyValidator {
  public:
-  PropertyValidator(std::string name) // NOLINT
-  : name_(std::move(name)) {
+  explicit constexpr PropertyValidator(std::string_view name)

Review Comment:
   Yes, there is a danger of a dangling pointer because we are storing a 
`string_view`.  But I don't think disabling copy would prevent that.
   
   We just need to be careful and remember that `PropertyValidator{"foo"}` is 
OK, but `PropertyValidator{std::string{"foo"}}` is not.
   
   It would be nice if clang-tidy or some other tool could warn on this, but I 
don't think they can, right now -- maybe later.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711949#comment-17711949
 ] 

David Handermann commented on NIFI-11409:
-

Thanks for the diagram [~macdoor615], that is very helpful, and makes sense 
from the previous background shown in the OIDC Discovery configuration. In the 
JSON you previously shared, there was a mix of the hostname and IP address in 
the different endpoints.

It should be possible to make something work if you have an internal DNS 
resolver behind the firewall, or custom /etc/hosts entries. A solution using 
different DNS servers would be the ideal approach.


[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711948#comment-17711948
 ] 

macdoor615 commented on NIFI-11409:
---

[~exceptionfactory] Unfortunately, my problem has not been solved yet. Here is 
my network topology:

!macdoor network topology.png|width=416,height=352!

The NiFi server is behind a firewall and cannot access the Internet from 
inside, while the WebUI is outside the firewall and cannot access intranet 
resources directly, only through nginx.

Take authorization_endpoint and revocation_endpoint as an example: the WebUI 
gets the OpenID Connect Discovery configuration from the NiFi server (steps 1, 
2 and 3 in the figure), so their URLs share the same hostname.

If I set the hostname to the external URL, starting with 
https://36.133.55.100:8943/, the WebUI can successfully call 
authorization_endpoint (step 4 in the figure), but the NiFi server will time 
out when calling revocation_endpoint (step 5 in the figure). In this scenario 
I can log in but not log out.
{noformat}
"authorization_endpoint": 
"https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/auth",
"revocation_endpoint": 
"https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke"
{noformat}
Conversely, if I set the hostname to the internal URL, starting with 
https://hb3-prod-lb-000:8943/, the WebUI will time out when calling 
authorization_endpoint. In this scenario I cannot log in.
{noformat}
"authorization_endpoint": 
"https://hb3-prod-lb-000:8943/realms/zznode/protocol/openid-connect/auth",
"revocation_endpoint": 
"https://hb3-prod-lb-000:8943/realms/zznode/protocol/openid-connect/revoke"
{noformat}
Maybe I can add a host entry to the MacBook's /etc/hosts file:
{code:java}
36.133.55 hb3-prod-lb-000{code}
But I still hope to find a more elegant way.


[jira] [Updated] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

macdoor615 updated NIFI-11409:
--
Attachment: macdoor network topology.png


[GitHub] [nifi] exceptionfactory commented on pull request #7171: NIFI-11441 Removed OpenCypher client service because the core depende…

2023-04-13 Thread via GitHub


exceptionfactory commented on PR #7171:
URL: https://github.com/apache/nifi/pull/7171#issuecomment-1507145124

   Thanks for getting this started @MikeThomsen. It looks like there are still 
tests that need to be deleted as part of this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

macdoor615 updated NIFI-11409:
--
Attachment: (was: macdoor network topology.png)


[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1552: MINIFICPP-2089 prefix EventData in flat JSON output so it doesnt need t…

2023-04-13 Thread via GitHub


martinzink commented on code in PR #1552:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1552#discussion_r1165654617


##
extensions/windows-event-log/wel/JSONUtils.cpp:
##
@@ -135,28 +157,37 @@ rapidjson::Document toJSONImpl(const pugi::xml_node& 
root, bool flatten) {
 
   {
 auto eventData_xml = event_xml.child("EventData");
-// create EventData subarray even if flatten requested
-doc.AddMember("EventData", rapidjson::kArrayType, doc.GetAllocator());
-for (const auto& data : eventData_xml.children()) {
-  auto name_attr = data.attribute("Name");
-  rapidjson::Value item(rapidjson::kObjectType);
-  item.AddMember("Name", rapidjson::StringRef(name_attr.value()), 
doc.GetAllocator());
-  item.AddMember("Content", rapidjson::StringRef(data.text().get()), 
doc.GetAllocator());
-  item.AddMember("Type", rapidjson::StringRef(data.name()), 
doc.GetAllocator());
-  // we need to query EventData because a reference to it wouldn't be 
stable, as we
-  // possibly add members to its parent which could result in reallocation
-  doc["EventData"].PushBack(item, doc.GetAllocator());
-  // check collision
-  if (flatten && !name_attr.empty() && !doc.HasMember(name_attr.value())) {
-doc.AddMember(rapidjson::StringRef(name_attr.value()), 
rapidjson::StringRef(data.text().get()), doc.GetAllocator());
+if (flatten) {
+  for (const auto& event_data_child : eventData_xml.children()) {
+std::string key = "EventData";
+if (auto name_attr = event_data_child.attribute("Name").value(); 
strlen(name_attr)) {
+  key = utils::StringUtils::join_pack(key, ".", name_attr);
+}
+
+if (auto type = event_data_child.name(); strlen(type) > 0) {
+  key = utils::StringUtils::join_pack(key, ".", type);
+}
+
+doc.AddMember(rapidjson::Value(createUniqueKey(key, doc), 
doc.GetAllocator()).Move(), 
rapidjson::StringRef(event_data_child.text().get()), doc.GetAllocator());
+  }
+} else {
+  auto& event_data = doc.AddMember("EventData", rapidjson::kArrayType, 
doc.GetAllocator());
+  for (const auto& event_data_child : eventData_xml.children()) {
+auto name_attr = event_data_child.attribute("Name");
+rapidjson::Value item(rapidjson::kObjectType);
+item.AddMember("Name", rapidjson::StringRef(name_attr.value()), 
doc.GetAllocator());
+item.AddMember("Content", 
rapidjson::StringRef(event_data_child.text().get()), doc.GetAllocator());
+item.AddMember("Type", rapidjson::StringRef(event_data_child.name()), 
doc.GetAllocator());
+doc["EventData"].PushBack(item, doc.GetAllocator());

Review Comment:
   This would lose the type information (which might be useful). We could, of 
course, do something similar to the flat output, e.g. something like this:
   ```json
   "EventData": {
   "Foobar.Text": "Lorem ipsum"
   }
   ```
   But since this is only a bugfix for the flat output type, I wouldn't change 
the simple format in this PR; maybe we could file a Jira ticket and explore 
the possibilities there?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


adamdebreceni commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1165631311


##
libminifi/include/core/PropertyValidation.h:
##
@@ -87,22 +83,23 @@ class ValidationResult {
 
 class PropertyValidator {
  public:
-  PropertyValidator(std::string name) // NOLINT
-  : name_(std::move(name)) {
+  explicit constexpr PropertyValidator(std::string_view name)

Review Comment:
   or rather just the copy, since the move ctor is not generated with a user-declared destructor
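   As a side note, the language rule referenced here can be demonstrated in isolation: with a user-declared destructor the implicit move constructor is not declared, so `std::move` falls back to the (deprecated but still generated) copy constructor. A minimal sketch with made-up types, unrelated to the actual MiNiFi classes:

   ```cpp
   #include <cassert>
   #include <utility>

   // Illustrative probe: its copy ctor leaves the source untouched, while
   // its move ctor marks the source as moved-from.
   struct Probe {
     bool moved_from = false;
     Probe() = default;
     Probe(const Probe&) {}                                      // copy: source unchanged
     Probe(Probe&& other) noexcept { other.moved_from = true; }  // move: source marked
   };

   // User-declared destructor: the implicit move ctor is NOT declared.
   struct WithDtor {
     Probe p;
     ~WithDtor() {}
   };

   // No user-declared special members: the implicit move ctor IS generated.
   struct Plain {
     Probe p;
   };

   bool moveFallsBackToCopy() {
     WithDtor a;
     WithDtor b = std::move(a);  // copy ctor selected despite std::move
     (void)b;
     return !a.p.moved_from;     // source was copied, not moved
   }

   bool moveIsRealMove() {
     Plain c;
     Plain d = std::move(c);     // implicit move ctor selected
     (void)d;
     return c.p.moved_from;      // source was actually moved from
   }

   int main() {
     assert(moveFallsBackToCopy());
     assert(moveIsRealMove());
     return 0;
   }
   ```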






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


adamdebreceni commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1165627512


##
libminifi/include/core/PropertyValidation.h:
##
@@ -87,22 +83,23 @@ class ValidationResult {
 
 class PropertyValidator {
  public:
-  PropertyValidator(std::string name) // NOLINT
-  : name_(std::move(name)) {
+  explicit constexpr PropertyValidator(std::string_view name)

Review Comment:
   should we disable copy/move ctor, so we don't accidentally store a pointer 
to a temporary?
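   For context, a standalone sketch of the dangling risk being discussed — the class and constant names are illustrative, not the actual MiNiFi types. A validator that stores only a `std::string_view` must be constructed from storage that outlives it, and deleting the copy operations (which also suppresses the implicit moves) makes it harder to end up with a stray object pointing into freed storage:

   ```cpp
   #include <cassert>
   #include <string_view>
   #include <type_traits>

   // Illustrative stand-in for the validator discussed above. It stores only
   // a std::string_view, so the referenced characters must outlive the object.
   class NameValidator {
    public:
     explicit constexpr NameValidator(std::string_view name) : name_(name) {}
     NameValidator(const NameValidator&) = delete;
     NameValidator& operator=(const NameValidator&) = delete;
     constexpr std::string_view getName() const { return name_; }

    private:
     std::string_view name_;
   };

   // Deleting the copy ctor also suppresses the implicit move operations.
   static_assert(!std::is_copy_constructible<NameValidator>::value, "no copies");
   static_assert(!std::is_move_constructible<NameValidator>::value, "no moves");

   // Safe: string literals have static storage duration, so the view never dangles.
   constexpr NameValidator kIntegerValidator{"INTEGER_VALIDATOR"};

   int main() {
     assert(kIntegerValidator.getName() == "INTEGER_VALIDATOR");
     // Dangerous if copies were allowed: constructing from a temporary
     // std::string and keeping a copy of the validator would leave the
     // stored view pointing at destroyed storage.
     return 0;
   }
   ```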






[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


martinzink commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165621722


##
.github/workflows/ci.yml:
##
@@ -302,13 +306,132 @@ jobs:
   if [ -d ~/.ccache ]; then mv ~/.ccache .; fi
   mkdir build
   cd build
-  cmake -DUSE_SHARED_LIBS= -DSTRICT_GSL_CHECKS=AUDIT -DCI_BUILD=ON -DDISABLE_JEMALLOC=ON -DENABLE_AWS=ON -DENABLE_LIBRDKAFKA=ON -DENABLE_MQTT=ON -DENABLE_AZURE=ON -DENABLE_SQL=ON \
-  -DENABLE_SPLUNK=ON -DENABLE_GCP=ON -DENABLE_OPC=ON -DENABLE_PYTHON_SCRIPTING=ON -DENABLE_LUA_SCRIPTING=ON -DENABLE_KUBERNETES=ON -DENABLE_TEST_PROCESSORS=ON -DENABLE_PROMETHEUS=ON \
-  -DDOCKER_BUILD_ONLY=ON -DDOCKER_CCACHE_DUMP_LOCATION=$HOME/.ccache ..
+  cmake ${DOCKER_CMAKE_FLAGS} ..
   make docker
+  - name: Save docker image
+    run: cd build && docker save -o minifi_docker.tar apacheminificpp:$(grep CMAKE_PROJECT_VERSION:STATIC CMakeCache.txt | cut -d "=" -f2)
+  - name: Upload artifact
+    uses: actions/upload-artifact@v3
+    with:
+      name: minifi_docker
+      path: build/minifi_docker.tar
+  docker_tests_q1:
+    name: "Docker integration tests 1/4"
+    needs: docker_build
+    runs-on: ubuntu-20.04
+    timeout-minutes: 180
+    steps:
+      - id: checkout
+        uses: actions/checkout@v3
+      - id: run_cmake
+        name: Run CMake
+        run: |
+          mkdir build
+          cd build
+          cmake ${DOCKER_CMAKE_FLAGS} ..
+      - name: Download artifact
+        uses: actions/download-artifact@v3

Review Comment:
   yes, these simple download/upload artifact actions (provided by GitHub) are limited to the current workflow






[jira] [Updated] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

macdoor615 updated NIFI-11409:
--
Attachment: macdoor network topology.png

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, macdoor network topology.png, 
> 截屏2023-04-08 12.40.30.png, 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 
> 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/; "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:225)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:190)
>         at 
> org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:354)
>         at 
> org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267)
>         at 
> org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
>      

[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1551: MINIFICPP-2091 Add ARM64 support for docker system tests

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1551:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1551#discussion_r1165618648


##
docker/test/integration/resources/kafka_broker/conf/server.properties:
##
@@ -28,15 +28,15 @@ broker.id=0
 # listeners = listener_name://host_name:port
 #   EXAMPLE:
 # listeners = PLAINTEXT://your.host.name:9092
-#listeners=PLAINTEXT://:9092
+listeners=PLAINTEXT://kafka-broker:9092,SSL://kafka-broker:9093,SASL_PLAINTEXT://kafka-broker:9094,SASL_SSL://kafka-broker:9095,SSL_HOST://0.0.0.0:29093,PLAINTEXT_HOST://0.0.0.0:29092,SASL_PLAINTEXT_HOST://0.0.0.0:29094,SASL_SSL_HOST://0.0.0.0:29095
 
 # Hostname and port the broker will advertise to producers and consumers. If 
not set,
 # it uses the value for "listeners" if configured.  Otherwise, it will use the 
value
 # returned from java.net.InetAddress.getCanonicalHostName().
-#advertised.listeners=PLAINTEXT://your.host.name:9092
+advertised.listeners=PLAINTEXT://kafka-broker:9092,PLAINTEXT_HOST://localhost:29092,SSL://kafka-broker:9093,SSL_HOST://localhost:29093,SASL_PLAINTEXT://kafka-broker:9094,SASL_PLAINTEXT_HOST://localhost:29094,SASL_SSL://kafka-broker:9095,SASL_SSL_HOST://localhost:29095

Review Comment:
   The order doesn't matter, but it's better to have a consistent order for all properties; updated in 80b1ea36d601632083e4a8b16b5731e8495a668b






[jira] [Updated] (NIFI-11439) GCS processors: add ability to configure custom GCS client endpoint

2023-04-13 Thread Paul Grey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Grey updated NIFI-11439:
-
Status: Patch Available  (was: In Progress)

> GCS processors: add ability to configure custom GCS client endpoint
> ---
>
> Key: NIFI-11439
> URL: https://issues.apache.org/jira/browse/NIFI-11439
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AWS processors allow the override of the base endpoint for API calls to AWS 
> services [1].  GCS libraries provide analogous capabilities, as documented 
> here [2].  Expose a GCS processor property to enable GCS endpoint override 
> behavior.
> [1] 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java#L146-L154
> [2] 
> https://cloud.google.com/storage/docs/request-endpoints#storage-set-client-endpoint-java



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711919#comment-17711919
 ] 

David Handermann commented on NIFI-11409:
-

Thanks for the reply [~macdoor615]. Changing the NiFi OIDC integration to a user-agent-based application would open up other integration possibilities, as you mentioned. One major factor is that OIDC is just one of several options for NiFi, along with SAML, not to mention username-and-password options like LDAP or Kerberos. This might be worth exploring, but it would require significant effort and refactoring.

As far as your issue with token revocation, are you able to adjust the 
revocation endpoint URI to match the other endpoints with which NiFi is already 
able to communicate?

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, 截屏2023-04-08 12.40.30.png, 
> 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> 

[GitHub] [nifi] greyp9 opened a new pull request, #7172: NIFI-11439 - GCS processors; expose config of custom client endpoint

2023-04-13 Thread via GitHub


greyp9 opened a new pull request, #7172:
URL: https://github.com/apache/nifi/pull/7172

   # Summary
   
   [NIFI-11439](https://issues.apache.org/jira/browse/NIFI-11439)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [x] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 11
 - [x] JDK 17
   
   ### Licensing
   
   - [x] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [x] Documentation formatting appears as expected in rendered files
   





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


adamdebreceni commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165598645


##
.github/workflows/ci.yml:
##
@@ -302,13 +306,132 @@ jobs:
   if [ -d ~/.ccache ]; then mv ~/.ccache .; fi
   mkdir build
   cd build
-  cmake -DUSE_SHARED_LIBS= -DSTRICT_GSL_CHECKS=AUDIT -DCI_BUILD=ON -DDISABLE_JEMALLOC=ON -DENABLE_AWS=ON -DENABLE_LIBRDKAFKA=ON -DENABLE_MQTT=ON -DENABLE_AZURE=ON -DENABLE_SQL=ON \
-  -DENABLE_SPLUNK=ON -DENABLE_GCP=ON -DENABLE_OPC=ON -DENABLE_PYTHON_SCRIPTING=ON -DENABLE_LUA_SCRIPTING=ON -DENABLE_KUBERNETES=ON -DENABLE_TEST_PROCESSORS=ON -DENABLE_PROMETHEUS=ON \
-  -DDOCKER_BUILD_ONLY=ON -DDOCKER_CCACHE_DUMP_LOCATION=$HOME/.ccache ..
+  cmake ${DOCKER_CMAKE_FLAGS} ..
   make docker
+  - name: Save docker image
+    run: cd build && docker save -o minifi_docker.tar apacheminificpp:$(grep CMAKE_PROJECT_VERSION:STATIC CMakeCache.txt | cut -d "=" -f2)
+  - name: Upload artifact
+    uses: actions/upload-artifact@v3
+    with:
+      name: minifi_docker
+      path: build/minifi_docker.tar
+  docker_tests_q1:
+    name: "Docker integration tests 1/4"
+    needs: docker_build
+    runs-on: ubuntu-20.04
+    timeout-minutes: 180
+    steps:
+      - id: checkout
+        uses: actions/checkout@v3
+      - id: run_cmake
+        name: Run CMake
+        run: |
+          mkdir build
+          cd build
+          cmake ${DOCKER_CMAKE_FLAGS} ..
+      - name: Download artifact
+        uses: actions/download-artifact@v3

Review Comment:
   is it guaranteed that this step downloads the previous job's artifact? could another (PR) run overwrite the artifact?






[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165592135


##
extensions/aws/CMakeLists.txt:
##
@@ -34,6 +34,9 @@ add_library(minifi-aws SHARED ${SOURCES})
 target_link_libraries(minifi-aws PUBLIC ${LIBMINIFI} Threads::Threads)
 
 target_wholearchive_library_private(minifi-aws AWS::aws-cpp-sdk-s3)
+if(NOT CMAKE_SYSTEM_PROCESSOR MATCHES "(x86)|(X86)")

Review Comment:
   I actually made the change before I read Martin's comment; I thought it would be better to link the library on any non-x86 architecture, as it may be needed there as well. But as I checked https://stackoverflow.com/questions/70475665/what-are-the-possible-values-of-cmake-system-processor there are more possibilities than I thought; as you said, AMD64 should also have been included on Windows. So it may be better to concentrate only on arm64 possibilities, and we'll see in the future if we want to support more. I changed it to match only on arm64, ARM64, aarch64 and armv8 in 4037275310298711c7f949011b3e09e32a10ccdb. (From what I gathered, the overall arm64 architecture possibilities are arm64, ARM64, aarch64, aarch64_be, armv8b and armv8l on Windows, Linux and macOS.)
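   To sanity-check the pattern outside of CMake, here is an illustrative C++ approximation (not the build code itself) of the expression `CMAKE_SYSTEM_PROCESSOR MATCHES "(arm64)|(ARM64)|(aarch64)|(armv8)"` — CMake's `MATCHES` does an unanchored search, mirrored here by `std::regex_search`:

   ```cpp
   #include <cassert>
   #include <regex>
   #include <string>

   // Returns true if the processor string would match the arm64 variants
   // listed in the comment above (unanchored substring search, like MATCHES).
   bool looksLikeArm64(const std::string& processor) {
     static const std::regex arm64_re("(arm64)|(ARM64)|(aarch64)|(armv8)");
     return std::regex_search(processor, arm64_re);
   }

   int main() {
     // Variants mentioned above:
     assert(looksLikeArm64("arm64"));
     assert(looksLikeArm64("aarch64"));
     assert(looksLikeArm64("aarch64_be"));  // substring match on "aarch64"
     assert(looksLikeArm64("armv8l"));      // substring match on "armv8"
     // Non-arm64 values that should not match:
     assert(!looksLikeArm64("x86_64"));
     assert(!looksLikeArm64("AMD64"));
     return 0;
   }
   ```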









[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711905#comment-17711905
 ] 

macdoor615 commented on NIFI-11409:
---

[~exceptionfactory] You are right. The current implementation of NiFi is spec compliant. My issue should not be a bug but a feature request. I suggest NiFi support user-agent-based applications in a future version. In this way, NiFi could support more complex network environments. In fact, the current WebUI of NiFi is already very powerful.

 

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, 截屏2023-04-08 12.40.30.png, 
> 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:225)
>         at 
> org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:190)
>         at 
> 

[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1551: MINIFICPP-2091 Add ARM64 support for docker system tests

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1551:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1551#discussion_r1165563766


##
docker/test/integration/resources/kafka_broker/conf/server.properties:
##
@@ -28,15 +28,15 @@ broker.id=0
 # listeners = listener_name://host_name:port
 #   EXAMPLE:
 # listeners = PLAINTEXT://your.host.name:9092
-#listeners=PLAINTEXT://:9092
+listeners=PLAINTEXT://kafka-broker:9092,SSL://kafka-broker:9093,SASL_PLAINTEXT://kafka-broker:9094,SASL_SSL://kafka-broker:9095,SSL_HOST://0.0.0.0:29093,PLAINTEXT_HOST://0.0.0.0:29092,SASL_PLAINTEXT_HOST://0.0.0.0:29094,SASL_SSL_HOST://0.0.0.0:29095
 
 # Hostname and port the broker will advertise to producers and consumers. If 
not set,
 # it uses the value for "listeners" if configured.  Otherwise, it will use the 
value
 # returned from java.net.InetAddress.getCanonicalHostName().
-#advertised.listeners=PLAINTEXT://your.host.name:9092
+advertised.listeners=PLAINTEXT://kafka-broker:9092,PLAINTEXT_HOST://localhost:29092,SSL://kafka-broker:9093,SSL_HOST://localhost:29093,SASL_PLAINTEXT://kafka-broker:9094,SASL_PLAINTEXT_HOST://localhost:29094,SASL_SSL://kafka-broker:9095,SASL_SSL_HOST://localhost:29095

Review Comment:
   Does the order of these matter?  It would be easier to read if the two lists 
were in the same order.






[GitHub] [nifi] markap14 commented on a diff in pull request #7003: NIFI-11241: Initial implementation of Python-based Processor API with…

2023-04-13 Thread via GitHub


markap14 commented on code in PR #7003:
URL: https://github.com/apache/nifi/pull/7003#discussion_r1165569083


##
nifi-nar-bundles/nifi-py4j-bundle/nifi-python-framework/src/main/python/framework/ExtensionManager.py:
##
@@ -0,0 +1,531 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import importlib
+import sys
+import importlib.util  # Note requires Python 3.4+
+import inspect
+import logging
+import subprocess
+import ast
+import pkgutil
+from pathlib import Path
+
+logger = logging.getLogger("org.apache.nifi.py4j.ExtensionManager")
+
+# A simple wrapper class to encompass a processor type and its version
+class ExtensionId:
+def __init__(self, classname=None, version=None):
+self.classname = classname
+self.version = version
+
+def __hash__(self):
+return hash((self.classname, self.version))
+
+def __eq__(self, other):
+return (self.classname, self.version) == (other.classname, other.version)
+
+
+class ExtensionDetails:
+class Java:
+implements = ['org.apache.nifi.python.PythonProcessorDetails']
+
+def __init__(self, gateway, type, version='Unknown', dependencies=None, source_location=None, package_name=None, description=None, tags=None):
+self.gateway = gateway
+if dependencies is None:
+dependencies = []
+if tags is None:
+tags = []
+
+self.type = type
+self.version = version
+self.dependencies = dependencies
+self.source_location = source_location
+self.package_name = package_name
+self.description = description
+self.tags = tags
+
+def getProcessorType(self):
+return self.type
+
+def getProcessorVersion(self):
+return self.version
+
+def getSourceLocation(self):
+return self.source_location
+
+def getPyPiPackageName(self):
+return self.package_name
+
+def getDependencies(self):
+list = self.gateway.jvm.java.util.ArrayList()
+for dep in self.dependencies:
+list.add(dep)
+
+return list
+
+def getCapabilityDescription(self):
+return self.description
+
+def getTags(self):
+list = self.gateway.jvm.java.util.ArrayList()
+for tag in self.tags:
+list.add(tag)
+
+return list
+
+
+
+
+class ExtensionManager:
+"""
+ExtensionManager is responsible for discovery of extensions types and the 
lifecycle management of those extension types.
+Discovery of extension types includes finding what extension types are 
available
+(e.g., which Processor types exist on the system), as well as information 
about those extension types, such as
+the extension's documentation (tags and capability description).
+
+Lifecycle management includes determining the third-party dependencies 
that an extension has and ensuring that those
+third-party dependencies have been imported.
+"""
+
processorInterfaces = ['org.apache.nifi.python.processor.FlowFileTransform', 'org.apache.nifi.python.processor.RecordTransform']
+processor_details = {}
+processor_class_by_name = {}
+module_files_by_extension_type = {}
+dependency_directories = {}
+
+def __init__(self, gateway):
+self.gateway = gateway
+
+
+def getProcessorTypes(self):
+"""
+:return: a list of Processor types that have been discovered by the 
#discoverExtensions method
+"""
+return self.processor_details.values()
+
+def getProcessorClass(self, type, version, work_dir):
+"""
+Returns the Python class that can be used to instantiate a processor 
of the given type.
+Additionally, it ensures that the required third-party dependencies 
are on the system path in order to ensure that
+the necessary libraries are available to the Processor so that it can 
be instantiated and used.
+
+:param type: the type of Processor
+:param version: the version of the Processor
+:param work_dir: the working directory for extensions
+:return: the Python class that can be used to instantiate a Processor 
of the given type and version
+
+:raises ValueError: if there 

[GitHub] [nifi] markap14 commented on pull request #7003: NIFI-11241: Initial implementation of Python-based Processor API with…

2023-04-13 Thread via GitHub


markap14 commented on PR #7003:
URL: https://github.com/apache/nifi/pull/7003#issuecomment-1507012164

   Thanks for the thorough review @exceptionfactory . I've pushed a couple new 
commits that should address the concerns raised (except for the few that I 
responded to directly). Or I created Jiras to address them. For example, 
https://issues.apache.org/jira/browse/NIFI-11448 to speed up the adding of a 
Python Processor to the canvas and 
https://issues.apache.org/jira/browse/NIFI-11446 for better handling of the 
case when a Python process dies.
   
   As for the dependency management - I am ok with a follow-on activity that 
allows inclusion of a `requirements.txt` but I definitely don't think we should 
eliminate the ability to define the requirements within the Processor itself. 
This is actually not an unheard-of practice. I actually implemented it that way 
based on what Apache Airflow allows for: 
https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html
 - this is an example of using their @task.virtualenv decorator:
   ```
   @task.virtualenv(
   task_id="virtualenv_python", requirements=["colorama==0.4.0"], 
system_site_packages=False
   )
   ```
   I think this makes great sense. While in some cases a `requirements.txt` 
could perhaps be more appropriate, providing this inline makes great sense for 
a large number of use cases. For example, for a user who develops a custom 
Processor. If they want to share that Processor with others in their 
organization, they should be able to just send their python file. It should not 
require some external requirements.txt file - doing so means they'd have to zip 
up a directory. And then the person receiving it would have to unzip the file, 
would have to know where to unzip it, what directories are included in the zip, 
etc. It's very doable but makes everything far more complicated.
   
   Otherwise, I think the newest commits and noted Jiras cover all of your 
feedback.
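   For illustration, a minimal sketch of what an inline dependency declaration could look like in a single-file Python processor, in the spirit of the Airflow decorator above. The nested-class layout and attribute names (`ProcessorDetails`, `dependencies`) are assumptions for the sketch, not NiFi's published API:
   ```python
   # Hypothetical single-file processor that declares its third-party
   # dependencies inline, like Airflow's @task.virtualenv(requirements=[...]).
   # Class and attribute names here are illustrative assumptions.
   class CompressJson:
       class ProcessorDetails:
           version = "0.0.1"
           # Declared next to the code, so sharing the processor means
           # sending this one file -- no separate requirements.txt to zip up.
           dependencies = ["orjson==3.8.10"]

       def transform(self, context, flowfile):
           # processor logic would go here
           pass


   # The framework could read the declared dependencies via introspection:
   declared = CompressJson.ProcessorDetails.dependencies
   ```
   With this shape, the receiving framework only needs the `.py` file itself to know what to install.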





[GitHub] [nifi] markap14 commented on a diff in pull request #7003: NIFI-11241: Initial implementation of Python-based Processor API with…

2023-04-13 Thread via GitHub


markap14 commented on code in PR #7003:
URL: https://github.com/apache/nifi/pull/7003#discussion_r1165547987


##
nifi-nar-bundles/nifi-py4j-bundle/nifi-py4j-bridge/src/main/java/org/apache/nifi/py4j/PythonProcess.java:
##
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.py4j;
+
+import org.apache.nifi.py4j.client.JavaObjectBindings;
+import org.apache.nifi.py4j.client.NiFiPythonGateway;
+import org.apache.nifi.py4j.client.StandardPythonClient;
+import org.apache.nifi.py4j.server.NiFiGatewayServer;
+import org.apache.nifi.python.ControllerServiceTypeLookup;
+import org.apache.nifi.python.PythonController;
+import org.apache.nifi.python.PythonProcessConfig;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import py4j.CallbackClient;
+import py4j.GatewayServer;
+
+import javax.net.ServerSocketFactory;
+import javax.net.SocketFactory;
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Collections;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+
+// TODO / Figure Out for MVP:
+//  MUST DO:
+//  - Documentation
+//  - Admin Guide
+//  - JavaDocs
+//  - Developer Guide
+//  - Explain how communication between Java & Python work.
+//  - Java is preferred, Python is slower and more expensive b/c 
of network
+//  - Different Extension Points (FlowFileTransform, 
RecordTransform)
+//  - What the API Looks like, Links to JavaDocs for 
ProcessContext, etc.
+//  - Example Code
+//  - Exposing properties
+//  - Relationships
+//  - Controller Services
+//  - Need to update docs to show the interfaces that are 
exposed, explain how to get these...
+//  - Design Doc
+//  - Setup proper logging on the Python side: 
https://docs.python.org/2/howto/logging-cookbook.html#using-file-rotation
+//  - For FlowFileTransform, allow the result to contain either a byte 
array or a String. If a String, just convert in the parent class.
+//  - Figure out how to deal with Python Packaging
+//  - Need to figure out how to deal with additionalDetails.html, 
docs directory in python project typically?
+//  - Understand how to deal with versioning
+//  - Look at performance improvements for Py4J - socket comms appear to 
be INCREDIBLY slow.
+//  - Create test that calls Python 1M times. Just returns 
'hello'. See how long it takes
+//  - Create test that calls Python 1M times. Returns .toString() and see how long it takes.
+//  - Will help to understand if it's the call from Java to Python 
that's slow, Python to Java, or both.
+//  - Performance concern for TransformRecord
+//  - Currently, triggering the transform() method is pretty fast. 
But then the Result object comes back and we have to call into the Python side 
to call the getters
+//over and over. Need to look into instead serializing the 
entire response as JSON and sending that back.
+//  - Also, since this is heavy JSON processing, might want to 
consider ORJSON or something like that instead of inbuilt JSON parser/generator
+//  - Test pip install nifi-my-proc, does nifi pick it up?
+//  - When ran DetectObjectInImage with multiple threads, Python died. 
Need to figure out why.
+//  - If Python Process dies, need to create a new process and need to 
then create all of the Processors that were in that Process and initialize them.
+//- Milestone 2 or 3, not Milestone 1.
+//  - Remove test-pypi usage from ExtensionManager.py
+//  - Additional Interfaces beyond just FlowFileTransform
+//  - FlowFileSource
+//  - Restructure Maven projects
+//  - Should this all go under Framework?
+//
+//
+//  CONSIDER:
+//  - Clustering: Ensure component on all nodes?
+//  - Consider "pip freeze" type of thing to ensure that python 
dependencies are same across nodes when joining cluster.
+//  - Update python code 

[jira] [Created] (NIFI-11448) Speed up process of adding a Python Processor

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11448:
-

 Summary: Speed up process of adding a Python Processor
 Key: NIFI-11448
 URL: https://issues.apache.org/jira/browse/NIFI-11448
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


When a Python Processor is added to the canvas, it is quite slow. Especially 
the first time a Processor of that type is added. A lot of actions take place 
in order to enable that. This needs to be made faster. This may be done either 
by performing some of the actions more eagerly, or they may be done in a 
background thread, such as using an @OnAdded annotation etc.
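The background-thread option can be sketched as follows (names are hypothetical, not NiFi's API): kick off the expensive per-type setup when the processor is added, so the canvas operation returns immediately, and block only on first use.

{code:java}
# Illustrative Python sketch of deferred initialization.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def expensive_setup():
    # stand-in for creating a venv, installing dependencies, importing modules
    return "ready"

class LazyInitProcessor:
    def __init__(self):
        # Analogous to work triggered by an @OnAdded-style hook.
        self._setup = executor.submit(expensive_setup)

    def on_trigger(self):
        # The first trigger waits for setup to complete if it hasn't yet.
        return self._setup.result()

processor = LazyInitProcessor()
state = processor.on_trigger()
{code}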





[jira] [Created] (NIFI-11447) Improve performance for TransformRecord implementations

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11447:
-

 Summary: Improve performance for TransformRecord implementations
 Key: NIFI-11447
 URL: https://issues.apache.org/jira/browse/NIFI-11447
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


Currently, triggering the transform() method is pretty fast. But then the 
Result object comes back and we have to call into the Python side to call the 
getters over and over. We need to look into instead serializing the entire 
response as JSON and sending that back. Also, since this is heavy JSON 
processing, might want to consider ORJSON or something like that instead of 
inbuilt JSON parser/generator
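A sketch of the proposed optimization: marshal the whole transform result as one JSON document that crosses the Java/Python socket once, instead of one Py4J round trip per getter. Class and field names below are illustrative, not the actual API.

{code:java}
# Single-payload marshalling sketch (Python).
import json

class RecordTransformResult:
    def __init__(self, record, schema_name, relationship):
        self.record = record
        self.schema_name = schema_name
        self.relationship = relationship

    def to_json(self):
        # One payload; the Java side parses it once rather than calling
        # getRecord(), getSchemaName(), getRelationship() over the gateway.
        return json.dumps({
            "record": self.record,
            "schemaName": self.schema_name,
            "relationship": self.relationship,
        })

payload = json.loads(
    RecordTransformResult({"id": 1}, "user", "success").to_json()
)
{code}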





[jira] [Created] (NIFI-11446) Better handling for cases where Python process dies

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11446:
-

 Summary: Better handling for cases where Python process dies
 Key: NIFI-11446
 URL: https://issues.apache.org/jira/browse/NIFI-11446
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


If a Python process dies, we need the ability to detect this, re-launch the 
Process, and recreate the Processors that are a part of the Process, and then 
restore the Processors' configuration and enable/start them. Essentially, if 
the Python process dies, the framework should spawn a new process and allow the 
Processor to keep running.





[jira] [Created] (NIFI-11445) Support packaging of Python processors

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11445:
-

 Summary: Support packaging of Python processors
 Key: NIFI-11445
 URL: https://issues.apache.org/jira/browse/NIFI-11445
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


We need to support packaging additionalDetails.html as part of a Python 
Processor.

We also need to figure out how to (and document how to) package a Python 
Processor using Pypi so that custom processors can be easily made available to 
others. We should follow a common convention of detecting any package that has 
a well-known prefix, such as "nifi-" and import those as Processors when 
searching for new Python Processors.
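A sketch of prefix-based discovery: PyPI package names like "nifi-my-proc" import as "nifi_my_proc", so a scan of the interpreter's installed top-level modules can filter on the underscore form. The "nifi_" prefix is the convention suggested above, not a settled API.

{code:java}
# Prefix-scan sketch (Python).
import pkgutil

def filter_by_prefix(names, prefix="nifi_"):
    return sorted(n for n in names if n.startswith(prefix))

def discover_installed(prefix="nifi_"):
    # Real discovery would scan the interpreter's installed modules:
    return filter_by_prefix((m.name for m in pkgutil.iter_modules()), prefix)

# Deterministic illustration with a contrived module list:
candidates = filter_by_prefix(["numpy", "nifi_geo", "requests", "nifi_text"])
{code}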





[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711887#comment-17711887
 ] 

David Handermann commented on NIFI-11409:
-

[~macdoor615] 

Although it is possible to think of the NiFi UI and the NiFi Server as separate 
applications, the current OIDC integration does not follow that approach.

[RFC 6749 Section 2.1|https://www.rfc-editor.org/rfc/rfc6749.html#section-2.1] 
defines two different types of clients: {{confidential}} and {{public}}. Under 
the same heading, Section 2.1 also defines {{web applications}} and {{user-agent 
based applications}}. Following those definitions, NiFi falls into the 
confidential web application category. That is why the NiFi server currently 
handles the token request and token revocation communication with the 
Authorization Server.

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, 截屏2023-04-08 12.40.30.png, 
> 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/; "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> 

[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165532281


##
extensions/aws/CMakeLists.txt:
##
@@ -34,6 +34,9 @@ add_library(minifi-aws SHARED ${SOURCES})
 target_link_libraries(minifi-aws PUBLIC ${LIBMINIFI} Threads::Threads)
 
 target_wholearchive_library_private(minifi-aws AWS::aws-cpp-sdk-s3)
+if(NOT CMAKE_SYSTEM_PROCESSOR MATCHES "(x86)|(X86)")

Review Comment:
   why is this "not (x86 or X86)" instead of "aarch64 or arm64" as originally 
suggested by @martinzink?
   
   if we go the negative route, do we want to add x64 and amd64?
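   To make the question concrete: CMake's `MATCHES` does an unanchored regex search (like `re.search`), so the Python below emulates which `CMAKE_SYSTEM_PROCESSOR` values each variant of the condition catches. This is only an illustration of the review question, not part of the build.
   ```python
   import re

   def cmake_matches(value, pattern):
       # CMake's MATCHES is an unanchored regex search.
       return re.search(pattern, value) is not None

   processors = ["x86_64", "X86", "amd64", "x64", "aarch64", "arm64"]

   # The negative form from the diff:
   negative = [p for p in processors if not cmake_matches(p, "(x86)|(X86)")]
   # The positive form originally suggested:
   positive = [p for p in processors if cmake_matches(p, "(aarch64)|(arm64)")]
   ```
   The negative form keeps `amd64` and `x64` alongside the ARM names, which is exactly why the question about adding them arises.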






[jira] [Created] (NIFI-11444) Improve FlowFileTransform to allow returning a String for the content instead of byte[]

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11444:
-

 Summary: Improve FlowFileTransform to allow returning a String for 
the content instead of byte[]
 Key: NIFI-11444
 URL: https://issues.apache.org/jira/browse/NIFI-11444
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


The FlowFileTransform Python class returns a FlowFileTransformResult. If the 
contents are to be returned, they must be provided as a byte[]. But we should 
also allow providing the contents as a String and deal with the conversion 
behind the scenes, in order to provide a simpler API.
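A minimal sketch of the proposed convenience (illustrative, not the actual NiFi API): the result accepts either str or bytes for the contents and normalizes to bytes behind the scenes.

{code:java}
# String/bytes normalization sketch (Python).
class FlowFileTransformResult:
    def __init__(self, relationship, contents=None, encoding="utf-8"):
        self.relationship = relationship
        if isinstance(contents, str):
            # Convert transparently so processor authors may return a String.
            contents = contents.encode(encoding)
        self.contents = contents

from_str = FlowFileTransformResult("success", contents="hello")
from_bytes = FlowFileTransformResult("success", contents=b"hello")
{code}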





[jira] [Created] (NIFI-11443) Setup proper logging for Python framework

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11443:
-

 Summary: Setup proper logging for Python framework
 Key: NIFI-11443
 URL: https://issues.apache.org/jira/browse/NIFI-11443
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework
Reporter: Mark Payne


Currently the python framework establishes logging to logs/nifi-python.log 
(directory configured in nifi.properties). But we need to establish proper 
logging with log file rotation, etc.
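A sketch of size-based rotation using the stdlib handler referenced in the Python logging cookbook. The path, size limit, and backup count are placeholder values, not the eventual nifi.properties configuration.

{code:java}
# Rotating-log sketch (Python).
import logging
import logging.handlers
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "nifi-python.log")

handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=10 * 1024 * 1024,  # rotate after ~10 MB
    backupCount=5,              # keep nifi-python.log.1 .. .5
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

logger = logging.getLogger("org.apache.nifi.py4j")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("python framework started")
handler.flush()
{code}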





[jira] [Created] (NIFI-11442) Add docs to explain how to use Controller Services from Python Processors

2023-04-13 Thread Mark Payne (Jira)
Mark Payne created NIFI-11442:
-

 Summary: Add docs to explain how to use Controller Services from 
Python Processors
 Key: NIFI-11442
 URL: https://issues.apache.org/jira/browse/NIFI-11442
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Documentation & Website
Reporter: Mark Payne


Python Processors can make use of Controller Services. But in order to do so, 
they need to know what methods are available on those Controller Services. We 
need to update the python-developer-guide to explain how to use the NiFi Docs 
in order to determine which interfaces are exposed by a Controller Service, and 
how to then make use of those interfaces.





[GitHub] [nifi] MikeThomsen opened a new pull request, #7171: NIFI-11441 Removed OpenCypher client service because the core depende…

2023-04-13 Thread via GitHub


MikeThomsen opened a new pull request, #7171:
URL: https://github.com/apache/nifi/pull/7171

   …ncy appears to be unsupported for quite some time.
   
   
   # Summary
   
   [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165506164


##
.github/workflows/ci.yml:
##


Review Comment:
   @martinzink all right, I'm okay with that
   @szaszm unfortunately that will not work due to the previously mentioned 
container naming conflicts






[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


szaszm commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165478090


##
.github/workflows/ci.yml:
##


Review Comment:
   In the meantime, maybe we could do something like this:
   ```
   make docker-verify-q1 &
   make docker-verify-q2 &
   make docker-verify-q3 &
   make docker-verify-q4 &
   wait %1 %2 %3 %4
   ```






[GitHub] [nifi-minifi-cpp] martinzink commented on pull request #761: MINIFICPP-1008 - Chunkio integration into nanofi

2023-04-13 Thread via GitHub


martinzink commented on PR #761:
URL: https://github.com/apache/nifi-minifi-cpp/pull/761#issuecomment-1506901910

   I'm closing this due to inactivity and nanofi is already kinda deprecated.





[GitHub] [nifi-minifi-cpp] martinzink closed pull request #761: MINIFICPP-1008 - Chunkio integration into nanofi

2023-04-13 Thread via GitHub


martinzink closed pull request #761: MINIFICPP-1008 - Chunkio integration into 
nanofi
URL: https://github.com/apache/nifi-minifi-cpp/pull/761





[jira] [Updated] (NIFI-11441) Remove OpenCypher client service from the graph bundle

2023-04-13 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-11441:

Description: The core dependency from the OpenCypher project has not been 
updated since 2019 and is becoming a blocker to updating Gremlin dependencies. 
It's doubtful that anyone is using it, as OpenCypher never really took off as 
far as I can tell.  (was: The core dependency from the OpenCypher project has 
not been updated since 2019 and is becoming a blocker to updating Gremlin 
dependencies. )

> Remove OpenCypher client service from the graph bundle
> --
>
> Key: NIFI-11441
> URL: https://issues.apache.org/jira/browse/NIFI-11441
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The core dependency from the OpenCypher project has not been updated since 
> 2019 and is becoming a blocker to updating Gremlin dependencies. It's 
> doubtful that anyone is using it, as OpenCypher never really took off as far 
> as I can tell.





[jira] [Created] (NIFI-11441) Remove OpenCypher client service from the graph bundle

2023-04-13 Thread Mike Thomsen (Jira)
Mike Thomsen created NIFI-11441:
---

 Summary: Remove OpenCypher client service from the graph bundle
 Key: NIFI-11441
 URL: https://issues.apache.org/jira/browse/NIFI-11441
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


The core dependency from the OpenCypher project has not been updated since 2019 
and is becoming a blocker to updating Gremlin dependencies. 





[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


martinzink commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165458301


##
.github/workflows/ci.yml:
##


Review Comment:
   Besides the reruns, since the download/upload artifact overhead is minimal 
separating them vs keeping them together is roughly the same cloud cpu time, 
but the separated one finishes considerably faster IRL (~3-4 times faster).
   When we manage to run these in parallel on the same host we can reconsider 
merging them, but until then I think its a good idea to shorten the overall CI 
runtime if there are no major drawbacks.






[jira] [Updated] (NIFI-11429) Upgrade Gremlin to 3.6.2

2023-04-13 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-11429:

Fix Version/s: 1.latest
   2.latest
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Gremlin to 3.6.2
> 
>
> Key: NIFI-11429
> URL: https://issues.apache.org/jira/browse/NIFI-11429
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Upgrade Gremlin to 3.6.2
> [https://github.com/apache/tinkerpop/blob/master/CHANGELOG.asciidoc] 





[jira] [Commented] (NIFI-11429) Upgrade Gremlin to 3.6.2

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711870#comment-17711870
 ] 

ASF subversion and git services commented on NIFI-11429:


Commit 420a69806239bf842c556176c566baf664c724f8 in nifi's branch 
refs/heads/support/nifi-1.x from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=420a698062 ]

NIFI-11429 - Upgrade Gremlin to 3.6.2

This closes #7160

Signed-off-by: Mike Thomsen 


> Upgrade Gremlin to 3.6.2
> 
>
> Key: NIFI-11429
> URL: https://issues.apache.org/jira/browse/NIFI-11429
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Upgrade Gremlin to 3.6.2
> [https://github.com/apache/tinkerpop/blob/master/CHANGELOG.asciidoc] 





[jira] [Commented] (NIFI-11429) Upgrade Gremlin to 3.6.2

2023-04-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711869#comment-17711869
 ] 

ASF subversion and git services commented on NIFI-11429:


Commit 061f3a13805375666550d1ef5e0e62c2e237aa1b in nifi's branch 
refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=061f3a1380 ]

NIFI-11429 - Upgrade Gremlin to 3.6.2

This closes #7160

Signed-off-by: Mike Thomsen 


> Upgrade Gremlin to 3.6.2
> 
>
> Key: NIFI-11429
> URL: https://issues.apache.org/jira/browse/NIFI-11429
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Upgrade Gremlin to 3.6.2
> [https://github.com/apache/tinkerpop/blob/master/CHANGELOG.asciidoc] 





[GitHub] [nifi] asfgit closed pull request #7160: NIFI-11429 - Upgrade Gremlin to 3.6.2

2023-04-13 Thread via GitHub


asfgit closed pull request #7160: NIFI-11429 - Upgrade Gremlin to 3.6.2
URL: https://github.com/apache/nifi/pull/7160


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1503: MINIFICPP-2039 Dust off minificontroller

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1503:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1503#discussion_r1165408681


##
controller/MiNiFiController.cpp:
##
@@ -75,33 +80,60 @@ int main(int argc, char **argv) {
   secure_context = 
std::make_shared("ControllerSocketProtocolSSL",
 configuration);
   secure_context->onEnable();
 }
+  } else {
+secure_context->onEnable();
   }
+  return secure_context;
+}
 
-  std::string value;
+int main(int argc, char **argv) {
+  const auto logger = 
minifi::core::logging::LoggerConfiguration::getConfiguration().getLogger("controller");
 
+  const std::string minifi_home = determineMinifiHome(logger);
+  if (minifi_home.empty()) {
+// determineMinifiHome already logged everything we need
+return -1;
+  }
+
+  const auto configuration = std::make_shared();
+  configuration->setHome(minifi_home);
+  configuration->loadConfigureFile(DEFAULT_NIFI_PROPERTIES_FILE);
+
+  const auto log_properties = 
std::make_shared();
+  log_properties->setHome(minifi_home);
+  log_properties->loadConfigureFile(DEFAULT_LOG_PROPERTIES_FILE);
+  
minifi::core::logging::LoggerConfiguration::getConfiguration().initialize(log_properties);
+
+  std::shared_ptr secure_context;
+  try {
+secure_context = getSSLContextService(configuration);
+  } catch(const minifi::Exception& ex) {
+logger->log_error(ex.what());
+exit(1);
+  }
   auto stream_factory_ = minifi::io::StreamFactory::getInstance(configuration);
 
   std::string host = "localhost";
-  std::string portStr;
-  std::string caCert;
+  std::string port_str;
+  std::string ca_cert;
   int port = -1;
 
   cxxopts::Options options("MiNiFiController", "MiNiFi local agent 
controller");
   options.positional_help("[optional args]").show_positional_help();
 
-  options.add_options()  //NOLINT
-  ("h,help", "Shows Help")  //NOLINT
-  ("host", "Specifies connecting host name", cxxopts::value())  
//NOLINT
-  ("port", "Specifies connecting host port", cxxopts::value())  //NOLINT
-  ("stop", "Shuts down the provided component", 
cxxopts::value>())  //NOLINT
-  ("start", "Starts provided component", 
cxxopts::value>())  //NOLINT
-  ("l,list", "Provides a list of connections or processors", 
cxxopts::value())  //NOLINT
-  ("c,clear", "Clears the associated connection queue", 
cxxopts::value>())  //NOLINT
-  ("getsize", "Reports the size of the associated connection queue", 
cxxopts::value>())  //NOLINT
-  ("updateflow", "Updates the flow of the agent using the provided flow file", 
cxxopts::value())  //NOLINT
-  ("getfull", "Reports a list of full connections")  //NOLINT
-  ("jstack", "Returns backtraces from the agent")  //NOLINT
-  ("manifest", "Generates a manifest for the current binary")  //NOLINT
+  options.add_options()
+  ("h,help", "Shows Help")
+  ("host", "Specifies connecting host name", cxxopts::value())
+  ("port", "Specifies connecting host port", cxxopts::value())
+  ("stop", "Shuts down the provided component", 
cxxopts::value>())
+  ("start", "Starts provided component", 
cxxopts::value>())
+  ("l,list", "Provides a list of connections or processors", 
cxxopts::value())
+  ("c,clear", "Clears the associated connection queue", 
cxxopts::value>())
+  ("getsize", "Reports the size of the associated connection queue", 
cxxopts::value>())
+  ("updateflow", "Updates the flow of the agent using the provided flow file", 
cxxopts::value())
+  ("getfull", "Reports a list of full connections")
+  ("jstack", "Returns backtraces from the agent")
+  ("manifest", "Generates a manifest for the current binary")

Review Comment:
   Updated in 03f64d39caf87072f2cb0063cb4ea164f0ffe29e



##
controller/tests/ControllerTests.cpp:
##
@@ -0,0 +1,545 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "range/v3/algorithm/find.hpp"
+
+#include "TestBase.h"
+#include "Catch.h"
+#include "io/ClientSocket.h"
+#include "core/Processor.h"
+#include "Controller.h"
+#include "c2/ControllerSocketProtocol.h"
+#include "utils/IntegrationTestUtils.h"
+#include "c2/ControllerSocketMetricsPublisher.h"
+#include "core/controller/ControllerServiceProvider.h"
+#include 

[GitHub] [nifi] MikeThomsen commented on pull request #7160: NIFI-11429 - Upgrade Gremlin to 3.6.2

2023-04-13 Thread via GitHub


MikeThomsen commented on PR #7160:
URL: https://github.com/apache/nifi/pull/7160#issuecomment-1506779850

   @exceptionfactory the other graph services module needs to be reexamined 
because its main selling point was OpenCypher support, and I'm not sure what 
level of support that has today.





[GitHub] [nifi] mark-bathori opened a new pull request, #7170: NIFI-11440: Speed up Hive Metastore based unit tests

2023-04-13 Thread via GitHub


mark-bathori opened a new pull request, #7170:
URL: https://github.com/apache/nifi/pull/7170

   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   [NIFI-11440](https://issues.apache.org/jira/browse/NIFI-11440)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [x] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1508: MINIFICPP-2040 - Avoid deserializing flow files just to be deleted

2023-04-13 Thread via GitHub


szaszm commented on code in PR #1508:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1508#discussion_r1165298009


##
extensions/rocksdb-repos/FlowFileRepository.cpp:
##
@@ -41,52 +41,70 @@ void FlowFileRepository::flush() {
 return;
   }
   auto batch = opendb->createWriteBatch();
-  rocksdb::ReadOptions options;
 
-  std::vector> purgeList;
+  std::list flow_files;
 
-  std::vector keys;
-  std::list keystrings;
-  std::vector values;
-
-  while (keys_to_delete.size_approx() > 0) {
-std::string key;
-if (keys_to_delete.try_dequeue(key)) {
-  keystrings.push_back(std::move(key));  // rocksdb::Slice doesn't copy 
the string, only grabs ptrs. Hacky, but have to ensure the required lifetime of 
the strings.
-  keys.push_back(keystrings.back());
+  while (keys_to_delete_.size_approx() > 0) {
+ExpiredFlowFileInfo info;
+if (keys_to_delete_.try_dequeue(info)) {
+  flow_files.push_back(std::move(info));
 }
   }
-  auto multistatus = opendb->MultiGet(options, keys, );
 
-  for (size_t i = 0; i < keys.size() && i < values.size() && i < 
multistatus.size(); ++i) {
-if (!multistatus[i].ok()) {
-  logger_->log_error("Failed to read key from rocksdb: %s! DB is most 
probably in an inconsistent state!", keys[i].data());
-  keystrings.remove(keys[i].data());
-  continue;
-}
+  deserializeFlowFilesWithNoContentClaim(opendb.value(), flow_files);
 
-utils::Identifier containerId;
-auto eventRead = 
FlowFileRecord::DeSerialize(gsl::make_span(values[i]).as_span(), content_repo_, containerId);
-if (eventRead) {
-  purgeList.push_back(eventRead);
-}
-logger_->log_debug("Issuing batch delete, including %s, Content path %s", 
eventRead->getUUIDStr(), eventRead->getContentFullPath());
-batch.Delete(keys[i]);
+  for (auto& ff : flow_files) {
+batch.Delete(ff.key);
+logger_->log_debug("Issuing batch delete, including %s, Content path %s", 
ff.key, ff.content ? ff.content->getContentFullPath() : "null");
   }
 
   auto operation = [, ]() { return 
opendb->Write(rocksdb::WriteOptions(), ); };
 
   if (!ExecuteWithRetry(operation)) {
-for (const auto& key : keystrings) {
-  keys_to_delete.enqueue(key);  // Push back the values that we could get 
but couldn't delete
+for (auto&& ff : flow_files) {
+  keys_to_delete_.enqueue(std::move(ff));
 }
 return;  // Stop here - don't delete from content repo while we have 
records in FF repo
   }
 
   if (content_repo_) {
-for (const auto  : purgeList) {
-  auto claim = ffr->getResourceClaim();
-  if (claim) claim->decreaseFlowFileRecordOwnedCount();
+for (auto& ff : flow_files) {
+  if (ff.content) {
+ff.content->decreaseFlowFileRecordOwnedCount();
+  }
+}
+  }
+}
+
+void 
FlowFileRepository::deserializeFlowFilesWithNoContentClaim(minifi::internal::OpenRocksDb&
 opendb, std::list& flow_files) {
+  std::vector keys;
+  std::vector::iterator> key_positions;
+  for (auto it = flow_files.begin(); it != flow_files.end(); ++it) {
+if (!it->content) {
+  keys.push_back(it->key);
+  key_positions.push_back(it);
+}
+  }
+  if (!keys.empty()) {

Review Comment:
   I would flip this condition, make an early return, and reduce the 
indentation level of the code below.
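   The suggested change is the standard guard-clause refactoring. A minimal
   illustration of the shape (written in Java with invented names; the actual
   code under review is the C++ `MultiGet`/deserialization loop):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class Main {
       // Before: the whole body sits inside `if (!keys.isEmpty()) { ... }`.
       // After: flip the condition into a guard clause and return early, so
       // the main logic stays at the top indentation level.
       static List<String> lookup(List<String> keys) {
           if (keys.isEmpty()) {
               return new ArrayList<>();  // early return: nothing to fetch
           }
           List<String> values = new ArrayList<>();
           for (String key : keys) {
               values.add("value-for-" + key);  // stand-in for the real lookup
           }
           return values;
       }

       public static void main(String[] args) {
           System.out.println(lookup(List.of("a", "b")).size());  // prints 2
       }
   }
   ```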






[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1503: MINIFICPP-2039 Dust off minificontroller

2023-04-13 Thread via GitHub


szaszm commented on code in PR #1503:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1503#discussion_r1165289303


##
controller/tests/ControllerTests.cpp:
##
@@ -0,0 +1,545 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "range/v3/algorithm/find.hpp"
+
+#include "TestBase.h"
+#include "Catch.h"
+#include "io/ClientSocket.h"
+#include "core/Processor.h"
+#include "Controller.h"
+#include "c2/ControllerSocketProtocol.h"
+#include "utils/IntegrationTestUtils.h"
+#include "c2/ControllerSocketMetricsPublisher.h"
+#include "core/controller/ControllerServiceProvider.h"
+#include "controllers/SSLContextService.h"
+#include "utils/StringUtils.h"
+#include "state/UpdateController.h"
+
+using namespace std::literals::chrono_literals;
+
+namespace org::apache::nifi::minifi::test {
+
+class TestStateController : public minifi::state::StateController {
+ public:
+  TestStateController()
+: is_running(false) {
+  }
+
+  std::string getComponentName() const override {
+return "TestStateController";
+  }
+
+  minifi::utils::Identifier getComponentUUID() const override {
+static auto dummyUUID = 
minifi::utils::Identifier::parse("12345678-1234-1234-1234-123456789abc").value();
+return dummyUUID;
+  }
+
+  int16_t start() override {
+is_running = true;
+return 0;
+  }
+
+  int16_t stop() override {
+is_running = false;
+return 0;
+  }
+
+  bool isRunning() const override {
+return is_running;
+  }
+
+  int16_t pause() override {
+return 0;
+  }
+
+  int16_t resume() override {
+return 0;
+  }
+
+  std::atomic is_running;
+};
+
+class TestBackTrace : public BackTrace {
+ public:
+  using BackTrace::BackTrace;
+  void addTraceLines(uint32_t line_count) {
+for (uint32_t i = 1; i <= line_count; ++i) {
+  addLine("bt line " + std::to_string(i) + " for " + getName());
+}
+  }
+};
+
+class TestUpdateSink : public minifi::state::StateMonitor {
+ public:
+  explicit TestUpdateSink(std::shared_ptr controller)
+: is_running(true),
+  clear_calls(0),
+  controller(std::move(controller)),
+  update_calls(0) {
+  }
+
+  void executeOnComponent(const std::string&, 
std::function func) override {
+func(*controller);
+  }
+
+  void 
executeOnAllComponents(std::function 
func) override {
+func(*controller);
+  }
+
+  std::string getComponentName() const override {
+return "TestUpdateSink";
+  }
+
+  minifi::utils::Identifier getComponentUUID() const override {
+static auto dummyUUID = 
minifi::utils::Identifier::parse("12345678-1234-1234-1234-123456789abc").value();
+return dummyUUID;
+  }
+
+  int16_t start() override {
+is_running = true;
+return 0;
+  }
+
+  int16_t stop() override {
+is_running = false;
+return 0;
+  }
+
+  bool isRunning() const override {
+return is_running;
+  }
+
+  int16_t pause() override {
+return 0;
+  }
+
+  int16_t resume() override {
+return 0;
+  }
+  std::vector getTraces() override {
+std::vector traces;
+TestBackTrace trace1("trace1");
+trace1.addTraceLines(2);
+traces.push_back(trace1);
+TestBackTrace trace2("trace2");
+trace2.addTraceLines(3);
+traces.push_back(trace2);
+return traces;
+  }
+
+  int16_t drainRepositories() override {
+return 0;
+  }
+
+  std::map> 
getDebugInfo() override {
+return {};
+  }
+
+  int16_t clearConnection(const std::string& /*connection*/) override {
+clear_calls++;
+return 0;
+  }
+
+  std::vector getSupportedConfigurationFormats() const override {
+return {};
+  }
+
+  int16_t applyUpdate(const std::string& /*source*/, const std::string& 
/*configuration*/, bool /*persist*/ = false, const std::optional& 
/*flow_id*/ = std::nullopt) override {
+update_calls++;
+return 0;
+  }
+
+  int16_t applyUpdate(const std::string& /*source*/, const 
std::shared_ptr& /*updateController*/) override {
+update_calls++;
+return 0;
+  }
+
+  uint64_t getUptime() override {
+return 8765309;
+  }
+
+  std::atomic is_running;
+  std::atomic clear_calls;
+  std::shared_ptr controller;
+  std::atomic update_calls;
+};
+
+class TestControllerSocketReporter : 

[jira] [Created] (NIFI-11440) Speed up Hive Metastore based unit tests

2023-04-13 Thread Mark Bathori (Jira)
Mark Bathori created NIFI-11440:
---

 Summary: Speed up Hive Metastore based unit tests
 Key: NIFI-11440
 URL: https://issues.apache.org/jira/browse/NIFI-11440
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mark Bathori
Assignee: Mark Bathori


The Hive Metastore based unit tests take a lot of time because the metastore is 
initialized before every test. These executions can be sped up by changing the 
initialization to run only once before the tests start.
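
The ticket doesn't include the test code itself; as a hedged sketch of the 
pattern it describes (one-time initialization shared by all tests, analogous 
to a JUnit 5 `@BeforeAll` method), with all names invented for illustration:

```java
public class Main {
    // Hypothetical stand-in for the expensive Hive Metastore startup.
    static int initCount = 0;
    static String metastoreUri;

    // One-time initialization, analogous to a JUnit @BeforeAll method,
    // instead of re-running the metastore setup before every test.
    static String metastore() {
        if (metastoreUri == null) {
            initCount++;                               // pay the startup cost once
            metastoreUri = "thrift://localhost:9083";  // illustrative value
        }
        return metastoreUri;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            metastore();  // three "tests" reuse the same instance
        }
        System.out.println(initCount);  // prints 1
    }
}
```

However many tests run, the startup cost is paid exactly once, which is the 
speed-up the ticket is after.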





[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165231758


##
.github/workflows/ci.yml:
##


Review Comment:
   I think it's good to separate the test jobs from the build jobs, and it would 
be cool to have that for the other actions too. Although I'm not convinced that 
we need to rerun these jobs so often that we need to separate them into 4 
quadrants. I think it's more common for some of the tests in the ctest runs to 
fail than for the ones running in docker. I would rather encourage identifying 
and fixing the flaky test cases instead of making it easier to rerun them (which 
would help people disregard the issues).






[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1538: MINIFICPP-2073 Separate docker build from docker tests in CI

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1538:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1538#discussion_r1165217460


##
.github/workflows/ci.yml:
##


Review Comment:
   There is already a ticket for running tests in parallel, but as Martin 
mentioned it requires larger changes on the test framework: 
https://issues.apache.org/jira/browse/MINIFICPP-1641






[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


lordgamez commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165209315


##
extensions/aws/CMakeLists.txt:
##


Review Comment:
   Restricted linkage and removed test workaround in 
3ad1d51a4b678eed4550251c33f5a0a62d19d524






[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


martinzink commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165182366


##
extensions/aws/CMakeLists.txt:
##


Review Comment:
   I think something like 
   ```
   if (CMAKE_SYSTEM_PROCESSOR MATCHES "(aarch64)|(arm64)")
   target_wholearchive_library_private(minifi-aws AWS::aws-checksums)
   endif()
   ```
   should work






[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


szaszm commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165139414


##
extensions/aws/CMakeLists.txt:
##


Review Comment:
   Can we restrict the "wholearchive" linkage to ARM64?






[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1554: MINIFICPP-2058 Fix AWS extension link error on ARM64

2023-04-13 Thread via GitHub


martinzink commented on code in PR #1554:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1554#discussion_r1165135660


##
extensions/aws/CMakeLists.txt:
##


Review Comment:
   Now with this fix, it seems we can remove the workarounds I've included in 
MINIFICPP-2048 that fixed the compile issues in the tests. 
https://github.com/apache/nifi-minifi-cpp/commit/06d5467c6ca2d9a20ca9a969fd768e03025c362d#diff-4a191f6c5036df0f852be61f427c6cc029220f0dd8dbc9600fcfa59e760b2edfR21
 
https://github.com/apache/nifi-minifi-cpp/commit/06d5467c6ca2d9a20ca9a969fd768e03025c362d#diff-4a191f6c5036df0f852be61f427c6cc029220f0dd8dbc9600fcfa59e760b2edfR38






[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1555: MINIFICPP-2097 Fix build failure when ENABLE_ALL is ON

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1555:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1555#discussion_r1165117425


##
extensions/script/CMakeLists.txt:
##


Review Comment:
   I have created Jira https://issues.apache.org/jira/browse/MINIFICPP-2098 as 
a follow-up.






[jira] [Created] (MINIFICPP-2098) ENABLE_ALL should enable (almost) all extensions

2023-04-13 Thread Ferenc Gerlits (Jira)
Ferenc Gerlits created MINIFICPP-2098:
-

 Summary: ENABLE_ALL should enable (almost) all extensions
 Key: MINIFICPP-2098
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2098
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.14.0
Reporter: Ferenc Gerlits


Currently, {{ENABLE_ALL=ON}} enables most, but not all, extensions.

For example, Bustache, Kubernetes, OPC, OpenCV, OpenWSMAN, PDH, Systemd and 
Tensorflow are not currently enabled by {{ENABLE_ALL}}.

Tensorflow should probably not be enabled, as it requires special libraries to 
be installed on the system and cannot install them during the build process 
(and these libraries are not available as packages).

The rest of the extensions should be enabled by {{ENABLE_ALL=ON}}.





[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1165098428


##
libminifi/include/core/Property.h:
##
@@ -96,7 +96,7 @@ class Property {
   std::string getDisplayName() const;
   std::vector getAllowedTypes() const;
   std::string getDescription() const;
-  std::shared_ptr getValidator() const;
+  gsl::not_null getValidator() const;

Review Comment:
   also done in 1a816b300b0dd3be28f8a5c55188893fca238159






[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1553: MINIFICPP-2094 Change validators from shared to raw pointers

2023-04-13 Thread via GitHub


fgerlits commented on code in PR #1553:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1553#discussion_r1165098248


##
libminifi/include/core/CachedValueValidator.h:
##
@@ -67,29 +67,21 @@ class CachedValueValidator {
 return *this;
   }
 
-  explicit CachedValueValidator(const std::shared_ptr& 
other) : validator_(other) {}
+  explicit CachedValueValidator(const gsl::not_null 
other) : validator_(other) {}

Review Comment:
   done in 1a816b300b0dd3be28f8a5c55188893fca238159






[GitHub] [nifi] arkadius opened a new pull request, #7169: NIFI-8161 NiFi EL: migration from SimpleDateFormat to DateTimeFormatter: rebased to 2.0

2023-04-13 Thread via GitHub


arkadius opened a new pull request, #7169:
URL: https://github.com/apache/nifi/pull/7169

   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   Optimization improvement: Migrates NiFi Expression Language from 
SimpleDateFormat to DateTimeFormatter 
([NIFI-8161](https://issues.apache.org/jira/browse/NIFI-8161)). This pull 
request is a reopening of #4773, rebased onto the 2.0 (main) branch.
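
   The diff isn't shown in this message; the usual motivation for such a 
migration — assumed here, not quoted from the PR — is that `SimpleDateFormat` 
is mutable and not thread-safe, while `DateTimeFormatter` is immutable and can 
be shared. A minimal sketch:

   ```java
   import java.time.LocalDateTime;
   import java.time.format.DateTimeFormatter;

   public class Main {
       // DateTimeFormatter is immutable, so one shared instance is safe to
       // use from many threads; SimpleDateFormat would need one instance per
       // thread (or external synchronization).
       static final DateTimeFormatter FORMATTER =
               DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss");

       static String format(LocalDateTime dateTime) {
           return FORMATTER.format(dateTime);
       }

       public static void main(String[] args) {
           LocalDateTime dt = LocalDateTime.of(2023, 4, 13, 10, 30, 0);
           System.out.println(format(dt));  // prints 2023/04/13 10:30:00
       }
   }
   ```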
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [x] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [x] Documentation formatting appears as expected in rendered files
   





[jira] [Commented] (NIFI-11409) OIDC Token Revocation Error on Logout

2023-04-13 Thread macdoor615 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711715#comment-17711715
 ] 

macdoor615 commented on NIFI-11409:
---

[~exceptionfactory] 

You said "As the client, NiFi needs to call the revocation endpoint directly, 
not through the browser"

I think NiFi consists of two applications: one is the NiFi WebUI running in the 
browser, and the other is the NiFi Server running in the background. My 
understanding of the RFC 6749 specification is that the NiFi WebUI acts in the 
role of Client, and the NiFi server acts in the role of Resource Server. The 
Client exchanges tokens with the Authorization Server and the Resource Server; 
the Resource Server does not exchange tokens with the Authorization Server 
directly.

So I think it should be the NiFi WebUI that exchanges tokens with Keycloak. The 
NiFi server cannot act in the roles of Client and Resource Server at the same 
time.

[https://www.rfc-editor.org/rfc/rfc6749#section-1.5]

!RFC6749 flow.png|width=635,height=351!

> OIDC Token Revocation Error on Logout
> -
>
> Key: NIFI-11409
> URL: https://issues.apache.org/jira/browse/NIFI-11409
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.21.0
> Environment: NiFi 1.21.0 cluster with 4 nodes
> openjdk version "11.0.18" 2023-01-17 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.18.0.10-1.el7_9) (build 
> 11.0.18+10-LTS, mixed mode, sharing)
> Linux hb3-ifz-bridge-004 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 
> 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Keycloak 20.0.2
>Reporter: macdoor615
>Assignee: David Handermann
>Priority: Major
> Attachments: RFC6749 flow.png, 截屏2023-04-08 12.40.30.png, 
> 截屏2023-04-09 13.17.25.png, 截屏2023-04-09 13.33.25.png
>
>
> My NiFi 1.21.0 cluster has 4 nodes and using oidc authentication.
> I can log in properly, but when I click logout on webui, I got HTTP ERROR 503.
> !截屏2023-04-08 12.40.30.png|width=479,height=179!
> I also find 503 in nifi-request.log
>  
> {code:java}
> 10.12.69.33 - - [08/Apr/2023:04:24:13 +] "GET 
> /nifi-api/access/oidc/logout HTTP/1.1" 503 425 
> "https://36.138.166.203:18088/nifi/; "Mozilla/5.0 (Macintosh; Intel Mac OS X 
> 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 
> Safari/605.1.15"{code}
>  
> and WARNs in nifi-user.log, 36.133.55.100 is load balance's external IP. It 
> can not be accessed in intra net.
>  
> {code:java}
> 2023-04-08 12:24:43,511 WARN [NiFi Web Server-59] 
> o.a.n.w.s.o.r.StandardTokenRevocationResponseClient Token Revocation Request 
> processing failed
> org.springframework.web.client.ResourceAccessException: I/O error on POST 
> request for 
> "https://36.133.55.100:8943/realms/zznode/protocol/openid-connect/revoke": 
> connect timed out; nested exception is java.net.SocketTimeoutException: 
> connect timed out
>         at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:791)
>         at 
> org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:666)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getResponseEntity(StandardTokenRevocationResponseClient.java:81)
>         at 
> org.apache.nifi.web.security.oidc.revocation.StandardTokenRevocationResponseClient.getRevocationResponse(StandardTokenRevocationResponseClient.java:70)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processRefreshTokenRevocation(OidcLogoutSuccessHandler.java:181)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.processLogoutRequest(OidcLogoutSuccessHandler.java:159)
>         at 
> org.apache.nifi.web.security.oidc.logout.OidcLogoutSuccessHandler.onLogoutSuccess(OidcLogoutSuccessHandler.java:127)
>         at 
> org.apache.nifi.web.security.logout.StandardLogoutFilter.doFilterInternal(StandardLogoutFilter.java:62)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.apache.nifi.web.security.csrf.SkipReplicatedCsrfFilter.doFilterInternal(SkipReplicatedCsrfFilter.java:59)
>         at 
> org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
>         at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:361)
>         at 
> org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
>         at 
> 

[jira] [Assigned] (MINIFICPP-2081) CWEL saved logs support

2023-04-13 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni reassigned MINIFICPP-2081:
-

Assignee: Adam Debreceni

> CWEL saved logs support
> ---
>
> Key: MINIFICPP-2081
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2081
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Adam Debreceni
>Priority: Minor
>
> According to the EvtQuery docs, specifying a file path as Channel should 
> normally work, but for some reason, it doesn't. This may be an easy feature 
> addition if it only needs some small fix.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2023-04-13 Thread Peter Schmitzer (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schmitzer resolved NIFI-10353.

Resolution: Workaround

Removed the Event Hub processor and used a standard Kafka consumer instead.

>  ConsumeAzureEventHub does not stop even though output queue is backpressured
> -
>
> Key: NIFI-10353
> URL: https://issues.apache.org/jira/browse/NIFI-10353
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.16.3
>Reporter: Peter Schmitzer
>Priority: Major
>
> ConsumeAzureEventHub seems not to respect backpressure and continues to 
> send flowfiles to the output queue even though it is backpressured. This 
> endlessly growing queue will ultimately lead to NiFi going into overload 
> and becoming unhealthy.
> It was expected that the processor will stop putting further data in the 
> outgoing queue as soon as it is backpressured.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2023-04-13 Thread Peter Schmitzer (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711708#comment-17711708
 ] 

Peter Schmitzer commented on NIFI-10353:


Update from our side:
Azure Event Hubs supports connections over the standard Kafka protocol, so 
standard Kafka consumers can (and, I believe, should) be used for that 
purpose. We have removed this processor from all our flows and have no need 
to improve this.
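As a sketch of that workaround: Event Hubs exposes a Kafka-compatible endpoint on port 9093 using SASL_SSL with the PLAIN mechanism, where the username is the literal string `$ConnectionString` and the password is the namespace connection string. The namespace name, group ID, and connection string below are placeholders:

```python
def event_hubs_kafka_config(namespace, connection_string):
    """Consumer settings for Event Hubs' Kafka-compatible endpoint.

    Event Hubs speaks the Kafka protocol on port 9093; authentication is
    SASL PLAIN over TLS with the literal username "$ConnectionString".
    """
    return {
        "bootstrap.servers": f"{namespace}.servicebus.windows.net:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "$ConnectionString",
        "sasl.password": connection_string,
        "group.id": "example-consumer-group",  # placeholder
        "auto.offset.reset": "earliest",
    }


config = event_hubs_kafka_config(
    "my-namespace",
    "Endpoint=sb://my-namespace.servicebus.windows.net/;...")
```

A dict like this can be handed to a librdkafka-based client such as `confluent_kafka.Consumer`, or the same values can be set on a NiFi ConsumeKafka processor, which then reads from the Event Hub without the ConsumeAzureEventHub processor.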

>  ConsumeAzureEventHub does not stop even though output queue is backpressured
> -
>
> Key: NIFI-10353
> URL: https://issues.apache.org/jira/browse/NIFI-10353
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.16.3
>Reporter: Peter Schmitzer
>Priority: Major
>
> ConsumeAzureEventHub seems not to respect backpressure and continues to 
> send flowfiles to the output queue even though it is backpressured. This 
> endlessly growing queue will ultimately lead to NiFi going into overload 
> and becoming unhealthy.
> It was expected that the processor will stop putting further data in the 
> outgoing queue as soon as it is backpressured.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

