[jira] [Updated] (NIFI-11261) GetAzureEventHub does not disconnect on Primary Node Changes

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11261:

Status: Patch Available  (was: In Progress)

> GetAzureEventHub does not disconnect on Primary Node Changes
> --
>
> Key: NIFI-11261
> URL: https://issues.apache.org/jira/browse/NIFI-11261
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.20.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{GetAzureEventHub}} Processor creates an Event Hub Client when scheduled 
> and maintains the client instance until the Processor is stopped. This approach 
> works for standalone deployments, but is not suitable for clustered 
> deployments because the client maintains connections after the primary node 
> changes.
> The {{ConsumeAzureEventHub}} Processor provides the preferred approach for 
> consuming events, but {{GetAzureEventHub}} should be updated to handle 
> primary node state changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] exceptionfactory opened a new pull request, #7023: NIFI-11261 Add Primary Node State handling to GetAzureEventHub

2023-03-08 Thread via GitHub


exceptionfactory opened a new pull request, #7023:
URL: https://github.com/apache/nifi/pull/7023

   # Summary
   
   [NIFI-11261](https://issues.apache.org/jira/browse/NIFI-11261) Adds Primary 
Node State Change handling to `GetAzureEventHub` to improve Processor behavior 
in a clustered deployment.
   
   Although `ConsumeAzureEventHub` provides the recommended approach for 
receiving events, `GetAzureEventHub` can be configured to run on the cluster 
primary node only to reduce the potential for duplication.
   
   Updates include checking for the Execution Node status and avoiding 
unnecessary Event Hub Consumer Client creation. The new handler method for 
Primary Node State changes closes the Consumer Client when a node has its 
primary status revoked, and creates a new Consumer Client when a node is 
elected primary.
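
   The revoke/elect behavior described above can be sketched as a minimal,
self-contained Java stub. The class, enum, and field names here are
illustrative stand-ins for the actual patch, which uses NiFi's
`@OnPrimaryNodeStateChange` notification annotation:

```java
// Minimal sketch (assumed names): how a processor can react to primary
// node state changes by tearing down or recreating its client.
public class PrimaryNodeHandlingSketch {
    enum PrimaryNodeState { ELECTED_PRIMARY_NODE, PRIMARY_NODE_REVOKED }

    boolean clientOpen; // stands in for the Event Hub Consumer Client

    void onPrimaryNodeStateChange(final PrimaryNodeState newState) {
        if (newState == PrimaryNodeState.PRIMARY_NODE_REVOKED) {
            clientOpen = false; // close the Consumer Client and release connections
        } else if (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE) {
            clientOpen = true;  // create a new Consumer Client on the new primary
        }
    }

    public static void main(String[] args) {
        PrimaryNodeHandlingSketch p = new PrimaryNodeHandlingSketch();
        p.onPrimaryNodeStateChange(PrimaryNodeState.ELECTED_PRIMARY_NODE);
        System.out.println(p.clientOpen);
        p.onPrimaryNodeStateChange(PrimaryNodeState.PRIMARY_NODE_REVOKED);
        System.out.println(p.clientOpen);
    }
}
```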
   
   Additional changes include upgrading the Qpid Proton J dependency from 
0.34.0 to 
[0.34.1](https://qpid.apache.org/releases/qpid-proton-j-0.34.1/release-notes.html)
 and setting the Consumer Client Identifier to the Processor Identifier for 
improved connection tracking.
   
   New unit test methods provide basic confirmation of Processor behavior when 
configured for primary node execution.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 11
 - [X] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-11262) Correct test scope for Bouncy Castle in nifi-security-kerberos

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11262:

Fix Version/s: 2.0.0
   1.21.0

> Correct test scope for Bouncy Castle in nifi-security-kerberos
> --
>
> Key: NIFI-11262
> URL: https://issues.apache.org/jira/browse/NIFI-11262
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.19.0, 1.20.0, 1.19.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Changes introduced when upgrading to Bouncy Castle 1.71 inadvertently changed 
> the scope of {{bcprov-jdk18on}} in the {{nifi-security-kerberos}} module, 
> resulting in unnecessary packaging of the Bouncy Castle library in multiple 
> modules.
> The Bouncy Castle Provider library should be a test dependency in 
> {{nifi-security-kerberos}}. The {{bcprov-jdk18on}} library is necessary only 
> for the {{hadoop-minikdc}} test dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11262) Correct test scope for Bouncy Castle in nifi-security-kerberos

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11262:

Status: Patch Available  (was: Open)

> Correct test scope for Bouncy Castle in nifi-security-kerberos
> --
>
> Key: NIFI-11262
> URL: https://issues.apache.org/jira/browse/NIFI-11262
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.19.1, 1.20.0, 1.19.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Changes introduced when upgrading to Bouncy Castle 1.71 inadvertently changed 
> the scope of {{bcprov-jdk18on}} in the {{nifi-security-kerberos}} module, 
> resulting in unnecessary packaging of the Bouncy Castle library in multiple 
> modules.
> The Bouncy Castle Provider library should be a test dependency in 
> {{nifi-security-kerberos}}. The {{bcprov-jdk18on}} library is necessary only 
> for the {{hadoop-minikdc}} test dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] exceptionfactory opened a new pull request, #7022: NIFI-11262 Correct scope for bcprov-jdk18on in nifi-security-kerberos

2023-03-08 Thread via GitHub


exceptionfactory opened a new pull request, #7022:
URL: https://github.com/apache/nifi/pull/7022

   # Summary
   
   [NIFI-11262](https://issues.apache.org/jira/browse/NIFI-11262) Corrects the 
scope of the Bouncy Castle `bcprov-jdk18on` library in `nifi-security-kerberos` 
to `test`, avoiding unnecessary runtime inclusion of the library in multiple 
modules.
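
   For illustration, the corrected dependency declaration would look roughly
like the following fragment (the version is managed elsewhere in the build and
is omitted here); `test` scope keeps the artifact out of packaged NARs:

```xml
<!-- Illustrative sketch: Bouncy Castle provider restricted to test scope,
     needed only by the hadoop-minikdc test dependency -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk18on</artifactId>
    <scope>test</scope>
</dependency>
```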
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-11262) Correct test scope for Bouncy Castle in nifi-security-kerberos

2023-03-08 Thread David Handermann (Jira)
David Handermann created NIFI-11262:
---

 Summary: Correct test scope for Bouncy Castle in 
nifi-security-kerberos
 Key: NIFI-11262
 URL: https://issues.apache.org/jira/browse/NIFI-11262
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.19.1, 1.20.0, 1.19.0
Reporter: David Handermann
Assignee: David Handermann


Changes introduced when upgrading to Bouncy Castle 1.71 inadvertently changed 
the scope of {{bcprov-jdk18on}} in the {{nifi-security-kerberos}} module, 
resulting in unnecessary packaging of the Bouncy Castle library in multiple 
modules.

The Bouncy Castle Provider library should be a test dependency in 
{{nifi-security-kerberos}}. The {{bcprov-jdk18on}} library is necessary only 
for the {{hadoop-minikdc}} test dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11260) Add SSL Context Service in AWSCredentialsProviderControllerService

2023-03-08 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-11260:
---
Status: Patch Available  (was: Open)

> Add SSL Context Service in AWSCredentialsProviderControllerService 
> ---
>
> Key: NIFI-11260
> URL: https://issues.apache.org/jira/browse/NIFI-11260
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AWSCredentialsProviderControllerService supports custom endpoints for Session 
> Token Service used by Assume Role credential strategy. The custom endpoint 
> may use HTTPS with a corporate certificate which is not signed by a public CA 
> from the default truststore.
> Add SSL Context Service to support custom endpoints with HTTPS.
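
A self-contained sketch of what the requested SSL Context Service provides
under the hood: building an `SSLContext` from a custom truststore so HTTPS
calls to a corporate STS endpoint can be verified. The file path and password
below are assumptions for illustration, not part of the actual change:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class TruststoreSslContextSketch {
    // Load a PKCS12 truststore and build an SSLContext that trusts its entries
    static SSLContext fromTruststore(String path, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext;
    }
}
```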



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] turcsanyip opened a new pull request, #7021: NIFI-11260: Added SSL Context Service in AWSCredentialsProviderContro…

2023-03-08 Thread via GitHub


turcsanyip opened a new pull request, #7021:
URL: https://github.com/apache/nifi/pull/7021

   …llerService
   
   # Summary
   
   [NIFI-11260](https://issues.apache.org/jira/browse/NIFI-11260)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
such as `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] gresockj commented on pull request #6993: NIFI-11231 Stateless NiFi sensitive parameter context support

2023-03-08 Thread via GitHub


gresockj commented on PR #6993:
URL: https://github.com/apache/nifi/pull/6993#issuecomment-1460864476

   
   > Hmm.. I'm not sure Parameter Value Provider would work for us since these 
have to be passed in at runtime and known beforehand. We're going to be 
supporting a variety of flows stored in Registry. The goal is to run stateless 
NiFi with any given Registry url, bucket id, flow id, and flow version without 
any knowledge of what properties/params users have configured in their flows. 
So those flows that utilize sensitive parameter contexts need to be able to 
work under any running stateless NiFi pod/container. We wouldn't be changing the 
run command or properties files for each flow that's processed with stateless 
NiFi. Let me chat some more with @Dye357 to see if there's anything I'm missing.
   
   Hi @slambrose, what you describe should be possible with the existing 
EnvironmentVariableParameterValueProvider.  Let me know if you have any other 
questions about the setup.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-11217) NiFi NAR Maven Plugin fails to build external NARs with transitive, provided dependencies.

2023-03-08 Thread Kevin Doran (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698075#comment-17698075
 ] 

Kevin Doran commented on NIFI-11217:


Credit to Julien G for reporting and helping us identify the issue on the 
Apache NiFi Slack:
https://apachenifi.slack.com/archives/C0L9S92JY/p1676561597970849 

> NiFi NAR Maven Plugin fails to build external NARs with transitive, provided 
> dependencies.
> --
>
> Key: NIFI-11217
> URL: https://issues.apache.org/jira/browse/NIFI-11217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: nifi-nar-maven-plugin-1.4.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
> Fix For: nifi-nar-maven-plugin-1.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> It appears that the NAR maven plugin was benefiting from behavior in older 
> versions of the maven-dependency-tree library that would resolve artifacts in 
> addition to poms when crawling dependencies. This guaranteed that they 
> would be in the local Maven repository/cache when in the Extension 
> Documentation generation phase of NAR building.
> Version 1.4.0 of the plugin upgraded maven-dependency-tree to 3.2.0, which 
> included this behavior change to only download poms:
> https://github.com/apache/maven-dependency-tree/commit/b330fa93b70e35c70a8afa75f0404cf47d5935d6
>  
> This broke building NARs that are external from the Apache NiFi 
> repository/project that inherit from (or depend on) NiFi NARs that have 
> transitive dependencies marked as provided, because the Extension 
> Documentation generation needs the full artifact resolved in order to create 
> a working ClassLoader. Not having artifacts resolved results in error 
> messages such as:
> {noformat}
> [INFO] --- nifi-nar-maven-plugin:1.4.0:nar (default-nar) @ 
> nifi-example-processors-nar ---
> [INFO] Copying nifi-example-processors-1.0.jar to 
> /Users/kdoran/dev/code/nifi-dependency-example/nifi-inherits-processor-bundle/nifi-example-processors-nar/target/classes/META-INF/bundled-dependencies/nifi-example-processors-1.0.jar
> [INFO] Generating documentation for NiFi extensions in the NAR...
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-standard-services-api-nar:nar:1.20.0:compile
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-jetty-bundle:nar:1.20.0:compile
> [INFO] Found a dependency on version 1.20.0 of NiFi API
> [ERROR] Could not generate extensions' documentation
> org.apache.maven.plugin.MojoExecutionException: Failed to create Extension 
> Documentation
> at org.apache.nifi.NarMojo.generateDocumentation (NarMojo.java:534)
> at org.apache.nifi.NarMojo.execute (NarMojo.java:505)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
> (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:148)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:117)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:81)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
>  (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
> (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
> at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
> at org.apache.maven.cli.MavenCli.execute (MavenCli.java:972)
> at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:293)
> at org.apache.maven.cli.MavenCli.main (MavenCli.java:196)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
> (NativeMethodAccessorImpl.java:62)
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
> (DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke (Method.java:566)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
> (Launcher.java:282)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
> (Launcher.java:225)
> at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
> (Launcher.java:406)
> at org.codehaus.plexus.classworlds.launcher.Launcher.main 
> (Launcher.java:347)
> Caused by

[jira] [Resolved] (NIFI-3070) Add integration tests for PutAzureEventHub and GetAzureEventHub

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-3070.

Resolution: Won't Fix

Current unit tests use a mocked extension of the Processors to exercise much of 
the behavior. An integration test would require credentials, which could not be 
checked into version control. Closing this issue based on lack of activity.

> Add integration tests for PutAzureEventHub and GetAzureEventHub
> ---
>
> Key: NIFI-3070
> URL: https://issues.apache.org/jira/browse/NIFI-3070
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
>Priority: Minor
>
> There aren't currently any integration tests for PutAzureEventHub or 
> GetAzureEventHub and the unit tests are currently inactive (because they're 
> not in src/test/java). Since there isn't a good way to mock Event Hubs, 
> which makes the unit tests of limited value, some kind of integration tests 
> should be added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-8989) Azure Managed Identities for EventHub - ConsumeAzureEventHub and GetAzureEventHub

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-8989.

Resolution: Duplicate

> Azure Managed Identities for EventHub - ConsumeAzureEventHub and 
> GetAzureEventHub
> -
>
> Key: NIFI-8989
> URL: https://issues.apache.org/jira/browse/NIFI-8989
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Miles Scott-Hill
>Priority: Major
>
> From what I understand, currently just the PutAzureEventHub processor permits 
> Managed Identities via the VM host. Can some wonderful person(s) please add 
> this to the ConsumeAzureEventHub and GetAzureEventHub processors.
>  
> Please see 
> [https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-managed-service-identity]
>  and https://issues.apache.org/jira/browse/NIFI-6149 for further information.
>  
> Thank you  :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-11261) GetAzureEventHub does not disconnect on Primary Node Changes

2023-03-08 Thread David Handermann (Jira)
David Handermann created NIFI-11261:
---

 Summary: GetAzureEventHub does not disconnect on Primary Node 
Changes
 Key: NIFI-11261
 URL: https://issues.apache.org/jira/browse/NIFI-11261
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.20.0
Reporter: David Handermann
Assignee: David Handermann


The {{GetAzureEventHub}} Processor creates an Event Hub Client when scheduled 
and maintains the client instance until the Processor is stopped. This approach 
works for standalone deployments, but is not suitable for clustered deployments 
because the client maintains connections after the primary node changes.

The {{ConsumeAzureEventHub}} Processor provides the preferred approach for 
consuming events, but {{GetAzureEventHub}} should be updated to handle primary 
node state changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-2835) GetAzureEventHub processor should leverage partition offset to better handle restarts

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-2835.

  Assignee: David Handermann  (was: Eric Ulicny)
Resolution: Won't Fix

As described in the comments, ConsumeAzureEventHub is the preferred approach 
for processing records from Azure Event Hubs. It does require Azure Storage for 
checkpointing, so the best solution would be to update ConsumeAzureEventHub to 
support alternative checkpoint locations. More recent updates to the Azure 
Event Hub libraries should make it easier to implement the Checkpoint Storage 
interfaces.

> GetAzureEventHub processor should leverage partition offset to better handle 
> restarts
> -
>
> Key: NIFI-2835
> URL: https://issues.apache.org/jira/browse/NIFI-2835
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Percivall
>Assignee: David Handermann
>Priority: Major
>
> The GetAzureEventHub processor utilizes the Azure client that consists of 
> receivers for each partition. The processor stores them in a map[1] that gets 
> cleared every time the processor is stopped[2]. These receivers have 
> partition offsets which keep track of which message it's currently on and 
> which it should receive next. So currently, when the processor is 
> stopped/restarted, any tracking of which message is next to be received is 
> lost.
> If instead of clearing the map each time, we hold onto the receivers, or kept 
> track of the partitionId/Offsets when stopping, (barring any relevant 
> configuration changes) the processor would restart exactly where it left off 
> with no loss of data.
> This would work very well with NIFI-2826.
> [1]https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java#L122
> [2] 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java#L229



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11218) Upgrade dependencies in NAR Maven Plugin

2023-03-08 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-11218:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade dependencies in NAR Maven Plugin
> 
>
> Key: NIFI-11218
> URL: https://issues.apache.org/jira/browse/NIFI-11218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: nifi-nar-maven-plugin-1.4.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
> Fix For: nifi-nar-maven-plugin-1.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In investigating NIFI-11217, we observed that a number of core dependencies 
> for the NiFi NAR Maven Plugin are far outdated, some a full major version 
> behind. 
> This task is to bring core maven dependencies for the NiFi NAR Maven Plugin 
> up to latest versions, which will require some code changes. Specifically, we 
> depend heavily on maven-dependency 2.x and will need code changes to update 
> to 3.x.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi-maven] bbende merged pull request #30: NIFI-11218 Upgrade core dependencies

2023-03-08 Thread via GitHub


bbende merged PR #30:
URL: https://github.com/apache/nifi-maven/pull/30


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-11217) NiFi NAR Maven Plugin fails to build external NARs with transitive, provided dependencies.

2023-03-08 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-11217:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NiFi NAR Maven Plugin fails to build external NARs with transitive, provided 
> dependencies.
> --
>
> Key: NIFI-11217
> URL: https://issues.apache.org/jira/browse/NIFI-11217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: nifi-nar-maven-plugin-1.4.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
> Fix For: nifi-nar-maven-plugin-1.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> It appears that the NAR maven plugin was benefiting from behavior in older 
> versions of the maven-dependency-tree library that would resolve artifacts in 
> addition to poms when crawling dependencies. This guaranteed that they 
> would be in the local Maven repository/cache when in the Extension 
> Documentation generation phase of NAR building.
> Version 1.4.0 of the plugin upgraded maven-dependency-tree to 3.2.0, which 
> included this behavior change to only download poms:
> https://github.com/apache/maven-dependency-tree/commit/b330fa93b70e35c70a8afa75f0404cf47d5935d6
>  
> This broke building NARs that are external from the Apache NiFi 
> repository/project that inherit from (or depend on) NiFi NARs that have 
> transitive dependencies marked as provided, because the Extension 
> Documentation generation needs the full artifact resolved in order to create 
> a working ClassLoader. Not having artifacts resolved results in error 
> messages such as:
> {noformat}
> [INFO] --- nifi-nar-maven-plugin:1.4.0:nar (default-nar) @ 
> nifi-example-processors-nar ---
> [INFO] Copying nifi-example-processors-1.0.jar to 
> /Users/kdoran/dev/code/nifi-dependency-example/nifi-inherits-processor-bundle/nifi-example-processors-nar/target/classes/META-INF/bundled-dependencies/nifi-example-processors-1.0.jar
> [INFO] Generating documentation for NiFi extensions in the NAR...
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-standard-services-api-nar:nar:1.20.0:compile
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-jetty-bundle:nar:1.20.0:compile
> [INFO] Found a dependency on version 1.20.0 of NiFi API
> [ERROR] Could not generate extensions' documentation
> org.apache.maven.plugin.MojoExecutionException: Failed to create Extension 
> Documentation
> at org.apache.nifi.NarMojo.generateDocumentation (NarMojo.java:534)
> at org.apache.nifi.NarMojo.execute (NarMojo.java:505)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
> (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:148)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:117)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:81)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
>  (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
> (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
> at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
> at org.apache.maven.cli.MavenCli.execute (MavenCli.java:972)
> at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:293)
> at org.apache.maven.cli.MavenCli.main (MavenCli.java:196)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
> (NativeMethodAccessorImpl.java:62)
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
> (DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke (Method.java:566)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
> (Launcher.java:282)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
> (Launcher.java:225)
> at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
> (Launcher.java:406)
> at org.codehaus.plexus.classworlds.launcher.Launcher.main 
> (Launcher.java:347)
> Caused by: org.apache.maven.plugin.MojoExecutionException: Could not resolve 
> local dependency org.apache.nifi:nifi-framework-api:jar:1.20.0
> at 
> or

[GitHub] [nifi-maven] bbende merged pull request #29: NIFI-11217 Fix building external NARs with transitive dependencies...

2023-03-08 Thread via GitHub


bbende merged PR #29:
URL: https://github.com/apache/nifi-maven/pull/29


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (NIFI-2935) GetAzureEventHub freezes after period of reading data

2023-03-08 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-2935.

Resolution: Cannot Reproduce

The GetAzureEventHub Processor has gone through several revisions since NiFi 
1.0.0, so if this still appears to be an issue, it would be helpful to verify 
in more recent versions.

> GetAzureEventHub freezes after period of reading data
> -
>
> Key: NIFI-2935
> URL: https://issues.apache.org/jira/browse/NIFI-2935
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Michiel Moonen
>Priority: Major
> Attachments: nifi logs.zip
>
>
> We have a GetAzureEventHub processor running on an AWS instance to collect 
> data from Azure. It runs quite smoothly under light load (~140 kB / 30 seconds), 
> but after 2 - 14 hours it just stops collecting data. 
> The GetAzureEventHub Processor can be stopped in the UI, but under the hood it 
> doesn't actually stop; the processor freezes and becomes unresponsive. 
> The only option is to kill NiFi (which eventually terminates itself forcefully). 
> It feels like some buffer has been flooded.
> I don't have logs at the moment, will add them soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] exceptionfactory commented on pull request #7016: [NIFI-10792] Fixed bug to allow for processing files larger than 10MB…

2023-03-08 Thread via GitHub


exceptionfactory commented on PR #7016:
URL: https://github.com/apache/nifi/pull/7016#issuecomment-1460568600

   Thanks for the comment @mh013370, this issue is similar to the size limits 
for Tar files resolved in #6369.
   
   @dan-s1, although unit tests are the preferred way to confirm expected 
behavior, they are not optimal for these kinds of scenarios. For this particular 
situation, confirming the existing functionality is good, and runtime 
verification is better than introducing large files or long-running tests into 
the repository.





[GitHub] [nifi] mh013370 commented on pull request #7016: [NIFI-10792] Fixed bug to allow for processing files larger than 10MB…

2023-03-08 Thread via GitHub


mh013370 commented on PR #7016:
URL: https://github.com/apache/nifi/pull/7016#issuecomment-1460537814

   > @exceptionfactory I did not end up including the unit test I had, as it 
was a unit test that used a 20MB file. I would have thought there should be a 
unit test to exercise the change I made. Please advise.
   
   Anecdotally, the NiFi mock framework will read the entire FlowFile contents 
into memory (I discovered this in #6369), so I think you're correct not to 
include unit tests that use larger FlowFiles.





[jira] [Commented] (NIFI-11194) Remove unused gzip CSS and JS bundling from nifi-web-ui

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698016#comment-17698016
 ] 

ASF subversion and git services commented on NIFI-11194:


Commit cc1e5b314b779839bc27fc2d61b86553ebf85d09 in nifi's branch 
refs/heads/support/nifi-1.x from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=cc1e5b314b ]

NIFI-11194 Remove gzip CSS and JS from nifi-web-ui (#6968)

* NIFI-11194 Removed gzip CSS and JS from nifi-web-ui

* NIFI-11194 Updated comment for static content aggregation

This closes #6968 

> Remove unused gzip CSS and JS bundling from nifi-web-ui
> ---
>
> Key: NIFI-11194
> URL: https://issues.apache.org/jira/browse/NIFI-11194
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The {{nifi-web-ui}} includes Maven build plugins to prepare CSS and 
> JavaScript files for bundling. The current configuration also creates gzip 
> versions of each file, which are not used at runtime. These gzip files 
> increase the size of the {{nifi-web-ui}} WAR and also account for a majority of 
> the build time relative to other modules. The gzip processing and bundling should 
> be removed.





[jira] [Commented] (NIFI-11257) Improve Reliability of GitHub Runner Artifact Retrieval

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698015#comment-17698015
 ] 

ASF subversion and git services commented on NIFI-11257:


Commit 699e2e79132d3576c89b81ea046c475ceb8a3319 in nifi's branch 
refs/heads/support/nifi-1.x from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=699e2e7913 ]

NIFI-11257 Enable Maven HTTP pooling and additional retries (#7020)

* NIFI-11257 Enabled Maven HTTP pooling and additional retries

- Enabled HTTP connection pooling for Maven Wagon
- Configured 30 second timeout for HTTP pooled connection lifespan
- Enabled 5 retries for HTTP connections
- Set maximum connection per route to 5 instead of 20
- Enabled retry for sent HTTP requests
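
The bulleted Wagon adjustments above correspond to standard Maven Wagon HTTP properties; a sketch of how they might be set in a project's `.mvn/maven.config` (property names are from the Maven Wagon HTTP provider documentation; the values mirror the bullet points but should be treated as illustrative rather than as the exact NiFi change):

```shell
# .mvn/maven.config — illustrative values, one JVM option per line
-Dmaven.wagon.http.pool=true
-Dmaven.wagon.httpconnectionManager.ttlSeconds=30
-Dmaven.wagon.http.retryHandler.count=5
-Dmaven.wagon.httpconnectionManager.maxPerRoute=5
-Dmaven.wagon.http.retryHandler.requestSentEnabled=true
```

The same properties can alternatively be passed on the `mvn` command line for a one-off build.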

> Improve Reliability of GitHub Runner Artifact Retrieval
> ---
>
> Key: NIFI-11257
> URL: https://issues.apache.org/jira/browse/NIFI-11257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Automated builds using GitHub Actions have failed more often with errors 
> related to artifact retrieval.
> {noformat}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor 
> (attach-descriptor) on project nifi-registry:
> Execution attach-descriptor of goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor failed:
> Plugin org.apache.maven.plugins:maven-site-plugin:3.4 or one of its 
> dependencies could not be resolved:
> Failed to collect dependencies at 
> org.apache.maven.plugins:maven-site-plugin:jar:3.4 -> 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Failed to read artifact descriptor for 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Could not transfer artifact 
> org.apache.maven.reporting:maven-reporting-exec:pom:1.2 from/to central 
> (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]{noformat}
> {noformat}
> Error:  Failed to execute goal on project nifi-hadoop-utils: Could not 
> resolve dependencies for project 
> org.apache.nifi:nifi-hadoop-utils:jar:2.0.0-SNAPSHOT:
> Failed to collect dependencies at org.apache.hadoop:hadoop-common:jar:3.3.4:
> Failed to read artifact descriptor for 
> org.apache.hadoop:hadoop-common:jar:3.3.4:
> Could not transfer artifact org.apache.hadoop:hadoop-common:pom:3.3.4 from/to 
> central (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]
> {noformat}
> Adjusting Maven Wagon connection settings and using an alternative Maven 
> repository mirror may help improve reliability.





[GitHub] [nifi] krisztina-zsihovszki commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-03-08 Thread via GitHub


krisztina-zsihovszki commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1129767879


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -560,6 +566,27 @@ private SalesforceSchemaHolder 
getConvertedSalesforceSchema(String sObject, Stri
 }
 }
 
+private void handleError(ProcessSession session, FlowFile 
originalFlowFile, AtomicBoolean isOriginalTransferred, List<FlowFile> 
outgoingFlowFiles,
+ Exception e, String errorMessage) {
+if (originalFlowFile != null) {
+session.transfer(originalFlowFile, REL_FAILURE);

Review Comment:
   Please consider penalizing the FlowFile before sending it to REL_FAILURE.






[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6968: NIFI-11194 Remove gzip CSS and JS from nifi-web-ui

2023-03-08 Thread via GitHub


exceptionfactory commented on code in PR #6968:
URL: https://github.com/apache/nifi/pull/6968#discussion_r1129777336


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml:
##
@@ -896,81 +896,45 @@
 
 assets/**/*,
 css/common-ui.css,
-css/common-ui.css.gz,

Review Comment:
   Thanks @mcgilman, I pushed a commit updating that comment as suggested.






[GitHub] [nifi] dan-s1 commented on pull request #7016: [NIFI-10792] Fixed bug to allow for processing files larger than 10MB…

2023-03-08 Thread via GitHub


dan-s1 commented on PR #7016:
URL: https://github.com/apache/nifi/pull/7016#issuecomment-1460523830

   @exceptionfactory I did not end up including the unit test I had, as it was a 
unit test that used a 20MB file. I would have thought there should be a unit 
test to exercise the change I made. Please advise.





[GitHub] [nifi] krisztina-zsihovszki commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-03-08 Thread via GitHub


krisztina-zsihovszki commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1129771182


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -102,11 +107,11 @@
 @CapabilityDescription("Retrieves records from a Salesforce sObject. Users can 
add arbitrary filter conditions by setting the 'Custom WHERE Condition' 
property."
 + " The processor can also run a custom query, although record 
processing is not supported in that case."
 + " Supports incremental retrieval: users can define a field in the 
'Age Field' property that will be used to determine when the record was 
created."
-+ " When this property is set the processor will retrieve new records. 
It's also possible to define an initial cutoff value for the age, filtering out 
all older records"
++ " When this property is set the processor will retrieve new records. 
Incremental loading and record-based processing are only supported in 
property-based queries."
++ " It's also possible to define an initial cutoff value for the age, 
filtering out all older records"
 + " even for the first run. In case of 'Property Based Query' this 
processor should run on the Primary Node only."
 + " FlowFile attribute 'record.count' indicates how many records were 
retrieved and written to the output."
-+ " By using 'Custom Query', the processor can accept an optional 
input flowfile and reference the flowfile attributes in the query."
-+ " However, incremental loading and record-based processing are not 
supported in this scenario.")
++ " The processor can accept an optional input flowfile and reference 
the flowfile attributes in the query.")

Review Comment:
   flowfile -> FlowFile






[jira] [Updated] (NIFI-11194) Remove unused gzip CSS and JS bundling from nifi-web-ui

2023-03-08 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-11194:
---
Fix Version/s: 1.21.0

> Remove unused gzip CSS and JS bundling from nifi-web-ui
> ---
>
> Key: NIFI-11194
> URL: https://issues.apache.org/jira/browse/NIFI-11194
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{nifi-web-ui}} includes Maven build plugins to prepare CSS and 
> JavaScript files for bundling. The current configuration also creates gzip 
> versions of each file, which are not used at runtime. These gzip files 
> increase the size of the {{nifi-web-ui}} WAR and also account for a majority of 
> the build time relative to other modules. The gzip processing and bundling should 
> be removed.





[GitHub] [nifi] krisztina-zsihovszki commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-03-08 Thread via GitHub


krisztina-zsihovszki commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1129775136


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -385,144 +354,181 @@ private void processQuery(ProcessContext context, 
ProcessSession session) {
 .collect(Collectors.joining(","));
 }
 
-String querySObject = buildQuery(
-sObject,
-fields,
-customWhereClause,
-ageField,
-initialAgeFilter,
-ageFilterLower,
-ageFilterUpper
-);
+String querySObject = new SalesforceQueryBuilder(incrementalContext)
+.buildQuery(sObject, fields, customWhereClause);
+
+AtomicBoolean isOriginalTransferred = new AtomicBoolean(false);
+List<FlowFile> outgoingFlowFiles = new ArrayList<>();
+Map<String, String> originalAttributes = 
Optional.ofNullable(originalFlowFile)
+.map(FlowFile::getAttributes)
+.orElseGet(HashMap::new);
+
+long startNanos = System.nanoTime();
 
 do {
-FlowFile flowFile = session.create();
-Map<String, String> originalAttributes = flowFile.getAttributes();
-Map<String, String> attributes = new HashMap<>();
+FlowFile outgoingFlowFile = createOutgoingFlowFile(session, 
originalFlowFile);
+outgoingFlowFiles.add(outgoingFlowFile);
+Map<String, String> attributes = new HashMap<>(originalAttributes);
 
 AtomicInteger recordCountHolder = new AtomicInteger();
-long startNanos = System.nanoTime();
-flowFile = session.write(flowFile, out -> {
-try (
-InputStream querySObjectResultInputStream = 
getResultInputStream(nextRecordsUrl.get(), querySObject);
-
-JsonTreeRowRecordReader jsonReader = new 
JsonTreeRowRecordReader(
-querySObjectResultInputStream,
-getLogger(),
-salesForceSchemaHolder.recordSchema,
-DATE_FORMAT,
-TIME_FORMAT,
-DATE_TIME_FORMAT,
-StartingFieldStrategy.NESTED_FIELD,
-STARTING_FIELD_NAME,
-SchemaApplicationStrategy.SELECTED_PART,
-CAPTURE_PREDICATE
-);
-
-RecordSetWriter writer = writerFactory.createWriter(
-getLogger(),
-writerFactory.getSchema(
-originalAttributes,
-salesForceSchemaHolder.recordSchema
-),
-out,
-originalAttributes
-)
-) {
-writer.beginRecordSet();
-
-Record querySObjectRecord;
-while ((querySObjectRecord = jsonReader.nextRecord()) != 
null) {
-writer.write(querySObjectRecord);
-}
-
-WriteResult writeResult = writer.finishRecordSet();
+try {
+outgoingFlowFile = session.write(outgoingFlowFile, 
processRecordsCallback(context, nextRecordsUrl, writerFactory, state, 
incrementalContext,
+salesForceSchemaHolder, querySObject, 
originalAttributes, attributes, recordCountHolder));
+int recordCount = recordCountHolder.get();
 
-Map capturedFields = 
jsonReader.getCapturedFields();
+if (createZeroRecordFlowFiles || recordCount != 0) {
+outgoingFlowFile = 
session.putAllAttributes(outgoingFlowFile, attributes);
 
-
nextRecordsUrl.set(capturedFields.getOrDefault(NEXT_RECORDS_URL, null));
+session.adjustCounter("Records Processed", recordCount, 
false);
+getLogger().info("Successfully written {} records for {}", 
recordCount, outgoingFlowFile);
+} else {
+outgoingFlowFiles.remove(outgoingFlowFile);
+session.remove(outgoingFlowFile);
+}
+} catch (Exception e) {
+if (e.getCause() instanceof IOException) {
+throw new ProcessException("Couldn't get Salesforce 
records", e);
+} else if (e.getCause() instanceof SchemaNotFoundException) {
+handleError(session, originalFlowFile, 
isOriginalTransferred, outgoingFlowFiles, e, "Couldn't create record writer");
+} else if (e.getCause() instanceof MalformedRecordExce

[GitHub] [nifi] krisztina-zsihovszki commented on a diff in pull request #7019: NIFI-11224: Refactor and FF attribute support in WHERE in QuerySalesf…

2023-03-08 Thread via GitHub


krisztina-zsihovszki commented on code in PR #7019:
URL: https://github.com/apache/nifi/pull/7019#discussion_r1129770785


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -102,11 +107,11 @@
 @CapabilityDescription("Retrieves records from a Salesforce sObject. Users can 
add arbitrary filter conditions by setting the 'Custom WHERE Condition' 
property."
 + " The processor can also run a custom query, although record 
processing is not supported in that case."
 + " Supports incremental retrieval: users can define a field in the 
'Age Field' property that will be used to determine when the record was 
created."
-+ " When this property is set the processor will retrieve new records. 
It's also possible to define an initial cutoff value for the age, filtering out 
all older records"
++ " When this property is set the processor will retrieve new records. 
Incremental loading and record-based processing are only supported in 
property-based queries."

Review Comment:
   It'd be useful to separate the "Property based query" and "Custom query" 
modes in the processor description.
   E.g. 'Age Field' does not exist in "Custom query" mode.






[jira] [Commented] (NIFI-11194) Remove unused gzip CSS and JS bundling from nifi-web-ui

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698012#comment-17698012
 ] 

ASF subversion and git services commented on NIFI-11194:


Commit ccea9760f9bf6cf292a46622f6c46fde7e1dbff7 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ccea9760f9 ]

NIFI-11194 Remove gzip CSS and JS from nifi-web-ui (#6968)

* NIFI-11194 Removed gzip CSS and JS from nifi-web-ui

* NIFI-11194 Updated comment for static content aggregation

This closes #6968 

> Remove unused gzip CSS and JS bundling from nifi-web-ui
> ---
>
> Key: NIFI-11194
> URL: https://issues.apache.org/jira/browse/NIFI-11194
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{nifi-web-ui}} includes Maven build plugins to prepare CSS and 
> JavaScript files for bundling. The current configuration also creates gzip 
> versions of each file, which are not used at runtime. These gzip files 
> increase the size of the {{nifi-web-ui}} WAR and also account for a majority of 
> the build time relative to other modules. The gzip processing and bundling should 
> be removed.





[jira] [Updated] (NIFI-11194) Remove unused gzip CSS and JS bundling from nifi-web-ui

2023-03-08 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-11194:
---
Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove unused gzip CSS and JS bundling from nifi-web-ui
> ---
>
> Key: NIFI-11194
> URL: https://issues.apache.org/jira/browse/NIFI-11194
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{nifi-web-ui}} includes Maven build plugins to prepare CSS and 
> JavaScript files for bundling. The current configuration also creates gzip 
> versions of each file, which are not used at runtime. These gzip files 
> increase the size of the {{nifi-web-ui}} WAR and also account for a majority of 
> the build time relative to other modules. The gzip processing and bundling should 
> be removed.





[GitHub] [nifi] mcgilman merged pull request #6968: NIFI-11194 Remove gzip CSS and JS from nifi-web-ui

2023-03-08 Thread via GitHub


mcgilman merged PR #6968:
URL: https://github.com/apache/nifi/pull/6968





[GitHub] [nifi] mcgilman commented on a diff in pull request #6968: NIFI-11194 Remove gzip CSS and JS from nifi-web-ui

2023-03-08 Thread via GitHub


mcgilman commented on code in PR #6968:
URL: https://github.com/apache/nifi/pull/6968#discussion_r1129762289


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml:
##
@@ -896,81 +896,45 @@
 
 assets/**/*,
 css/common-ui.css,
-css/common-ui.css.gz,

Review Comment:
   The only minor thing I would suggest is updating the comment above to 
something like...
   
   > Configuration to ensure that we only bundle the aggregated version of 
static content.






[GitHub] [nifi] simonbence commented on a diff in pull request #7017: NIFI-11213 Showing version change in older (pre 1.18.0) contained version flows properly

2023-03-08 Thread via GitHub


simonbence commented on code in PR #7017:
URL: https://github.com/apache/nifi/pull/7017#discussion_r1129758652


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/util/FlowDifferenceFilters.java:
##
@@ -192,22 +193,13 @@ public static boolean 
isIgnorableVersionedFlowCoordinateChange(final FlowDiffere
 final VersionedFlowCoordinates coordinatesB = 
versionedProcessGroupB.getVersionedFlowCoordinates();
 
 if (coordinatesA != null && coordinatesB != null) {
-String registryUrlA = coordinatesA.getRegistryUrl();
-String registryUrlB = coordinatesB.getRegistryUrl();
-
-if (registryUrlA != null && registryUrlB != null && 
!registryUrlA.equals(registryUrlB)) {
-if (registryUrlA.endsWith("/")) {
-registryUrlA = registryUrlA.substring(0, 
registryUrlA.length() - 1);
-}
-
-if (registryUrlB.endsWith("/")) {
-registryUrlB = registryUrlB.substring(0, 
registryUrlB.length() - 1);
-}
-
-if (registryUrlA.equals(registryUrlB)) {
-return true;
-}
+if (coordinatesA.getStorageLocation() != null || 
coordinatesB.getStorageLocation() != null) {
+return false;
 }
+
+return  
!FlowDifferenceUtil.areRegistryStrictlyEqual(coordinatesA, coordinatesB)
+&& 
FlowDifferenceUtil.areRegistryUrlsEqual(coordinatesA, coordinatesB)

Review Comment:
   My first approach and assumption was the same, but with that change the test 
`TestFlowDifferenceFilter#testFilterIgnorableVersionCoordinateDifferenceWithNonIgnorableDifference` 
broke, and after a careful check it turned out this distinction in the original 
code is deliberate. Our considerations might have changed since, but removing it 
would bring in a potential regression.
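
For reference, the trailing-slash normalization that the original code applied when comparing registry URLs can be sketched as a small standalone helper. The class and method names here are hypothetical illustrations, not the actual `FlowDifferenceUtil` implementation:

```java
import java.util.Objects;

// Hypothetical sketch of registry URL comparison that ignores a single
// trailing slash, mirroring the logic removed from FlowDifferenceFilters.
public final class RegistryUrlSketch {

    // Strip at most one trailing slash so "http://host/registry/" and
    // "http://host/registry" compare as equal.
    static String stripTrailingSlash(String url) {
        if (url != null && url.endsWith("/")) {
            return url.substring(0, url.length() - 1);
        }
        return url;
    }

    static boolean registryUrlsEqual(String urlA, String urlB) {
        // Objects.equals also handles the case where either URL is null.
        return Objects.equals(stripTrailingSlash(urlA), stripTrailingSlash(urlB));
    }

    public static void main(String[] args) {
        System.out.println(registryUrlsEqual("http://registry:18080/", "http://registry:18080"));
        System.out.println(registryUrlsEqual("http://a", "http://b"));
    }
}
```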






[GitHub] [nifi] markap14 commented on pull request #7020: NIFI-11257 Enable Maven HTTP pooling and additional retries

2023-03-08 Thread via GitHub


markap14 commented on PR #7020:
URL: https://github.com/apache/nifi/pull/7020#issuecomment-1460471184

   Thanks @exceptionfactory this is really helpful. +1 merged to main





[jira] [Updated] (NIFI-11257) Improve Reliability of GitHub Runner Artifact Retrieval

2023-03-08 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-11257:
--
Fix Version/s: 2.0.0
   1.21.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Improve Reliability of GitHub Runner Artifact Retrieval
> ---
>
> Key: NIFI-11257
> URL: https://issues.apache.org/jira/browse/NIFI-11257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Automated builds using GitHub Actions have failed more often with errors 
> related to artifact retrieval.
> {noformat}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor 
> (attach-descriptor) on project nifi-registry:
> Execution attach-descriptor of goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor failed:
> Plugin org.apache.maven.plugins:maven-site-plugin:3.4 or one of its 
> dependencies could not be resolved:
> Failed to collect dependencies at 
> org.apache.maven.plugins:maven-site-plugin:jar:3.4 -> 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Failed to read artifact descriptor for 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Could not transfer artifact 
> org.apache.maven.reporting:maven-reporting-exec:pom:1.2 from/to central 
> (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]{noformat}
> {noformat}
> Error:  Failed to execute goal on project nifi-hadoop-utils: Could not 
> resolve dependencies for project 
> org.apache.nifi:nifi-hadoop-utils:jar:2.0.0-SNAPSHOT:
> Failed to collect dependencies at org.apache.hadoop:hadoop-common:jar:3.3.4:
> Failed to read artifact descriptor for 
> org.apache.hadoop:hadoop-common:jar:3.3.4:
> Could not transfer artifact org.apache.hadoop:hadoop-common:pom:3.3.4 from/to 
> central (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]
> {noformat}
> Adjusting Maven Wagon connection settings and using an alternative Maven 
> repository mirror may help improve reliability.





[GitHub] [nifi] markap14 merged pull request #7020: NIFI-11257 Enable Maven HTTP pooling and additional retries

2023-03-08 Thread via GitHub


markap14 merged PR #7020:
URL: https://github.com/apache/nifi/pull/7020





[jira] [Commented] (NIFI-11257) Improve Reliability of GitHub Runner Artifact Retrieval

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698000#comment-17698000
 ] 

ASF subversion and git services commented on NIFI-11257:


Commit a13c2cf010c4e12ee266ca5fbc45b485813e9c09 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a13c2cf010 ]

NIFI-11257 Enable Maven HTTP pooling and additional retries (#7020)

* NIFI-11257 Enabled Maven HTTP pooling and additional retries

- Enabled HTTP connection pooling for Maven Wagon
- Configured 30 second timeout for HTTP pooled connection lifespan
- Enabled 5 retries for HTTP connections
- Set maximum connection per route to 5 instead of 20
- Enabled retry for sent HTTP requests

> Improve Reliability of GitHub Runner Artifact Retrieval
> ---
>
> Key: NIFI-11257
> URL: https://issues.apache.org/jira/browse/NIFI-11257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Automated builds using GitHub Actions have failed more often with errors 
> related to artifact retrieval.
> {noformat}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor 
> (attach-descriptor) on project nifi-registry:
> Execution attach-descriptor of goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor failed:
> Plugin org.apache.maven.plugins:maven-site-plugin:3.4 or one of its 
> dependencies could not be resolved:
> Failed to collect dependencies at 
> org.apache.maven.plugins:maven-site-plugin:jar:3.4 -> 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Failed to read artifact descriptor for 
> org.apache.maven.reporting:maven-reporting-exec:jar:1.2:
> Could not transfer artifact 
> org.apache.maven.reporting:maven-reporting-exec:pom:1.2 from/to central 
> (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]{noformat}
> {noformat}
> Error:  Failed to execute goal on project nifi-hadoop-utils: Could not 
> resolve dependencies for project 
> org.apache.nifi:nifi-hadoop-utils:jar:2.0.0-SNAPSHOT:
> Failed to collect dependencies at org.apache.hadoop:hadoop-common:jar:3.3.4:
> Failed to read artifact descriptor for 
> org.apache.hadoop:hadoop-common:jar:3.3.4:
> Could not transfer artifact org.apache.hadoop:hadoop-common:pom:3.3.4 from/to 
> central (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]
> {noformat}
> Adjusting Maven Wagon connection settings and using an alternative Maven 
> repository mirror may help improve reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] exceptionfactory commented on a diff in pull request #4901: NIFI-8326: Send records as individual messages in Kafka RecordSinks

2023-03-08 Thread via GitHub


exceptionfactory commented on code in PR #4901:
URL: https://github.com/apache/nifi/pull/4901#discussion_r1129697755


##
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/record/sink/kafka/KafkaRecordSink_2_6.java:
##
@@ -160,6 +165,7 @@ public class KafkaRecordSink_2_6 extends 
AbstractControllerService implements Ka
 private volatile long maxAckWaitMillis;
 private volatile String topic;
 private volatile Producer<byte[], byte[]> producer;
+private final Queue<Future<RecordMetadata>> ackQ = new LinkedList<>();

Review Comment:
   Introducing this queue as a member variable seems like it could cause issues 
when multiple components are using the same RecordSink. Moving this queue to a 
method-local variable should avoid the potential for mixing up messages from 
different callers.
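The hazard described in this comment can be sketched as follows. This is a minimal, hypothetical Java illustration, not the NiFi code itself: `send` stands in for `producer.send`, and `Long` stands in for Kafka's `RecordMetadata`. The point is that a method-local queue of futures cannot be interleaved by concurrent callers sharing the same sink instance, whereas a member field could.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class MethodLocalAckSketch {

    // Stand-in for producer.send(...): returns an already-completed future.
    private Future<Long> send(byte[] payload) {
        return CompletableFuture.completedFuture((long) payload.length);
    }

    // Each call builds its own queue, so two threads calling sendAll()
    // concurrently cannot mix up one another's pending acknowledgements.
    public int sendAll(byte[][] payloads) throws Exception {
        final Queue<Future<Long>> acks = new ArrayDeque<>(); // method-local, not a field
        for (byte[] payload : payloads) {
            acks.add(send(payload));
        }
        int acknowledged = 0;
        while (!acks.isEmpty()) {
            acks.poll().get(); // block until each send is acknowledged
            acknowledged++;
        }
        return acknowledged;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new MethodLocalAckSketch().sendAll(new byte[][]{{1}, {2, 3}}));
    }
}
```

With a member-field queue, caller A could poll futures enqueued by caller B and wait on (or surface failures from) the wrong sends; the method-local variant removes that shared state entirely.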



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] exceptionfactory commented on a diff in pull request #4901: NIFI-8326: Send records as individual messages in Kafka RecordSinks

2023-03-08 Thread via GitHub


exceptionfactory commented on code in PR #4901:
URL: https://github.com/apache/nifi/pull/4901#discussion_r1129699796


##
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/record/sink/kafka/KafkaRecordSink_2_6.java:
##
@@ -226,65 +232,80 @@ public void onEnabled(final ConfigurationContext context) 
throws InitializationE
 public WriteResult sendData(final RecordSet recordSet, final Map<String, String> attributes, final boolean sendZeroResults) throws IOException {
 
 try {
-WriteResult writeResult;
 final RecordSchema writeSchema = 
getWriterFactory().getSchema(null, recordSet.getSchema());
 final ByteArrayOutputStream baos = new ByteArrayOutputStream();
 final ByteCountingOutputStream out = new 
ByteCountingOutputStream(baos);
 int recordCount = 0;
 try (final RecordSetWriter writer = 
getWriterFactory().createWriter(getLogger(), writeSchema, out, attributes)) {
-writer.beginRecordSet();
 Record record;
 while ((record = recordSet.next()) != null) {
+baos.reset();
+out.reset();
 writer.write(record);
+writer.flush();
 recordCount++;
 if (out.getBytesWritten() > maxMessageSize) {
-throw new TokenTooLargeException("The query's result 
set size exceeds the maximum allowed message size of " + maxMessageSize + " 
bytes.");
+throw new TokenTooLargeException("A record's size 
exceeds the maximum allowed message size of " + maxMessageSize + " bytes.");
 }
+sendMessage(topic, baos.toByteArray());
 }
-writeResult = writer.finishRecordSet();
 if (out.getBytesWritten() > maxMessageSize) {
-throw new TokenTooLargeException("The query's result set 
size exceeds the maximum allowed message size of " + maxMessageSize + " 
bytes.");
+throw new TokenTooLargeException("A record's size exceeds 
the maximum allowed message size of " + maxMessageSize + " bytes.");
 }
-recordCount = writeResult.getRecordCount();
 
 attributes.put(CoreAttributes.MIME_TYPE.key(), 
writer.getMimeType());
 attributes.put("record.count", Integer.toString(recordCount));
-attributes.putAll(writeResult.getAttributes());
 }
 
-if (recordCount > 0 || sendZeroResults) {
-final ProducerRecord<byte[], byte[]> record = new 
ProducerRecord<>(topic, null, null, baos.toByteArray());
-try {
-producer.send(record, (metadata, exception) -> {
-if (exception != null) {
-throw new KafkaSendException(exception);
-}
-}).get(maxAckWaitMillis, TimeUnit.MILLISECONDS);
-} catch (KafkaSendException kse) {
-Throwable t = kse.getCause();
-if (t instanceof IOException) {
-throw (IOException) t;
-} else {
-throw new IOException(t);
-}
-} catch (final InterruptedException e) {
-getLogger().warn("Interrupted while waiting for an 
acknowledgement from Kafka");
-Thread.currentThread().interrupt();
-} catch (final TimeoutException e) {
-getLogger().warn("Timed out while waiting for an 
acknowledgement from Kafka");
+if (recordCount == 0) {
+if (sendZeroResults) {
+sendMessage(topic, new byte[0]);
+} else {
+return WriteResult.EMPTY;
 }
-} else {
-writeResult = WriteResult.EMPTY;
 }
 
-return writeResult;
+acknowledgeTransmission();
+
+return WriteResult.of(recordCount, attributes);
 } catch (IOException ioe) {
 throw ioe;
 } catch (Exception e) {
 throw new IOException("Failed to write metrics using record 
writer: " + e.getMessage(), e);
 }
 }
 
+public void sendMessage(String topic, byte[] payload) throws IOException, 
ExecutionException {
+final ProducerRecord<byte[], byte[]> record = new 
ProducerRecord<>(topic, null, null, payload);
+// Add the Future to the queue
+ackQ.add(producer.send(record, (metadata, exception) -> {
+if (exception != null) {
+throw new KafkaSendException(exception);
+}
+}));
+}
+
+public void acknowledgeTransmission() throws IOException, 
ExecutionException {

Review Comment:
   It looks like this method should be `private`. As mentioned ab

[GitHub] [nifi] exceptionfactory commented on a diff in pull request #4901: NIFI-8326: Send records as individual messages in Kafka RecordSinks

2023-03-08 Thread via GitHub


exceptionfactory commented on code in PR #4901:
URL: https://github.com/apache/nifi/pull/4901#discussion_r1129694093


##
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/record/sink/kafka/KafkaRecordSink_2_6.java:
##
@@ -226,65 +232,80 @@ public void onEnabled(final ConfigurationContext context) 
throws InitializationE
 public WriteResult sendData(final RecordSet recordSet, final Map<String, String> attributes, final boolean sendZeroResults) throws IOException {
 
 try {
-WriteResult writeResult;
 final RecordSchema writeSchema = 
getWriterFactory().getSchema(null, recordSet.getSchema());
 final ByteArrayOutputStream baos = new ByteArrayOutputStream();
 final ByteCountingOutputStream out = new 
ByteCountingOutputStream(baos);
 int recordCount = 0;
 try (final RecordSetWriter writer = 
getWriterFactory().createWriter(getLogger(), writeSchema, out, attributes)) {
-writer.beginRecordSet();
 Record record;
 while ((record = recordSet.next()) != null) {
+baos.reset();
+out.reset();
 writer.write(record);
+writer.flush();
 recordCount++;
 if (out.getBytesWritten() > maxMessageSize) {
-throw new TokenTooLargeException("The query's result 
set size exceeds the maximum allowed message size of " + maxMessageSize + " 
bytes.");
+throw new TokenTooLargeException("A record's size 
exceeds the maximum allowed message size of " + maxMessageSize + " bytes.");
 }
+sendMessage(topic, baos.toByteArray());
 }
-writeResult = writer.finishRecordSet();
 if (out.getBytesWritten() > maxMessageSize) {
-throw new TokenTooLargeException("The query's result set 
size exceeds the maximum allowed message size of " + maxMessageSize + " 
bytes.");
+throw new TokenTooLargeException("A record's size exceeds 
the maximum allowed message size of " + maxMessageSize + " bytes.");
 }
-recordCount = writeResult.getRecordCount();
 
 attributes.put(CoreAttributes.MIME_TYPE.key(), 
writer.getMimeType());
 attributes.put("record.count", Integer.toString(recordCount));
-attributes.putAll(writeResult.getAttributes());
 }
 
-if (recordCount > 0 || sendZeroResults) {
-final ProducerRecord<byte[], byte[]> record = new 
ProducerRecord<>(topic, null, null, baos.toByteArray());
-try {
-producer.send(record, (metadata, exception) -> {
-if (exception != null) {
-throw new KafkaSendException(exception);
-}
-}).get(maxAckWaitMillis, TimeUnit.MILLISECONDS);
-} catch (KafkaSendException kse) {
-Throwable t = kse.getCause();
-if (t instanceof IOException) {
-throw (IOException) t;
-} else {
-throw new IOException(t);
-}
-} catch (final InterruptedException e) {
-getLogger().warn("Interrupted while waiting for an 
acknowledgement from Kafka");
-Thread.currentThread().interrupt();
-} catch (final TimeoutException e) {
-getLogger().warn("Timed out while waiting for an 
acknowledgement from Kafka");
+if (recordCount == 0) {
+if (sendZeroResults) {
+sendMessage(topic, new byte[0]);
+} else {
+return WriteResult.EMPTY;
 }
-} else {
-writeResult = WriteResult.EMPTY;
 }
 
-return writeResult;
+acknowledgeTransmission();
+
+return WriteResult.of(recordCount, attributes);
 } catch (IOException ioe) {
 throw ioe;
 } catch (Exception e) {
 throw new IOException("Failed to write metrics using record 
writer: " + e.getMessage(), e);
 }
 }
 
+public void sendMessage(String topic, byte[] payload) throws IOException, 
ExecutionException {

Review Comment:
   It looks like this method should be `private` or at least `protected` since 
it is not part of the RecordSink interface. Is there a reason for making it 
public otherwise?




[GitHub] [nifi] tpalfy commented on pull request #6987: NIFI-11137 Add record support to Consume/PublishJMS

2023-03-08 Thread via GitHub


tpalfy commented on PR #6987:
URL: https://github.com/apache/nifi/pull/6987#issuecomment-1460369270

   Reviewing





[jira] [Resolved] (NIFI-11166) IdentifyMimeType processor identifies flowfile-v3 as video/x-ms-wmv when containing wmv file

2023-03-08 Thread Nissim Shiman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nissim Shiman resolved NIFI-11166.
--
Resolution: Duplicate

> IdentifyMimeType processor identifies flowfile-v3 as video/x-ms-wmv when 
> containing wmv file
> 
>
> Key: NIFI-11166
> URL: https://issues.apache.org/jira/browse/NIFI-11166
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Nissim Shiman
>Assignee: Nissim Shiman
>Priority: Major
>
> To recreate:
> GetFile -> MergeContent -> IdentifyMimeType -> funnel
> where GetFile is getting a .wmv video file.
> MergeContent should have
> Merge Format as "FlowFile Stream, v3"
> Output flowfile will have mime.type of
> video/x-ms-wmv
> as opposed to
> application/flowfile-v3
>  





[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1492: MINIFICPP-2021 Auto-generate extra doc sections in PROCESSORS.md

2023-03-08 Thread via GitHub


fgerlits commented on code in PR #1492:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1492#discussion_r1129605930


##
extensions/http-curl/processors/InvokeHTTP.cpp:
##
@@ -149,6 +149,13 @@ const core::Relationship InvokeHTTP::RelFailure("failure",
 "The original FlowFile will be routed on any type of connection failure, "
 "timeout or general exception. It will have new attributes detailing the 
request.");
 
+
+const core::OutputAttribute InvokeHTTP::StatusCode{"invokehttp.status.code", { 
Success, RelResponse, RelRetry, RelNoRetry }, "The status code that is 
returned"};

Review Comment:
   good idea, fixed in b520a96bc29abfb4019ae92cd167ce806b06a4c4






[jira] [Updated] (NIFI-11217) NiFi NAR Maven Plugin fails to build external NARs with transitive, provided dependencies.

2023-03-08 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-11217:
---
Status: Patch Available  (was: In Progress)

> NiFi NAR Maven Plugin fails to build external NARs with transitive, provided 
> dependencies.
> --
>
> Key: NIFI-11217
> URL: https://issues.apache.org/jira/browse/NIFI-11217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: nifi-nar-maven-plugin-1.4.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
> Fix For: nifi-nar-maven-plugin-1.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It appears that the NAR Maven plugin was benefiting from behavior in older 
> versions of the maven-dependency-tree library that would resolve artifacts in 
> addition to POMs when crawling dependencies. This guaranteed that they 
> would be in the local Maven repository/cache during the Extension 
> Documentation generation phase of NAR building.
> Version 1.4.0 of the plugin upgraded maven-dependency-tree to 3.2.0, which 
> included this behavior change to only download poms:
> https://github.com/apache/maven-dependency-tree/commit/b330fa93b70e35c70a8afa75f0404cf47d5935d6
>  
> This broke building NARs that are external from the Apache NiFi 
> repository/project that inherit from (or depend on) NiFi NARs that have 
> transitive dependencies marked as provided, because the Extension 
> Documentation generation needs the full artifact resolved in order to create 
> a working ClassLoader. Not having artifacts resolved results in error 
> messages such as:
> {noformat}
> [INFO] --- nifi-nar-maven-plugin:1.4.0:nar (default-nar) @ 
> nifi-example-processors-nar ---
> [INFO] Copying nifi-example-processors-1.0.jar to 
> /Users/kdoran/dev/code/nifi-dependency-example/nifi-inherits-processor-bundle/nifi-example-processors-nar/target/classes/META-INF/bundled-dependencies/nifi-example-processors-1.0.jar
> [INFO] Generating documentation for NiFi extensions in the NAR...
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-standard-services-api-nar:nar:1.20.0:compile
> [INFO] Found NAR dependency of 
> org.apache.nifi:nifi-jetty-bundle:nar:1.20.0:compile
> [INFO] Found a dependency on version 1.20.0 of NiFi API
> [ERROR] Could not generate extensions' documentation
> org.apache.maven.plugin.MojoExecutionException: Failed to create Extension 
> Documentation
> at org.apache.nifi.NarMojo.generateDocumentation (NarMojo.java:534)
> at org.apache.nifi.NarMojo.execute (NarMojo.java:505)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
> (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:148)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:117)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:81)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
>  (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
> (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
> at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
> at org.apache.maven.cli.MavenCli.execute (MavenCli.java:972)
> at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:293)
> at org.apache.maven.cli.MavenCli.main (MavenCli.java:196)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
> (NativeMethodAccessorImpl.java:62)
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
> (DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke (Method.java:566)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
> (Launcher.java:282)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
> (Launcher.java:225)
> at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
> (Launcher.java:406)
> at org.codehaus.plexus.classworlds.launcher.Launcher.main 
> (Launcher.java:347)
> Caused by: org.apache.maven.plugin.MojoExecutionException: Could not resolve 
> local dependency org.apache.nifi:nifi-framework-api:jar:1.20.0
> at 
> org.apache.nifi.extension.

[jira] [Updated] (NIFI-11218) Upgrade dependencies in NAR Maven Plugin

2023-03-08 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-11218:
---
Status: Patch Available  (was: Open)

> Upgrade dependencies in NAR Maven Plugin
> 
>
> Key: NIFI-11218
> URL: https://issues.apache.org/jira/browse/NIFI-11218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: nifi-nar-maven-plugin-1.4.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
> Fix For: nifi-nar-maven-plugin-1.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In investigating NIFI-11217, we observed that a number of core dependencies 
> for the NiFi NAR Maven Plugin are far outdated, some a full major version 
> behind. 
> This task is to bring core maven dependencies for the NiFi NAR Maven Plugin 
> up to latest versions, which will require some code changes. Specifically, we 
> depend heavily on maven-dependency 2.x and will need code changes to update 
> to 3.x.





[GitHub] [nifi-maven] kevdoran opened a new pull request, #30: NIFI-11218 Upgrade core dependencies

2023-03-08 Thread via GitHub


kevdoran opened a new pull request, #30:
URL: https://github.com/apache/nifi-maven/pull/30

   - Includes code changes to support migrating maven-dependency from 2.x to 3.x
   - Sets `1.5.0` as the next target version for the NiFi NAR Maven Plugin, as 
this is a major change in dependencies





[GitHub] [nifi] slambrose commented on pull request #6993: NIFI-11231 Stateless NiFi sensitive parameter context support

2023-03-08 Thread via GitHub


slambrose commented on PR #6993:
URL: https://github.com/apache/nifi/pull/6993#issuecomment-1460261250

   > > Use case: We are attempting to use stateless NiFi in k8s to process data 
while providing a horizontally scaled approach. Some of our users' flows are 
using sensitive parameters, so we have to be able to support this in our 
implementation. @Dye357 is the project lead for the effort, and can give a more 
robust explanation if needed. I tested this with an SFTP processor using the 
secure environment variable parameter context, which worked successfully.
   > 
   > Thanks for addressing the code comments and providing some background on 
the use case @slambrose, that is helpful.
   > 
   > Have you or @Dye357 reviewed the [Parameter Value 
Provider](https://github.com/apache/nifi/tree/main/nifi-stateless/nifi-stateless-assembly#passing-parameters)
 implementations for NiFi Stateless? The Parameter Value Providers support the 
use case of supplying sensitive parameter values, which should make the 
proposed changes unnecessary. If there is some feature gap in the Parameter 
Value Providers, we should evaluate that for improvement, as opposed to 
introducing Parameter Providers, intended for traditional NiFi deployments.
   
   Hmm.. I'm not sure a Parameter Value Provider would work for us, since those 
have to be passed in at runtime and known beforehand. We're going to be 
supporting a variety of flows stored in Registry. The goal is to run stateless 
NiFi with any given Registry url, bucket id, flow id, and flow version without 
any knowledge of what properties/params users have configured in their flows. 
So those flows that utilize sensitive parameter contexts need to be able to 
work under any running stateless NiFi pod/container. We wouldn't be changing the 
run command or properties files for each flow that's processed with stateless 
NiFi. Let me chat some more with @Dye357 to see if there's anything I'm missing.





[GitHub] [nifi] exceptionfactory commented on pull request #6993: NIFI-11231 Stateless NiFi sensitive parameter context support

2023-03-08 Thread via GitHub


exceptionfactory commented on PR #6993:
URL: https://github.com/apache/nifi/pull/6993#issuecomment-1460246061

   > Use case: We are attempting to use stateless NiFi in k8s to process data 
while providing a horizontally scaled approach. Some of our users' flows are 
using sensitive parameters, so we have to be able to support this in our 
implementation. @Dye357 is the project lead for the effort, and can give a more 
robust explanation if needed. I tested this with an SFTP processor using the 
secure environment variable parameter context, which worked successfully.
   
   Thanks for addressing the code comments and providing some background on the 
use case @slambrose, that is helpful.
   
   Have you or @Dye357 reviewed the [Parameter Value 
Provider](https://github.com/apache/nifi/tree/main/nifi-stateless/nifi-stateless-assembly#passing-parameters)
 implementations for NiFi Stateless? The Parameter Value Providers support the 
use case of supplying sensitive parameter values, which should make the 
proposed changes unnecessary. If there is some feature gap in the Parameter 
Value Providers, we should evaluate that for improvement, as opposed to 
introducing Parameter Providers, intended for traditional NiFi deployments.
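   For reference, the linked README describes configuring a Parameter Value 
Provider in the stateless engine's properties file. A sketch of an 
environment-variable-backed provider might look like the following; the key 
names and provider class are assumptions for illustration, so consult the 
linked nifi-stateless-assembly README for the exact syntax:

```properties
# Hypothetical stateless NiFi engine configuration (names unverified):
# resolve sensitive parameter values from environment variables at launch.
nifi.stateless.parameter.provider.Environment.name=Environment Provider
nifi.stateless.parameter.provider.Environment.type=org.apache.nifi.stateless.parameter.EnvironmentVariableParameterValueProvider
```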





[jira] [Commented] (NIFI-11144) Fix failing tests for ConsumeJMS/PublishJMS

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697922#comment-17697922
 ] 

ASF subversion and git services commented on NIFI-11144:


Commit 9ee34eeb0179bb368667401b34f3f74505e3c545 in nifi's branch 
refs/heads/support/nifi-1.x from Nandor Soma Abonyi
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9ee34eeb01 ]

NIFI-11144 Fix failing tests for ConsumeJMS/PublishJMS

This closes #6930.

Signed-off-by: Tamas Palfy 


> Fix failing tests for ConsumeJMS/PublishJMS
> ---
>
> Key: NIFI-11144
> URL: https://issues.apache.org/jira/browse/NIFI-11144
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Nandor Soma Abonyi
>Assignee: Nandor Soma Abonyi
>Priority: Major
>  Labels: JMS
> Fix For: 2.0.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1492: MINIFICPP-2021 Auto-generate extra doc sections in PROCESSORS.md

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1492:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1492#discussion_r1129475052


##
extensions/http-curl/processors/InvokeHTTP.cpp:
##
@@ -149,6 +149,13 @@ const core::Relationship InvokeHTTP::RelFailure("failure",
 "The original FlowFile will be routed on any type of connection failure, "
 "timeout or general exception. It will have new attributes detailing the 
request.");
 
+
+const core::OutputAttribute InvokeHTTP::StatusCode{"invokehttp.status.code", { 
Success, RelResponse, RelRetry, RelNoRetry }, "The status code that is 
returned"};

Review Comment:
   we could also work in the other direction in a subsequent PR and use these 
instead of constant strings






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1492: MINIFICPP-2021 Auto-generate extra doc sections in PROCESSORS.md

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1492:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1492#discussion_r1129461061


##
extensions/http-curl/processors/InvokeHTTP.cpp:
##
@@ -149,6 +149,13 @@ const core::Relationship InvokeHTTP::RelFailure("failure",
 "The original FlowFile will be routed on any type of connection failure, "
 "timeout or general exception. It will have new attributes detailing the 
request.");
 
+
+const core::OutputAttribute InvokeHTTP::StatusCode{"invokehttp.status.code", { 
Success, RelResponse, RelRetry, RelNoRetry }, "The status code that is 
returned"};

Review Comment:
   can we use the `STATUS_CODE` and other constants from `InvokeHTTP.h`?






[jira] [Updated] (NIFI-11234) NPE in QuerySalesforceObject

2023-03-08 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-11234:
---
Fix Version/s: 1.21.0

> NPE in QuerySalesforceObject
> 
>
> Key: NIFI-11234
> URL: https://issues.apache.org/jira/browse/NIFI-11234
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In case of Property Based queries if the RecordWriter service is not set an 
> NPE is thrown. The RecordWriter should be made mandatory.





[jira] [Updated] (NIFI-11147) Allow QuerySalesforceObject to query all existing fields

2023-03-08 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-11147:
---
Fix Version/s: 1.21.0

> Allow QuerySalesforceObject to query all existing fields
> 
>
> Key: NIFI-11147
> URL: https://issues.apache.org/jira/browse/NIFI-11147
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the Field Names property of QuerySalesforceObject is required and 
> must contain the names of the fields the user wants to return. However in a 
> schema drift use case, the user may want to add a field to a Salesforce 
> object and have the NiFi flow continue without needing alteration.
> This Jira is to make it possible for QuerySalesforceObject to return all 
> fields from an object. A suggestion is to make Field Names optional and if it 
> is not set, all fields are queried. The documentation should be updated to 
> match the behavior.





[jira] [Updated] (NIFI-10966) Add the feature to QuerySalesforceObject to accept custom queries

2023-03-08 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-10966:
---
Fix Version/s: 1.21.0

> Add the feature to QuerySalesforceObject to accept custom queries
> -
>
> Key: NIFI-10966
> URL: https://issues.apache.org/jira/browse/NIFI-10966
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0, 1.21.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Extend the QuerySalesforceObject processor with a new property that accepts 
> custom SOQL queries.





[jira] [Commented] (NIFI-10966) Add the feature to QuerySalesforceObject to accept custom queries

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697912#comment-17697912
 ] 

ASF subversion and git services commented on NIFI-10966:


Commit 61a3ced5cc9e2f750be69bbdfc93689d06ca80f7 in nifi's branch 
refs/heads/support/nifi-1.x from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=61a3ced5cc ]

NIFI-10966: Add option to QuerySalesforceObject to run custom query

This closes #6794.

Signed-off-by: Tamas Palfy 


> Add the feature to QuerySalesforceObject to accept custom queries
> -
>
> Key: NIFI-10966
> URL: https://issues.apache.org/jira/browse/NIFI-10966
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Extend the QuerySalesforceObject processor with a new property that accepts 
> custom SOQL queries.





[jira] [Commented] (NIFI-11147) Allow QuerySalesforceObject to query all existing fields

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697914#comment-17697914
 ] 

ASF subversion and git services commented on NIFI-11147:


Commit 03625bb679bc073e3913137d2417b96a4e9d75e7 in nifi's branch 
refs/heads/support/nifi-1.x from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=03625bb679 ]

NIFI-11147 Backport - Fix Java8 compatibility issues.


> Allow QuerySalesforceObject to query all existing fields
> 
>
> Key: NIFI-11147
> URL: https://issues.apache.org/jira/browse/NIFI-11147
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the Field Names property of QuerySalesforceObject is required and 
> must contain the names of the fields the user wants to return. However in a 
> schema drift use case, the user may want to add a field to a Salesforce 
> object and have the NiFi flow continue without needing alteration.
> This Jira is to make it possible for QuerySalesforceObject to return all 
> fields from an object. A suggestion is to make Field Names optional and if it 
> is not set, all fields are queried. The documentation should be updated to 
> match the behavior.





[jira] [Commented] (NIFI-11234) NPE in QuerySalesforceObject

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697913#comment-17697913
 ] 

ASF subversion and git services commented on NIFI-11234:


Commit 203de77bb980e3168d6b0485f5af6575b25c2381 in nifi's branch 
refs/heads/support/nifi-1.x from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=203de77bb9 ]

NIFI-11234: Fix RecordWriter NPE in QuerySalesforceObject

This closes #6997.

Signed-off-by: Tamas Palfy 


> NPE in QuerySalesforceObject
> 
>
> Key: NIFI-11234
> URL: https://issues.apache.org/jira/browse/NIFI-11234
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In case of Property Based queries if the RecordWriter service is not set an 
> NPE is thrown. The RecordWriter should be made mandatory.





[jira] [Commented] (NIFI-11147) Allow QuerySalesforceObject to query all existing fields

2023-03-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697911#comment-17697911
 ] 

ASF subversion and git services commented on NIFI-11147:


Commit e0c1f0a89b9595b7a4d40b51737e5f9da876c07f in nifi's branch 
refs/heads/support/nifi-1.x from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e0c1f0a89b ]

NIFI-11147: Query all fields in QuerySalesforceObject

Fix review comments


> Allow QuerySalesforceObject to query all existing fields
> 
>
> Key: NIFI-11147
> URL: https://issues.apache.org/jira/browse/NIFI-11147
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the Field Names property of QuerySalesforceObject is required and 
> must contain the names of the fields the user wants to return. However in a 
> schema drift use case, the user may want to add a field to a Salesforce 
> object and have the NiFi flow continue without needing alteration.
> This Jira is to make it possible for QuerySalesforceObject to return all 
> fields from an object. A suggestion is to make Field Names optional and if it 
> is not set, all fields are queried. The documentation should be updated to 
> match the behavior.





[GitHub] [nifi] slambrose commented on pull request #6993: NIFI-11231 Stateless NiFi sensitive parameter context support

2023-03-08 Thread via GitHub


slambrose commented on PR #6993:
URL: https://github.com/apache/nifi/pull/6993#issuecomment-1460134990

   Use case:
   We are attempting to use stateless NiFi in k8s to process data while 
providing a horizontally scaled approach. Some of our users' flows are using 
sensitive parameters, so we have to be able to support this in our 
implementation. @Dye357 is the project lead for the effort, and can give a more 
robust explanation if needed. I tested this with an SFTP processor using the 
secure environment variable parameter context, which worked successfully. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1490: MINIFICPP-2022 Add valid repository size metrics for all repositories

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1490:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1490#discussion_r1129354900


##
libminifi/src/core/repository/VolatileRepository.cpp:
##
@@ -71,6 +71,9 @@ bool VolatileRepository::Put(const std::string& key, const 
uint8_t *buf, size_t
 }
   } while (!updated);
   repo_data_.current_size += size;
+  if (repo_data_.current_entry_count < repo_data_.max_count) {
+++repo_data_.current_entry_count;

Review Comment:
   why conditionally increment the `current_entry_count`? size seems to be 
always incremented






[GitHub] [nifi-minifi-cpp] fgerlits opened a new pull request, #1527: MINIFICPP-2030 Expose InFlightMessageCounter in PublishMQTT as processor metric

2023-03-08 Thread via GitHub


fgerlits opened a new pull request, #1527:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1527

   https://issues.apache.org/jira/browse/MINIFICPP-2030
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1490: MINIFICPP-2022 Add valid repository size metrics for all repositories

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1490:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1490#discussion_r1129327118


##
extensions/rocksdb-repos/database/OpenRocksDb.cpp:
##
@@ -118,8 +114,15 @@ rocksdb::DB* OpenRocksDb::get() {
   return impl_.get();
 }
 
-}  // namespace internal
-}  // namespace minifi
-}  // namespace nifi
-}  // namespace apache
-}  // namespace org
+std::optional OpenRocksDb::getApproximateSizes() const {
+  const rocksdb::SizeApproximationOptions options{ .include_memtables = true };
+  const rocksdb::Range range("", "~");

Review Comment:
   could keys with leading `~` exist? content paths are prefixed with the 
content dir's path, I could not find if we turn that into absolute path or not, 
but if not, could the user set it relative to their home (e.g. 
`~/content_repository`)? 






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1490: MINIFICPP-2022 Add valid repository size metrics for all repositories

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1490:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1490#discussion_r1129300068


##
extensions/rocksdb-repos/ProvenanceRepository.cpp:
##
@@ -145,9 +112,12 @@ void ProvenanceRepository::destroy() {
 }
 
 uint64_t ProvenanceRepository::getKeyCount() const {
+  auto opendb = db_->open();
+  if (!opendb) {
+return 0;
+  }
   std::string key_count;
-  db_->GetProperty("rocksdb.estimate-num-keys", &key_count);
-
+  opendb->GetProperty("rocksdb.estimate-num-keys", &key_count);

Review Comment:
   is this different from `getRepositoryEntryCount`?






[jira] [Created] (NIFI-11260) Add SSL Context Service in AWSCredentialsProviderControllerService

2023-03-08 Thread Peter Turcsanyi (Jira)
Peter Turcsanyi created NIFI-11260:
--

 Summary: Add SSL Context Service in 
AWSCredentialsProviderControllerService 
 Key: NIFI-11260
 URL: https://issues.apache.org/jira/browse/NIFI-11260
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Peter Turcsanyi
Assignee: Peter Turcsanyi


AWSCredentialsProviderControllerService supports custom endpoints for the Security Token Service (STS) used by the Assume Role credential strategy. The custom endpoint may use HTTPS with a corporate certificate that is not signed by a public CA present in the default truststore.
Add an SSL Context Service property to support custom endpoints over HTTPS.





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1511: MINIFICPP-1716 Recover core dumps from CI

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1511:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1511#discussion_r1129254375


##
.github/workflows/ci.yml:
##
@@ -196,13 +247,36 @@ jobs:
   - id: build
 run: |
   if [ -d ~/.ccache ]; then mv ~/.ccache .; fi
-  mkdir build && cd build && cmake -DUSE_SHARED_LIBS=ON -DCI_BUILD=ON 
-DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON -DENABLE_AWS=ON 
-DENABLE_AZURE=ON \
+  mkdir build && cd build && cmake -DUSE_SHARED_LIBS=ON -DCI_BUILD=ON 
-DCMAKE_BUILD_TYPE=Release -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON 
-DENABLE_AWS=ON -DENABLE_AZURE=ON \

Review Comment:
   should we add the comment about running out of size on centos here as well? 







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1508: MINIFICPP-2040 - Avoid deserializing flow files just to be deleted

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1508:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1508#discussion_r1129244175


##
extensions/rocksdb-repos/FlowFileRepository.cpp:
##
@@ -43,50 +43,67 @@ void FlowFileRepository::flush() {
   auto batch = opendb->createWriteBatch();
   rocksdb::ReadOptions options;
 
-  std::vector> purgeList;
-
-  std::vector keys;
-  std::list keystrings;
-  std::vector values;
+  std::list<ExpiredFlowFileInfo> flow_files;
 
   while (keys_to_delete.size_approx() > 0) {
-std::string key;
-if (keys_to_delete.try_dequeue(key)) {
-  keystrings.push_back(std::move(key));  // rocksdb::Slice doesn't copy 
the string, only grabs ptrs. Hacky, but have to ensure the required lifetime of 
the strings.
-  keys.push_back(keystrings.back());
+ExpiredFlowFileInfo info;
+if (keys_to_delete.try_dequeue(info)) {
+  flow_files.push_back(std::move(info));
 }
   }
-  auto multistatus = opendb->MultiGet(options, keys, &values);
 
-  for (size_t i = 0; i < keys.size() && i < values.size() && i < 
multistatus.size(); ++i) {
-if (!multistatus[i].ok()) {
-  logger_->log_error("Failed to read key from rocksdb: %s! DB is most 
probably in an inconsistent state!", keys[i].data());
-  keystrings.remove(keys[i].data());
-  continue;
+  {
+// deserialize flow files with missing content claim
+std::vector keys;
+std::vector<std::list<ExpiredFlowFileInfo>::iterator> key_positions;
+for (auto it = flow_files.begin(); it != flow_files.end(); ++it) {
+  if (!it->content) {
+keys.push_back(it->key);
+key_positions.push_back(it);
+  }
 }
+if (!keys.empty()) {
+  std::vector<std::string> values;
+  auto multistatus = opendb->MultiGet(options, keys, &values);
 
-utils::Identifier containerId;
-auto eventRead = 
FlowFileRecord::DeSerialize(gsl::make_span(values[i]).as_span(), content_repo_, containerId);
-if (eventRead) {
-  purgeList.push_back(eventRead);
+  for (size_t i = 0; i < keys.size() && i < values.size() && i < 
multistatus.size(); ++i) {
+if (!multistatus[i].ok()) {
+  logger_->log_error("Failed to read key from rocksdb: %s! DB is most 
probably in an inconsistent state!", keys[i].data());
+  flow_files.erase(key_positions.at(i));
+  continue;
+}
+
+utils::Identifier containerId;
+auto flow_file = 
FlowFileRecord::DeSerialize(gsl::make_span(values[i]).as_span(), content_repo_, containerId);
+if (flow_file) {
+  gsl_Expects(flow_file->getUUIDStr() == key_positions.at(i)->key);
+  key_positions.at(i)->content = flow_file->getResourceClaim();
+} else {
+  logger_->log_error("Could not deserialize flow file %s", 
key_positions.at(i)->key);
+}
+  }
 }
-logger_->log_debug("Issuing batch delete, including %s, Content path %s", 
eventRead->getUUIDStr(), eventRead->getContentFullPath());
-batch.Delete(keys[i]);
+  }
+
+  for (auto& ff : flow_files) {
+batch.Delete(ff.key);
+logger_->log_debug("Issuing batch delete, including %s, Content path %s", 
ff.key, ff.content ? ff.content->getContentFullPath() : "null");
   }
 
   auto operation = [&batch, &opendb]() { return 
opendb->Write(rocksdb::WriteOptions(), &batch); };
 
   if (!ExecuteWithRetry(operation)) {
-for (const auto& key : keystrings) {
-  keys_to_delete.enqueue(key);  // Push back the values that we could get 
but couldn't delete
+for (const auto& ff : flow_files) {
+  keys_to_delete.enqueue(ff);  // Push back the values that we could get 
but couldn't delete

Review Comment:
   done



##
extensions/rocksdb-repos/FlowFileRepository.cpp:
##
@@ -43,50 +43,67 @@ void FlowFileRepository::flush() {
   auto batch = opendb->createWriteBatch();
   rocksdb::ReadOptions options;
 
-  std::vector> purgeList;
-
-  std::vector keys;
-  std::list keystrings;
-  std::vector values;
+  std::list<ExpiredFlowFileInfo> flow_files;
 
   while (keys_to_delete.size_approx() > 0) {

Review Comment:
   done



##
extensions/rocksdb-repos/FlowFileRepository.cpp:
##
@@ -43,50 +43,67 @@ void FlowFileRepository::flush() {
   auto batch = opendb->createWriteBatch();
   rocksdb::ReadOptions options;
 
-  std::vector> purgeList;
-
-  std::vector keys;
-  std::list keystrings;
-  std::vector values;
+  std::list<ExpiredFlowFileInfo> flow_files;
 
   while (keys_to_delete.size_approx() > 0) {
-std::string key;
-if (keys_to_delete.try_dequeue(key)) {
-  keystrings.push_back(std::move(key));  // rocksdb::Slice doesn't copy 
the string, only grabs ptrs. Hacky, but have to ensure the required lifetime of 
the strings.
-  keys.push_back(keystrings.back());
+ExpiredFlowFileInfo info;
+if (keys_to_delete.try_dequeue(info)) {
+  flow_files.push_back(std::move(info));
 }
   }
-  auto multistatus = opendb->MultiGet(options, keys, &values);
 
-  for (size_t i = 0; i < keys.size() && i < values.size() && i < 
multistatus.size(); ++i) {
-if (!multistatus[

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1508: MINIFICPP-2040 - Avoid deserializing flow files just to be deleted

2023-03-08 Thread via GitHub


adamdebreceni commented on code in PR #1508:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1508#discussion_r1129244376


##
extensions/rocksdb-repos/FlowFileRepository.cpp:
##
@@ -43,50 +43,67 @@ void FlowFileRepository::flush() {
   auto batch = opendb->createWriteBatch();
   rocksdb::ReadOptions options;
 
-  std::vector> purgeList;
-
-  std::vector keys;
-  std::list keystrings;
-  std::vector values;
+  std::list<ExpiredFlowFileInfo> flow_files;
 
   while (keys_to_delete.size_approx() > 0) {
-std::string key;
-if (keys_to_delete.try_dequeue(key)) {
-  keystrings.push_back(std::move(key));  // rocksdb::Slice doesn't copy 
the string, only grabs ptrs. Hacky, but have to ensure the required lifetime of 
the strings.
-  keys.push_back(keystrings.back());
+ExpiredFlowFileInfo info;
+if (keys_to_delete.try_dequeue(info)) {
+  flow_files.push_back(std::move(info));
 }
   }
-  auto multistatus = opendb->MultiGet(options, keys, &values);
 
-  for (size_t i = 0; i < keys.size() && i < values.size() && i < 
multistatus.size(); ++i) {
-if (!multistatus[i].ok()) {
-  logger_->log_error("Failed to read key from rocksdb: %s! DB is most 
probably in an inconsistent state!", keys[i].data());
-  keystrings.remove(keys[i].data());
-  continue;
+  {
+// deserialize flow files with missing content claim

Review Comment:
   done






[jira] [Created] (MINIFICPP-2072) Docker test for ListenSyslog/ReplaceText workflow

2023-03-08 Thread Martin Zink (Jira)
Martin Zink created MINIFICPP-2072:
--

 Summary: Docker test for ListenSyslog/ReplaceText workflow
 Key: MINIFICPP-2072
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2072
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Martin Zink
Assignee: Martin Zink


It would be nice if we could check/demonstrate that e.g. a 
ListenSyslog/ReplaceText flow (with Expression Language enabled) can produce a 
wide variety of output formats.





[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1511: MINIFICPP-1716 Recover core dumps from CI

2023-03-08 Thread via GitHub


lordgamez commented on code in PR #1511:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1511#discussion_r1129199398


##
.github/workflows/ci.yml:
##
@@ -37,11 +37,28 @@ jobs:
   export LDFLAGS="-L/usr/local/opt/flex/lib"
   export CPPFLAGS="-I/usr/local/opt/flex/include"
   # CPPFLAGS are not recognized by cmake, so we have to force them to 
CFLAGS and CXXFLAGS to have flex 2.6 working
-  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=Release -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
+  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=RelWithDebInfo -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
   - name: test
-run: cd build && make test ARGS="--timeout 300 -j4 --output-on-failure"
+id: test
+run: |
+  ulimit -c unlimited

Review Comment:
   Sorry, I missed that, updated in 388a7f93bdefc2b22874f325d9ad80c32b4b973c






[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1511: MINIFICPP-1716 Recover core dumps from CI

2023-03-08 Thread via GitHub


fgerlits commented on code in PR #1511:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1511#discussion_r1129182106


##
.github/workflows/ci.yml:
##
@@ -37,11 +37,28 @@ jobs:
   export LDFLAGS="-L/usr/local/opt/flex/lib"
   export CPPFLAGS="-I/usr/local/opt/flex/include"
   # CPPFLAGS are not recognized by cmake, so we have to force them to 
CFLAGS and CXXFLAGS to have flex 2.6 working
-  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=Release -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
+  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=RelWithDebInfo -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
   - name: test
-run: cd build && make test ARGS="--timeout 300 -j4 --output-on-failure"
+id: test
+run: |
+  ulimit -c unlimited

Review Comment:
   looks good, but I would change the other three `ulimit`s to this, too






[GitHub] [nifi] nandorsoma commented on a diff in pull request #6769: NIFI-10955 - Added JASN1Reader the ability to try to adjust for unsupported ASN features

2023-03-08 Thread via GitHub


nandorsoma commented on code in PR #6769:
URL: https://github.com/apache/nifi/pull/6769#discussion_r1129108350


##
nifi-nar-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/java/org/apache/nifi/jasn1/preprocess/preprocessors/ConstraintAsnPreprocessor.java:
##
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.jasn1.preprocess.preprocessors;
+
+import org.apache.nifi.jasn1.preprocess.NiFiASNPreprocessor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class ConstraintAsnPreprocessor implements NiFiASNPreprocessor {
+public static final String OPEN_BRACKET = "(";
+public static final String CLOSE_BRACKET = ")";
+
+public static final Pattern ALLOWED = Pattern.compile("^(\\d+\\))(.*)");
+
+@Override
+public List<String> preprocessAsn(List<String> lines) {
+List<String> preprocessedLines = new ArrayList<>();
+
+AtomicInteger unclosedCounter = new AtomicInteger(0);
+lines.forEach(line -> {
+StringBuilder preprocessedLine = new StringBuilder();
+
+String contentToProcess = line;

Review Comment:
   Could you help me understand why?



##
nifi-nar-bundles/nifi-asn1-bundle/nifi-asn1-services/src/test/java/org/apache/nifi/jasn1/preprocess/AsnPreprocessorEngineTest.java:
##
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.jasn1.preprocess;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import java.io.File;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.StringJoiner;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class AsnPreprocessorEngineTest {
+private AsnPreprocessorEngine testSubject;
+private AsnPreprocessorEngine helper;
+
+private AsnPreprocessor mockPreprocessor1;
+private AsnPreprocessor mockPreprocessor2;
+private List<AsnPreprocessor> preprocessors;
+
+private ComponentLog log;
+
+@BeforeEach
+void setUp() throws Exception {
+mockPreprocessor1 = mock(AsnPreprocessor.class);
+mockPreprocessor2 = mock(AsnPreprocessor.class);
+
+preprocessors = Arrays.asList(
+mockPreprocessor1,
+mockPreprocessor2
+);
+
+log = mock(ComponentLog.class);
+
+helper = mock(AsnPreprocessorEngine.class);
+testSubject = new AsnPreprocessorEngine() {
+@Override
+List<String> readAsnLines(ComponentLog componentLog, String inputFile, Path inputFilePath) {
+return helper.readAsnLines(componentLog, inputFile, 
inputFilePath);
+}
+
+@Override
+void writePreprocessedAsn(ComponentLog componentLog, String 
preprocessedAsn, Path preprocessedAsnPath) {
+helper.writePreprocessedAsn(componentLog, preprocessedAsn, 
preprocessedAsnPath);
+}
+
+   

[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1511: MINIFICPP-1716 Recover core dumps from CI

2023-03-08 Thread via GitHub


lordgamez commented on code in PR #1511:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1511#discussion_r1129159524


##
.github/workflows/ci.yml:
##
@@ -37,11 +37,28 @@ jobs:
   export LDFLAGS="-L/usr/local/opt/flex/lib"
   export CPPFLAGS="-I/usr/local/opt/flex/include"
   # CPPFLAGS are not recognized by cmake, so we have to force them to 
CFLAGS and CXXFLAGS to have flex 2.6 working
-  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=Release -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
+  ./bootstrap.sh -e -t && cd build  && cmake 
-DCMAKE_BUILD_TYPE=RelWithDebInfo -DCI_BUILD=ON -DCMAKE_C_FLAGS="${CPPFLAGS} 
${CFLAGS}" -DCMAKE_CXX_FLAGS="${CPPFLAGS} ${CXXFLAGS}" -DENABLE_SCRIPTING=ON 
-DENABLE_LUA_SCRIPTING=ON -DENABLE_SQL=ON -DUSE_REAL_ODBC_TEST_DRIVER=ON 
-DENABLE_AZURE=ON -DENABLE_GCP=ON -DCMAKE_VERBOSE_MAKEFILE=ON 
-DCMAKE_RULE_MESSAGES=OFF -DSTRICT_GSL_CHECKS=AUDIT -DFAIL_ON_WARNINGS=ON .. && 
cmake --build . --parallel 4
   - name: test
-run: cd build && make test ARGS="--timeout 300 -j4 --output-on-failure"
+id: test
+run: |
+  ulimit -c unlimited

Review Comment:
   Updated in 2a4c5f11348085e852fbf418f04038e08e864445






[GitHub] [nifi] nandorsoma commented on a diff in pull request #6769: NIFI-10955 - Added JASN1Reader the ability to try to adjust for unsupported ASN features

2023-03-08 Thread via GitHub


nandorsoma commented on code in PR #6769:
URL: https://github.com/apache/nifi/pull/6769#discussion_r1129100470


##
nifi-nar-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/java/org/apache/nifi/jasn1/JASN1Reader.java:
##
@@ -134,17 +135,32 @@ public class JASN1Reader extends 
AbstractConfigurableComponent implements Record
 .required(false)
 .build();
 
+private static final PropertyDescriptor PREPROCESS_OUTPUT_DIRECTORY = new 
PropertyDescriptor.Builder()
+.name("additional-preprocessing-output-directory")
+.displayName("Additional Preprocessing Output Directory")
+.description("When set, NiFi will do additional preprocessing steps 
that creates modified versions of the provided ASN files," +
+" removing unsupported features in a way that makes them less 
strict but otherwise should still be compatible with incoming data." +
+" The original files will remain intact and new ones will be 
created with the same names in the provided directory." +
+" For more information about these additional preprocessing 
steps please see Additional Details - Additional Preprocessing.")
+.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)

Review Comment:
   Any thoughts on this? It is still there but in the new property.


