[jira] [Updated] (NIFI-13964) Upgrade camel-salesforce from 3.22.2 to 4.8.1

2024-11-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13964:
--
Fix Version/s: 2.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade camel-salesforce from 3.22.2 to 4.8.1
> -
>
> Key: NIFI-13964
> URL: https://issues.apache.org/jira/browse/NIFI-13964
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.1.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Apache Camel Salesforce 3.x library is EOL 
> (https://camel.apache.org/categories/Roadmap/). Upgrade to the 4.x line.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13951) Only Enforce policy is available in the UI for Flow Analysis Rules

2024-11-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13951:
--
Fix Version/s: 2.1.0

> Only Enforce policy is available in the UI for Flow Analysis Rules
> --
>
> Key: NIFI-13951
> URL: https://issues.apache.org/jira/browse/NIFI-13951
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Pierre Villard
>Assignee: Shane Ardell
>Priority: Minor
> Fix For: 2.1.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In the UI when configuring a Flow Analysis Rule, in the Settings tab, there 
> is a dropdown list for the enforcement policy. Right now, only ENFORCE is 
> available when two options should be available: ENFORCE and WARN.
> I believe this is located here:
> [https://github.com/apache/nifi/pull/8241/files#diff-9fe55d45b82880244c1dd961b38bd57e5108c5f2d1c8e6a7bc297efd5c7fcdb2R82-R88]
> Because of this, right now, when a rule is added and no modification of the 
> configuration is done, the rule will have a "WARN" enforcement policy because 
> this is the current default in the backend.
> [https://github.com/apache/nifi/blob/main/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/flowanalysis/AbstractFlowAnalysisRuleNode.java#L104]
> Only when the configuration of the rule is changed will it switch to 
> ENFORCE, since that is the only option in the UI.
> The WARN enforcement policy should be added to the dropdown list and should 
> be the default to align with the backend.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13955) Filter out .directories in Git Registry clients

2024-10-31 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13955:
--
Status: Patch Available  (was: In Progress)

> Filter out .directories in Git Registry clients
> ---
>
> Key: NIFI-13955
> URL: https://issues.apache.org/jira/browse/NIFI-13955
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> If a Git Registry Client (GitHub/GitLab) is configured without a repo path 
> (looking at the root of the repo), we should filter out directories with 
> names starting with a dot (as we may have directories like .github), 
> otherwise they will show up as potential buckets.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13955) Filter out .directories in Git Registry clients

2024-10-31 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13955:
-

 Summary: Filter out .directories in Git Registry clients
 Key: NIFI-13955
 URL: https://issues.apache.org/jira/browse/NIFI-13955
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
Assignee: Pierre Villard


If a Git Registry Client (GitHub/GitLab) is configured without a repo path 
(looking at the root of the repo), we should filter out directories with names 
starting with a dot (as we may have directories like .github), otherwise they 
will show up as potential buckets.
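The intended filtering can be sketched as below; the class and method names are illustrative, not the actual Git registry client code:

```java
import java.util.List;
import java.util.stream.Collectors;

public class DotDirectoryFilter {
    // Keep only directories whose names do not start with a dot, so entries
    // like .github or .gitlab are not presented as potential buckets.
    public static List<String> filterBuckets(List<String> directoryNames) {
        return directoryNames.stream()
                .filter(name -> !name.startsWith("."))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> dirs = List.of(".github", "flows", ".gitlab", "prod");
        System.out.println(filterBuckets(dirs)); // [flows, prod]
    }
}
```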



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13952) Flow Analysis Rule to restrict backpressure configuration

2024-10-31 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13952:
-

 Summary: Flow Analysis Rule to restrict backpressure configuration
 Key: NIFI-13952
 URL: https://issues.apache.org/jira/browse/NIFI-13952
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
Assignee: Pierre Villard


Add a rule allowing an admin to configure a min/max threshold for the 
backpressure object count setting as well as a min/max threshold for the 
backpressure data size setting.
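The kind of check such a rule could perform can be sketched as below; the class name, constants, and bounds are hypothetical, not the actual rule implementation:

```java
public class BackpressureThresholdCheck {
    // Hypothetical admin-configured bounds (values illustrative).
    static final long MIN_OBJECT_COUNT = 1_000;
    static final long MAX_OBJECT_COUNT = 50_000;
    static final long MIN_DATA_SIZE_BYTES = 10L * 1024 * 1024;        // 10 MB
    static final long MAX_DATA_SIZE_BYTES = 10L * 1024 * 1024 * 1024; // 10 GB

    // True when a connection's backpressure settings fall within both
    // the object-count bounds and the data-size bounds.
    public static boolean isCompliant(long objectThreshold, long dataSizeThresholdBytes) {
        return objectThreshold >= MIN_OBJECT_COUNT && objectThreshold <= MAX_OBJECT_COUNT
                && dataSizeThresholdBytes >= MIN_DATA_SIZE_BYTES
                && dataSizeThresholdBytes <= MAX_DATA_SIZE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(isCompliant(10_000, 1024L * 1024 * 1024)); // true
        System.out.println(isCompliant(100, 1024));                   // false
    }
}
```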



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13952) Flow Analysis Rule to restrict backpressure configuration

2024-10-31 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13952:
--
Status: Patch Available  (was: Open)

> Flow Analysis Rule to restrict backpressure configuration
> -
>
> Key: NIFI-13952
> URL: https://issues.apache.org/jira/browse/NIFI-13952
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Add a rule allowing an admin to configure a min/max threshold for the 
> backpressure object count setting as well as a min/max threshold for the 
> backpressure data size setting.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13953) Improve Go To for Rule Violations

2024-10-31 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13953:
-

 Summary: Improve Go To for Rule Violations
 Key: NIFI-13953
 URL: https://issues.apache.org/jira/browse/NIFI-13953
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
 Attachments: Screenshot 2024-10-31 at 13.49.34.png, Screenshot 
2024-10-31 at 13.49.50.png

When violations are reported and the associated component is not a processor, 
there is no Go To option. In the examples below, the violation is for a 
connection. It would be useful to have the Go To option for any component 
type.
{code:java}
        {
            "enforcementPolicy": "ENFORCE",
            "scope": "e26effa4-0192-1000-7826-18bfabd139ac",
            "subjectId": "e270eaa4-0192-1000-0622-8f9af5319328",
            "subjectDisplayName": "Funnel > ",
            "groupId": "e26effa4-0192-1000-7826-18bfabd139ac",
            "ruleId": "e23b4273-0192-1000-a903-f0bcfdcdc86f",
            "issueId": "xxx",
            "violationMessage": "xxx",
            "subjectComponentType": "CONNECTION",
            "subjectPermissionDto": {
                "canRead": true,
                "canWrite": true
            },
            "enabled": false
        }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13951) Only Enforce policy is available in the UI for Flow Analysis Rules

2024-10-31 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13951:
-

 Summary: Only Enforce policy is available in the UI for Flow 
Analysis Rules
 Key: NIFI-13951
 URL: https://issues.apache.org/jira/browse/NIFI-13951
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Pierre Villard


In the UI when configuring a Flow Analysis Rule, in the Settings tab, there is 
a dropdown list for the enforcement policy. Right now, only ENFORCE is 
available when two options should be available: ENFORCE and WARN.

I believe this is located here:

[https://github.com/apache/nifi/pull/8241/files#diff-9fe55d45b82880244c1dd961b38bd57e5108c5f2d1c8e6a7bc297efd5c7fcdb2R82-R88]

Because of this, right now, when a rule is added and no modification of the 
configuration is done, the rule will have a "WARN" enforcement policy because 
this is the current default in the backend.

[https://github.com/apache/nifi/blob/main/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/flowanalysis/AbstractFlowAnalysisRuleNode.java#L104]

Only when the configuration of the rule is changed will it switch to ENFORCE, 
since that is the only option in the UI.

The WARN enforcement policy should be added to the dropdown list and should be 
the default to align with the backend.
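The expected alignment can be sketched as follows; the enum and method names are illustrative, not the actual UI or backend code:

```java
import java.util.List;

public class EnforcementPolicyOptions {
    enum EnforcementPolicy { ENFORCE, WARN }

    // The UI dropdown should offer both enforcement policies...
    static List<EnforcementPolicy> dropdownOptions() {
        return List.of(EnforcementPolicy.ENFORCE, EnforcementPolicy.WARN);
    }

    // ...and preselect WARN, matching the backend default in
    // AbstractFlowAnalysisRuleNode.
    static EnforcementPolicy defaultPolicy() {
        return EnforcementPolicy.WARN;
    }

    public static void main(String[] args) {
        System.out.println(dropdownOptions()); // [ENFORCE, WARN]
        System.out.println(defaultPolicy());   // WARN
    }
}
```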



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13950) NiFi CLI - add commands to list branch, bucket, flows, versions via reg-client

2024-10-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13950:
--
Status: Patch Available  (was: Open)

> NiFi CLI - add commands to list branch, bucket, flows, versions via reg-client
> --
>
> Key: NIFI-13950
> URL: https://issues.apache.org/jira/browse/NIFI-13950
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Currently, the CLI only provides access to branches / buckets / flows and 
> versions via the use of
> {code:java}
> cli.sh registry ...{code}
> which only works when using the NiFi Registry.
> Now that Registry Client is an extension point and we have additional 
> implementations, we should add CLI commands offering read-only access 
> through a given registry client for listing branches / buckets / flows and 
> versions. This will help users when they want to import a given flow with 
> {code:java}
> cli.sh nifi pg-import ...{code}
> This is particularly useful for versions because, when using a GitHub 
> Registry Client, the version is a commit ID, which is not always easy to 
> retrieve or provide.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13950) NiFi CLI - add commands to list branch, bucket, flows, versions via reg-client

2024-10-30 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13950:
-

 Summary: NiFi CLI - add commands to list branch, bucket, flows, 
versions via reg-client
 Key: NIFI-13950
 URL: https://issues.apache.org/jira/browse/NIFI-13950
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
Assignee: Pierre Villard


Currently, the CLI only provides access to branches / buckets / flows and 
versions via the use of
{code:java}
cli.sh registry ...{code}
which only works when using the NiFi Registry.

Now that Registry Client is an extension point and we have additional 
implementations, we should add CLI commands offering read-only access through 
a given registry client for listing branches / buckets / flows and versions. 
This will help users when they want to import a given flow with 
{code:java}
cli.sh nifi pg-import ...{code}
This is particularly useful for versions because, when using a GitHub Registry 
Client, the version is a commit ID, which is not always easy to retrieve or 
provide.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13905) NiFi Toolkit should allow creation of new-style external Registry Clients (e.g. GitHub/GitLab)

2024-10-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13905:
--
Status: Patch Available  (was: Open)

> NiFi Toolkit should allow creation of new-style external Registry Clients 
> (e.g. GitHub/GitLab)
> --
>
> Key: NIFI-13905
> URL: https://issues.apache.org/jira/browse/NIFI-13905
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M4
>Reporter: Chris Sampson
>Assignee: Pierre Villard
>Priority: Minor
>
> It would be useful if the NiFi Toolkit were able to set up non-{{NiFi 
> Registry}} Registry Clients, e.g. the new-style {{GitHub}} or {{GitLab}} 
> clients.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13905) NiFi Toolkit should allow creation of new-style external Registry Clients (e.g. GitHub/GitLab)

2024-10-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-13905:
-

Assignee: Pierre Villard

> NiFi Toolkit should allow creation of new-style external Registry Clients 
> (e.g. GitHub/GitLab)
> --
>
> Key: NIFI-13905
> URL: https://issues.apache.org/jira/browse/NIFI-13905
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M4
>Reporter: Chris Sampson
>Assignee: Pierre Villard
>Priority: Minor
>
> It would be useful if the NiFi Toolkit were able to set up non-{{NiFi 
> Registry}} Registry Clients, e.g. the new-style {{GitHub}} or {{GitLab}} 
> clients.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13948) NiFi CLI - Add command pg-list-processors

2024-10-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13948:
--
Status: Patch Available  (was: Open)

> NiFi CLI - Add command pg-list-processors
> -
>
> Key: NIFI-13948
> URL: https://issues.apache.org/jira/browse/NIFI-13948
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Add a CLI command to recursively list the processors in a given process 
> group. Also add an option to specify a filter that expects either 'source' 
> or 'destination' as a value.
> A source processor is defined as a processor that has no input relationship 
> (unless it is a self-loop) and has at least one output relationship.
> A destination processor is defined as a processor that has no output 
> relationship (unless it is a self-loop) and has at least one input 
> relationship.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13948) NiFi CLI - Add command pg-list-processors

2024-10-29 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13948:
-

 Summary: NiFi CLI - Add command pg-list-processors
 Key: NIFI-13948
 URL: https://issues.apache.org/jira/browse/NIFI-13948
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
Assignee: Pierre Villard


Add a CLI command to recursively list the processors in a given process group. 
Also add an option to specify a filter that expects either 'source' or 
'destination' as a value.

A source processor is defined as a processor that has no input relationship 
(unless it is a self-loop) and has at least one output relationship.

A destination processor is defined as a processor that has no output 
relationship (unless it is a self-loop) and has at least one input 
relationship.
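The source/destination definitions above can be sketched as below; the simplified connection model and class names are illustrative, not the actual CLI code:

```java
import java.util.List;

public class ProcessorFilter {
    // Simplified connection model: source and destination processor IDs.
    record Connection(String sourceId, String destinationId) {}

    // A "source" processor has no incoming connection (ignoring self-loops)
    // and at least one outgoing connection.
    static boolean isSource(String id, List<Connection> connections) {
        boolean hasInput = connections.stream()
                .anyMatch(c -> c.destinationId().equals(id) && !c.sourceId().equals(id));
        boolean hasOutput = connections.stream()
                .anyMatch(c -> c.sourceId().equals(id) && !c.destinationId().equals(id));
        return !hasInput && hasOutput;
    }

    // A "destination" processor has no outgoing connection (ignoring
    // self-loops) and at least one incoming connection.
    static boolean isDestination(String id, List<Connection> connections) {
        boolean hasInput = connections.stream()
                .anyMatch(c -> c.destinationId().equals(id) && !c.sourceId().equals(id));
        boolean hasOutput = connections.stream()
                .anyMatch(c -> c.sourceId().equals(id) && !c.destinationId().equals(id));
        return hasInput && !hasOutput;
    }

    public static void main(String[] args) {
        List<Connection> conns = List.of(
                new Connection("A", "B"),
                new Connection("B", "B"), // self-loop, ignored
                new Connection("B", "C"));
        System.out.println(isSource("A", conns));      // true
        System.out.println(isDestination("C", conns)); // true
        System.out.println(isSource("B", conns));      // false
    }
}
```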



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-29 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17893993#comment-17893993
 ] 

Pierre Villard commented on NIFI-13843:
---

The change introduced here is not limited to the JSON Reader, and the changed 
code has been in place for many years in 1.x:

[https://github.com/apache/nifi/blame/support/nifi-1.x/nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/RecordReader.java#L50]

So if a fix is requested on 1.x, someone would need to identify what change 
introduced the regression for JSON specifically between 1.25 and 1.26.

> Unknown fields not dropped by JSON Writer as expected by specified schema
> -
>
> Key: NIFI-13843
> URL: https://issues.apache.org/jira/browse/NIFI-13843
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Consider the following use case:
>  * GFF Processor, generating a JSON with 3 fields: a, b, and c
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** Both reader and writer are configured with a schema only specifying 
> fields a and b
> The expected result is a JSON that only contains fields a and b.
> We're following the below path in the code:
>  * AbstractRecordProcessor (L131)
> {code:java}
> Record firstRecord = reader.nextRecord(); {code}
> In this case, the default method for nextRecord() is defined in RecordReader 
> (L50)
> {code:java}
> default Record nextRecord() throws IOException, MalformedRecordException {
> return nextRecord(true, false);
> } {code}
> where we are NOT dropping the unknown fields (the Javadoc needs fixing here, 
> as it says the opposite)
> We get to 
> {code:java}
> writer.write(firstRecord); {code}
> which gets us to
>  * WriteJsonResult (L206)
> Here, we do a check
> {code:java}
> isUseSerializeForm(record, writeSchema) {code}
> which currently returns true when it should not. Because of this, we write 
> the serialized form, which ignores the writer schema.
> In this method isUseSerializeForm(), we do check
> {code:java}
> record.getSchema().equals(writeSchema) {code}
> But at this point record.getSchema() returns the schema defined in the reader 
> which is equal to the one defined in the writer - even though the record has 
> additional fields compared to the defined schema.
> The suggested fix is to also add a check on
> {code:java}
> record.isDropUnknownFields() {code}
> If dropUnknownFields is false, then we do not use the serialized form.
> While this does solve the issue, I'm a bit conflicted on the current 
> approach. Not only could this have a performance impact (we will likely use 
> the serialized form less often), but it also feels like the default should 
> be to ignore the unknown fields when reading the record.
> If we consider the below scenario:
>  * GFF Processor, generating a JSON with 3 fields: {{a}}, {{b}}, and {{c}}
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** JSON reader with a schema only specifying fields {{a}} and {{b}}
>  ** JSON writer with a schema specifying fields {{a}}, {{b}}, and {{c}} 
> ({{c}} defaulting to {{null}})
> It feels like the expected result should be a JSON with the field {{c}} and 
> a {{null}} value, because the reader would drop the field when reading the 
> JSON and converting it into a record before passing it to the writer.
> If we agree on the above, then it may be easier to just override 
> {{nextRecord()}} in {{AbstractJsonRowRecordReader}} and default to 
> {{nextRecord(true, true)}}.
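The suggested check described above can be sketched as follows; the simplified record model and method names are illustrative, not the actual WriteJsonResult code:

```java
import java.util.Objects;

public class SerializedFormCheck {
    // Simplified stand-in for a NiFi Record: a schema identifier, the cached
    // serialized form, and the dropUnknownFields flag set when it was read.
    record SimpleRecord(String schemaId, String serializedJson, boolean dropUnknownFields) {}

    // Only reuse the serialized form when the schemas match AND the record
    // was read with dropUnknownFields=true; otherwise the serialized form may
    // still carry fields the writer schema should exclude.
    static boolean useSerializedForm(SimpleRecord record, String writeSchemaId) {
        return record.serializedJson() != null
                && Objects.equals(record.schemaId(), writeSchemaId)
                && record.dropUnknownFields();
    }

    public static void main(String[] args) {
        SimpleRecord strict = new SimpleRecord("s1", "{\"a\":1}", true);
        SimpleRecord loose = new SimpleRecord("s1", "{\"a\":1,\"c\":3}", false);
        System.out.println(useSerializedForm(strict, "s1")); // true
        System.out.println(useSerializedForm(loose, "s1"));  // false
    }
}
```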



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13362) JSONRecordSetWriter does not account for schema changes when writing serialized form

2024-10-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13362.
---
Fix Version/s: 2.0.0
   Resolution: Fixed

> JSONRecordSetWriter does not account for schema changes when writing 
> serialized form
> 
>
> Key: NIFI-13362
> URL: https://issues.apache.org/jira/browse/NIFI-13362
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Sander Bylemans
>Priority: Critical
> Fix For: 2.0.0
>
>
> When using the RemoveRecordField processor with the JsonRecordSetWriter as a 
> writer, I came across an issue where not all fields were removed in the 
> resulting records.
> When debugging, I noticed the JsonRecordSetWriter uses the WriteJsonResult, 
> which checks if there is a serialized form of the record. If there is, it 
> just uses that even though the serialized form may contain fields that are 
> not present anymore.
> There is a check on whether the schema of the record is the same as the 
> target schema, but the serialized form does not account for the schema 
> change introduced by the RemoveRecordField processor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-29 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17893988#comment-17893988
 ] 

Pierre Villard commented on NIFI-13843:
---

Hey [~dstiegli1] - given that this can be considered a breaking change, I'd 
not recommend backporting this to 1.x.

> Unknown fields not dropped by JSON Writer as expected by specified schema
> -
>
> Key: NIFI-13843
> URL: https://issues.apache.org/jira/browse/NIFI-13843
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Consider the following use case:
>  * GFF Processor, generating a JSON with 3 fields: a, b, and c
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** Both reader and writer are configured with a schema only specifying 
> fields a and b
> The expected result is a JSON that only contains fields a and b.
> We're following the below path in the code:
>  * AbstractRecordProcessor (L131)
> {code:java}
> Record firstRecord = reader.nextRecord(); {code}
> In this case, the default method for nextRecord() is defined in RecordReader 
> (L50)
> {code:java}
> default Record nextRecord() throws IOException, MalformedRecordException {
> return nextRecord(true, false);
> } {code}
> where we are NOT dropping the unknown fields (the Javadoc needs fixing here, 
> as it says the opposite)
> We get to 
> {code:java}
> writer.write(firstRecord); {code}
> which gets us to
>  * WriteJsonResult (L206)
> Here, we do a check
> {code:java}
> isUseSerializeForm(record, writeSchema) {code}
> which currently returns true when it should not. Because of this, we write 
> the serialized form, which ignores the writer schema.
> In this method isUseSerializeForm(), we do check
> {code:java}
> record.getSchema().equals(writeSchema) {code}
> But at this point record.getSchema() returns the schema defined in the reader 
> which is equal to the one defined in the writer - even though the record has 
> additional fields compared to the defined schema.
> The suggested fix is to also add a check on
> {code:java}
> record.isDropUnknownFields() {code}
> If dropUnknownFields is false, then we do not use the serialized form.
> While this does solve the issue, I'm a bit conflicted on the current 
> approach. Not only could this have a performance impact (we will likely use 
> the serialized form less often), but it also feels like the default should 
> be to ignore the unknown fields when reading the record.
> If we consider the below scenario:
>  * GFF Processor, generating a JSON with 3 fields: {{a}}, {{b}}, and {{c}}
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** JSON reader with a schema only specifying fields {{a}} and {{b}}
>  ** JSON writer with a schema specifying fields {{a}}, {{b}}, and {{c}} 
> ({{c}} defaulting to {{null}})
> It feels like the expected result should be a JSON with the field {{c}} and 
> a {{null}} value, because the reader would drop the field when reading the 
> JSON and converting it into a record before passing it to the writer.
> If we agree on the above, then it may be easier to just override 
> {{nextRecord()}} in {{AbstractJsonRowRecordReader}} and default to 
> {{nextRecord(true, true)}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13937) NiFi CLI - Add command pg-empty-queues

2024-10-28 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13937:
-

 Summary: NiFi CLI - Add command pg-empty-queues
 Key: NIFI-13937
 URL: https://issues.apache.org/jira/browse/NIFI-13937
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Pierre Villard
Assignee: Pierre Villard


Add a command to the NiFi CLI to recursively empty all queues in a process 
group.
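The recursive traversal can be sketched as below; the toy process-group model is illustrative (the real command would issue a drop request per connection via the NiFi REST API):

```java
import java.util.ArrayList;
import java.util.List;

public class EmptyQueuesSketch {
    // Toy model of a process group: its own connection (queue) IDs plus any
    // child process groups.
    record ProcessGroup(String id, List<String> connectionIds, List<ProcessGroup> children) {}

    // Recursively collect every queue in the group and its descendants; each
    // collected connection would then be emptied with a drop request.
    static List<String> collectQueues(ProcessGroup group) {
        List<String> queues = new ArrayList<>(group.connectionIds());
        for (ProcessGroup child : group.children()) {
            queues.addAll(collectQueues(child));
        }
        return queues;
    }

    public static void main(String[] args) {
        ProcessGroup leaf = new ProcessGroup("child", List.of("q2", "q3"), List.of());
        ProcessGroup root = new ProcessGroup("root", List.of("q1"), List.of(leaf));
        System.out.println(collectQueues(root)); // [q1, q2, q3]
    }
}
```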



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13937) NiFi CLI - Add command pg-empty-queues

2024-10-28 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13937:
--
Status: Patch Available  (was: Open)

> NiFi CLI - Add command pg-empty-queues
> --
>
> Key: NIFI-13937
> URL: https://issues.apache.org/jira/browse/NIFI-13937
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Add a command to the NiFi CLI to recursively empty all queues in a process 
> group.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13936) Remove GCP PubSub Lite components

2024-10-27 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13936:
--
Fix Version/s: 2.0.0

> Remove GCP PubSub Lite components
> -
>
> Key: NIFI-13936
> URL: https://issues.apache.org/jira/browse/NIFI-13936
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0
>
>
> Pub/Sub Lite is deprecated. Effective March 18, 2026, Pub/Sub Lite will be 
> turned down.
>  * Current customers: Pub/Sub Lite remains functional until March 18, 2026. 
> If you have not used Pub/Sub Lite within the 90-day period preceding 
> September 24, 2024 (June 26, 2024 - September 24, 2024), you won't be able to 
> access Pub/Sub Lite starting on September 24, 2024.
>  * New customers: Pub/Sub Lite is no longer available for new customers after 
> September 24, 2024.
> See - 
> [https://cloud.google.com/pubsub/lite/docs/migrate-pubsub-lite-to-pubsub]
> Given the upcoming 2.0 release, it's better to just remove the components. 
> Anyone who still needs them can download the NAR from 2.0-M4 to get these 
> specific processors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13936) Remove GCP PubSub Lite components

2024-10-27 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13936:
--
Status: Patch Available  (was: Open)

> Remove GCP PubSub Lite components
> -
>
> Key: NIFI-13936
> URL: https://issues.apache.org/jira/browse/NIFI-13936
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Pub/Sub Lite is deprecated. Effective March 18, 2026, Pub/Sub Lite will be 
> turned down.
>  * Current customers: Pub/Sub Lite remains functional until March 18, 2026. 
> If you have not used Pub/Sub Lite within the 90-day period preceding 
> September 24, 2024 (June 26, 2024 - September 24, 2024), you won't be able to 
> access Pub/Sub Lite starting on September 24, 2024.
>  * New customers: Pub/Sub Lite is no longer available for new customers after 
> September 24, 2024.
> See - 
> [https://cloud.google.com/pubsub/lite/docs/migrate-pubsub-lite-to-pubsub]
> Given the upcoming 2.0 release, it's better to just remove the components. 
> Anyone who still needs them can download the NAR from 2.0-M4 to get these 
> specific processors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13936) Remove GCP PubSub Lite components

2024-10-27 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13936:
-

 Summary: Remove GCP PubSub Lite components
 Key: NIFI-13936
 URL: https://issues.apache.org/jira/browse/NIFI-13936
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


Pub/Sub Lite is deprecated. Effective March 18, 2026, Pub/Sub Lite will be 
turned down.
 * Current customers: Pub/Sub Lite remains functional until March 18, 2026. If 
you have not used Pub/Sub Lite within the 90-day period preceding September 24, 
2024 (June 26, 2024 - September 24, 2024), you won't be able to access Pub/Sub 
Lite starting on September 24, 2024.
 * New customers: Pub/Sub Lite is no longer available for new customers after 
September 24, 2024.

See - [https://cloud.google.com/pubsub/lite/docs/migrate-pubsub-lite-to-pubsub]

Given the upcoming 2.0 release, it's better to just remove the components. 
Anyone who still needs them can download the NAR from 2.0-M4 to get these 
specific processors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13930) PutAzureDataLakeStorage does not cause Azure to emit a FlushWithClose event on file write

2024-10-27 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13930:
--
Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutAzureDataLakeStorage does not cause Azure to emit a FlushWithClose event 
> on file write
> -
>
> Key: NIFI-13930
> URL: https://issues.apache.org/jira/browse/NIFI-13930
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.23.2
> Environment: amd64, Windows Server 2022, Java 21.0.3
>Reporter: Mark Ward
>Assignee: Peter Turcsanyi
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hi,
> Note: I am raising this issue on Databricks's behalf; they've requested Jira 
> access and are currently awaiting approval.
> We use NiFi to write files to an Azure Storage account where our Databricks 
> workspace can ingest the files using an Azure Queue and Databricks's File 
> Notification feature, which then initiates a workflow/job.
> However, files are not being picked up in a timely manner by Databricks.
> We raised this with Databricks and they investigated; their conclusion is 
> that NiFi's behaviour when completing the file write, and the subsequent 
> rename, does not emit an event type that would normally be expected.
> Please see Databricks's summary below for information:
> {quote} # Customer uses Apache NiFi 1.23.2, which performs the following 
> operations via the Azure API 
> ([src|https://github.com/apache/nifi/blob/rel/nifi-1.23.2/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/PutAzureDataLakeStorage.java#L149-L166])
>  ## Create a temp file using the [Path 
> Create|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/create?view=rest-storageservices-datalakestoragegen2-2019-12-12]
>  API. In this case Azure emits a 
> [BlobCreated|https://learn.microsoft.com/en-us/azure/event-grid/event-schema-blob-storage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=event-grid-event-schema#microsoftstorageblobcreated-event-data-lake-storage-gen2-1]
>  event with the {{api}} field set to {{CreateFile}} (which is purposefully 
> not processed by CSMS - 
> [source|https://github.com/databricks/universe/blob/fc4a34b61abe58fbe363cc55cdb67edd480f985e/jobs-cloud-storage-meta/src/cloud/azure/resourcemanagement/AqsEventGridResourceManagementClient.scala#L366]).
>  ## 
> [Append|https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/storage/azure-storage-file-datalake/src/main/java/com/azure/storage/file/datalake/DataLakeFileClient.java#L656]
>  content to the file; there is no file event for this operation.
>  ## 
> [Flush|https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/storage/azure-storage-file-datalake/src/main/java/com/azure/storage/file/datalake/DataLakeFileClient.java#L930]
>  the appended file content. NiFi’s implementation flushes the file 
> without closing it 
> ([source|https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/storage/azure-storage-file-datalake/src/main/java/com/azure/storage/file/datalake/DataLakeFileClient.java#L935]);
>  as a result, Azure +doesn’t+ emit a {{FlushWithClose}} event (CSMS processes 
> this event type).
>  ## 
> [Rename|https://learn.microsoft.com/en-us/dotnet/api/azure.storage.files.datalake.datalakefileclient.rename?view=azure-dotnet]
>  the temp file to its final name, this generates a 
> [BlobRenamed|https://learn.microsoft.com/en-us/azure/event-grid/event-schema-blob-storage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=event-grid-event-schema#microsoftstorageblobrenamed-event-data-lake-storage-gen2-1]
>  that’s processed by CSMS (but ignored, see below).
>  # The behaviour observed by the customer is explained by:
>  ## When (1d) happens, CSMS gets the BlobRenamed event for a file for which 
> it has no metadata about the source file. The BlobRenamed event 
> doesn’t contain all the information needed by CSMS (it misses the {{etag}} 
> and the {{{}blob size{}}}) and therefore [CSMS ignores the 
> event|https://github.com/databricks/universe/blob/master/jobs-cloud-storage-meta/src/storage/StorageHelper.scala#L600]
>  and no object is created.
>  ## When CSMS performs the daily full scan, it finds the renamed file and 
> creates an object for it in its database. This causes the file arrival 
> trigger to find the file and trigger a (delayed) run.{quote}
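The event sequence above can be sketched as a toy simulation. This is not the Azure SDK or CSMS code; the event names, the `flush_with_close` flag, and the consumer rules are assumptions taken from the quoted summary, modeling why a flush without close leaves the file invisible to the event-driven consumer until the daily full scan:

```python
# Toy model of which storage events each write step emits, per the summary
# above. Not the Azure SDK; names mirror the quoted description.
def write_file(flush_with_close):
    events = [("BlobCreated", "CreateFile")]  # 1a: Path Create API
    # 1b: Append -> no event emitted
    if flush_with_close:                      # 1c: Flush; close flag decides
        events.append(("FlushWithClose", "FlushWithClose"))
    events.append(("BlobRenamed", None))      # 1d: rename (no etag/size)
    return events

def consumer_sees_file(events):
    # The consumer ignores BlobCreated with api=CreateFile, and ignores
    # BlobRenamed because it lacks the etag and blob size it needs.
    return any(name == "FlushWithClose" for name, _ in events)

print(consumer_sees_file(write_file(flush_with_close=False)))  # False
print(consumer_sees_file(write_file(flush_with_close=True)))   # True
```

With `flush_with_close=False` (NiFi's current behaviour per the summary), no event the consumer acts on is ever emitted, which matches the delayed-trigger symptom described above.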



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13927) PublishGCPPubSub processor stop working and is stucked when using Record Oriented mode

2024-10-25 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-13927:
-

Assignee: Pierre Villard

> PublishGCPPubSub processor stop working and is stucked when using Record 
> Oriented mode
> --
>
> Key: NIFI-13927
> URL: https://issues.apache.org/jira/browse/NIFI-13927
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.26.0
>Reporter: Julien G.
>Assignee: Pierre Villard
>Priority: Major
> Attachments: EXAMPLE.json, nifi.tdump
>
>
> When using the PublishGCPPubSub processor in Record Oriented mode, the 
> processor gets stuck and stops processing FlowFiles. When the processor is 
> terminated, the thread isn't released.
> In FlowFile Oriented mode, it works fine.
> It seems to be linked to how many records are in the FlowFile: with a single 
> record in the FlowFile there seems to be no issue (or maybe the problem just 
> takes longer to appear).
> With more records in the FlowFile, it won't work. To make the processor work 
> again you either need to restart the node or remove and recreate the 
> processor. But that is only temporary, because at some point it will get 
> stuck again. Once stuck, the processor won't recover even after days (we had 
> a processor stuck for 3 days straight).
> Attached are a thread dump taken while the processor was running and stuck, 
> with ~10 threads not released, and an example FlowFile of 500 records that 
> instantly gets stuck in the publish.





[jira] [Updated] (NIFI-13927) PublishGCPPubSub processor stop working and is stucked when using Record Oriented mode

2024-10-25 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13927:
--
Issue Type: Bug  (was: Improvement)

> PublishGCPPubSub processor stop working and is stucked when using Record 
> Oriented mode
> --
>
> Key: NIFI-13927
> URL: https://issues.apache.org/jira/browse/NIFI-13927
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.26.0
>Reporter: Julien G.
>Priority: Major
> Attachments: EXAMPLE.json, nifi.tdump
>
>
> When using the PublishGCPPubSub processor in Record Oriented mode, the 
> processor gets stuck and stops processing FlowFiles. When the processor is 
> terminated, the thread isn't released.
> In FlowFile Oriented mode, it works fine.
> It seems to be linked to how many records are in the FlowFile: with a single 
> record in the FlowFile there seems to be no issue (or maybe the problem just 
> takes longer to appear).
> With more records in the FlowFile, it won't work. To make the processor work 
> again you either need to restart the node or remove and recreate the 
> processor. But that is only temporary, because at some point it will get 
> stuck again. Once stuck, the processor won't recover even after days (we had 
> a processor stuck for 3 days straight).
> Attached are a thread dump taken while the processor was running and stuck, 
> with ~10 threads not released, and an example FlowFile of 500 records that 
> instantly gets stuck in the publish.





[jira] [Updated] (NIFI-13927) PublishGCPPubSub processor stop working and is stucked when using Record Oriented mode

2024-10-24 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13927:
--
Affects Version/s: 2.0.0-M4

> PublishGCPPubSub processor stop working and is stucked when using Record 
> Oriented mode
> --
>
> Key: NIFI-13927
> URL: https://issues.apache.org/jira/browse/NIFI-13927
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M4
>Reporter: Julien G.
>Assignee: Pierre Villard
>Priority: Major
> Attachments: EXAMPLE.json, nifi.tdump
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When using the PublishGCPPubSub processor in Record Oriented mode, the 
> processor gets stuck and stops processing FlowFiles. When the processor is 
> terminated, the thread isn't released.
> In FlowFile Oriented mode, it works fine.
> It seems to be linked to how many records are in the FlowFile: with a single 
> record in the FlowFile there seems to be no issue (or maybe the problem just 
> takes longer to appear).
> With more records in the FlowFile, it won't work. To make the processor work 
> again you either need to restart the node or remove and recreate the 
> processor. But that is only temporary, because at some point it will get 
> stuck again. Once stuck, the processor won't recover even after days (we had 
> a processor stuck for 3 days straight).
> Attached are a thread dump taken while the processor was running and stuck, 
> with ~10 threads not released, and an example FlowFile of 500 records that 
> instantly gets stuck in the publish.





[jira] [Updated] (NIFI-13927) PublishGCPPubSub processor stop working and is stucked when using Record Oriented mode

2024-10-24 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13927:
--
Status: Patch Available  (was: Open)

> PublishGCPPubSub processor stop working and is stucked when using Record 
> Oriented mode
> --
>
> Key: NIFI-13927
> URL: https://issues.apache.org/jira/browse/NIFI-13927
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.26.0
>Reporter: Julien G.
>Assignee: Pierre Villard
>Priority: Major
> Attachments: EXAMPLE.json, nifi.tdump
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When using the PublishGCPPubSub processor in Record Oriented mode, the 
> processor gets stuck and stops processing FlowFiles. When the processor is 
> terminated, the thread isn't released.
> In FlowFile Oriented mode, it works fine.
> It seems to be linked to how many records are in the FlowFile: with a single 
> record in the FlowFile there seems to be no issue (or maybe the problem just 
> takes longer to appear).
> With more records in the FlowFile, it won't work. To make the processor work 
> again you either need to restart the node or remove and recreate the 
> processor. But that is only temporary, because at some point it will get 
> stuck again. Once stuck, the processor won't recover even after days (we had 
> a processor stuck for 3 days straight).
> Attached are a thread dump taken while the processor was running and stuck, 
> with ~10 threads not released, and an example FlowFile of 500 records that 
> instantly gets stuck in the publish.





[jira] [Updated] (NIFI-13927) PublishGCPPubSub processor stop working and is stucked when using Record Oriented mode

2024-10-24 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13927:
--
Component/s: Extensions

> PublishGCPPubSub processor stop working and is stucked when using Record 
> Oriented mode
> --
>
> Key: NIFI-13927
> URL: https://issues.apache.org/jira/browse/NIFI-13927
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.26.0
>Reporter: Julien G.
>Priority: Major
> Attachments: EXAMPLE.json, nifi.tdump
>
>
> When using the PublishGCPPubSub processor in Record Oriented mode, the 
> processor gets stuck and stops processing FlowFiles. When the processor is 
> terminated, the thread isn't released.
> In FlowFile Oriented mode, it works fine.
> It seems to be linked to how many records are in the FlowFile: with a single 
> record in the FlowFile there seems to be no issue (or maybe the problem just 
> takes longer to appear).
> With more records in the FlowFile, it won't work. To make the processor work 
> again you either need to restart the node or remove and recreate the 
> processor. But that is only temporary, because at some point it will get 
> stuck again. Once stuck, the processor won't recover even after days (we had 
> a processor stuck for 3 days straight).
> Attached are a thread dump taken while the processor was running and stuck, 
> with ~10 threads not released, and an example FlowFile of 500 records that 
> instantly gets stuck in the publish.





[jira] [Updated] (NIFI-13910) Upgrade Tika to 3.0.0

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13910:
--
Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Tika to 3.0.0
> -
>
> Key: NIFI-13910
> URL: https://issues.apache.org/jira/browse/NIFI-13910
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Apache Tika dependencies should be upgraded to 
> [3.0.0|https://dist.apache.org/repos/dist/release/tika/3.0.0/CHANGES-3.0.0.txt]
>  to incorporate current major version features and fixes.
> Tika 3 requires Java 11 as the minimum version.
> Notable changes in Tika 3 include looking in the root class path, instead of 
> {{org/apache/tika/mime}}, for {{custom-mimetypes.xml}}, as described in 
> TIKA-4147.





[jira] [Updated] (NIFI-13903) NiFi Toolkit to include Controller Services when getting Process Groups

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13903:
--
Fix Version/s: (was: 2.0.0-M4)

> NiFi Toolkit to include Controller Services when getting Process Groups
> ---
>
> Key: NIFI-13903
> URL: https://issues.apache.org/jira/browse/NIFI-13903
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Priority: Minor
>
> It would be useful if NiFi Toolkit could (optionally) include the Controller 
> Services configured within a Process Group when calling {{nifi 
> get-process-group}}.
> This could be made optional with a command-line flag, if preferred over always 
> returning the extra component details.





[jira] [Updated] (NIFI-13905) NiFi Toolkit should allow creation of new-style external Registry Clients (e.g. GitHub/GitLab)

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13905:
--
Fix Version/s: (was: 2.0.0-M4)

> NiFi Toolkit should allow creation of new-style external Registry Clients 
> (e.g. GitHub/GitLab)
> --
>
> Key: NIFI-13905
> URL: https://issues.apache.org/jira/browse/NIFI-13905
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Priority: Minor
>
> It would be useful if the NiFi Toolkit was able to create non-{{NiFi 
> Registry}} Registry Clients, e.g. the new-style {{GitHub}} or {{GitLab}} 
> clients.





[jira] [Updated] (NIFI-13906) NiFi Toolkit should allow referencing Controller Services and Processors to be disable/stopped then re-enabled/started when updating Parameters

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13906:
--
Fix Version/s: (was: 2.0.0-M4)

> NiFi Toolkit should allow referencing Controller Services and Processors to 
> be disable/stopped then re-enabled/started when updating Parameters
> ---
>
> Key: NIFI-13906
> URL: https://issues.apache.org/jira/browse/NIFI-13906
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Priority: Major
>
> A useful addition to the {{set-param}} command for {{nifi}} in the NiFi 
> Toolkit would be if it:
> * stopped any referencing processors
> * disabled any referencing controller services
> ** first stopping any processors that reference those controller services
> * re-enabled the referencing controller services
> ** then restarted the processors that reference those controller services
> * restarted any referencing processors
> This could be a command-line option, and should only restart/re-enable 
> components that were actually stopped as part of the {{set-param}} operation.
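The stop/update/restart ordering described above can be illustrated with a minimal sketch. The function and its flat lists of referencing components are hypothetical; the real Toolkit would resolve referencing components through the NiFi REST API:

```python
# Hypothetical orchestration for a parameter update: stop referencing
# processors, disable referencing services, apply the update, then bring
# back only what this operation actually stopped or disabled.
def update_with_restart(referencing_processors, referencing_services,
                        running, enabled, do_update):
    stopped = [p for p in referencing_processors if p in running]
    disabled = [s for s in referencing_services if s in enabled]
    plan = [("stop", p) for p in stopped]
    plan += [("disable", s) for s in disabled]
    do_update()
    plan += [("enable", s) for s in disabled]   # re-enable services first
    plan += [("start", p) for p in stopped]     # then restart processors
    return plan

plan = update_with_restart(["P1", "P2"], ["CS1"], {"P1"}, {"CS1"},
                           lambda: None)
print(plan)  # [('stop', 'P1'), ('disable', 'CS1'), ('enable', 'CS1'), ('start', 'P1')]
```

Note that `P2`, which was not running to begin with, is never started: only components the operation itself stopped or disabled are brought back, matching the requirement above.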





[jira] [Updated] (NIFI-13924) NiFi CLI - delete-param has no effect

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13924:
--
Status: Patch Available  (was: Open)

> NiFi CLI - delete-param has no effect
> -
>
> Key: NIFI-13924
> URL: https://issues.apache.org/jira/browse/NIFI-13924
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While doing some testing for NIFI-13904, I noticed that the delete-param CLI 
> command has no effect. After some digging, this is due to NIFI-12898 and the 
> addition of asset support. In the CLI, the referenced asset needs to be set to 
> null to let the backend know that this is a deletion request.
> See: 
> https://github.com/apache/nifi/blame/main/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/dao/impl/StandardParameterContextDAO.java#L209
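The "explicit null means delete" contract can be illustrated with a small sketch. The merge function and parameter names below are illustrative assumptions, not NiFi's actual request model:

```python
# Hypothetical update semantics: in the request, a parameter mapped to an
# explicit null is a deletion request; a non-null value is an upsert; keys
# absent from the request are left unchanged. A CLI that omits the null
# (as delete-param did here) sends nothing the backend treats as a delete.
def apply_update(existing, update):
    result = dict(existing)
    for name, value in update.items():
        if value is None:
            result.pop(name, None)   # explicit null -> delete
        else:
            result[name] = value     # non-null -> create or replace
    return result

params = {"host": "localhost", "port": "8443"}
print(apply_update(params, {"port": None, "timeout": "30s"}))
# {'host': 'localhost', 'timeout': '30s'}
```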





[jira] [Updated] (NIFI-13924) NiFi CLI - delete-param has no effect

2024-10-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13924:
--
Description: 
While doing some testing for NIFI-13904, I noticed that the delete-param CLI 
command has no effect. After some digging, this is due to NIFI-12898 and the 
addition of asset support. In the CLI, the referenced asset needs to be set to 
null to let the backend know that this is a deletion request.

See: 
https://github.com/apache/nifi/blame/main/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/dao/impl/StandardParameterContextDAO.java#L209

  was:While doing some testing for NIFI-13904, I noticed that the delete-param 
CLI command has no effect. After some digging, this is due to NIFI-12898 and 
the addition of asset support. In the CLI, the referenced asset needs to be set to 
null to let the backend know that this is a deletion request.


> NiFi CLI - delete-param has no effect
> -
>
> Key: NIFI-13924
> URL: https://issues.apache.org/jira/browse/NIFI-13924
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> While doing some testing for NIFI-13904, I noticed that the delete-param CLI 
> command has no effect. After some digging, this is due to NIFI-12898 and the 
> addition of asset support. In the CLI, the referenced asset needs to be set to 
> null to let the backend know that this is a deletion request.
> See: 
> https://github.com/apache/nifi/blame/main/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/dao/impl/StandardParameterContextDAO.java#L209





[jira] [Created] (NIFI-13924) NiFi CLI - delete-param has no effect

2024-10-23 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13924:
-

 Summary: NiFi CLI - delete-param has no effect
 Key: NIFI-13924
 URL: https://issues.apache.org/jira/browse/NIFI-13924
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Pierre Villard
Assignee: Pierre Villard


While doing some testing for NIFI-13904, I noticed that the delete-param CLI 
command has no effect. After some digging, this is due to NIFI-12898 and the 
addition of asset support. In the CLI, the referenced asset needs to be set to 
null to let the backend know that this is a deletion request.





[jira] [Updated] (NIFI-13904) Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to become invalid

2024-10-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13904:
--
Status: Patch Available  (was: Open)

> Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to 
> become invalid
> ---
>
> Key: NIFI-13904
> URL: https://issues.apache.org/jira/browse/NIFI-13904
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Chris Sampson
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Linking Parameter Contexts using the NiFi Toolkit causes sensitive parameters 
> to become invalid in the dependent/parent Parameter Context.
> For example:
> * create-param-provider
> * fetch-params (including {{apply}})
> * create-param-context
> * set-param-value (including {{sensitive}} value params)
> * set-inherited-param-contexts (linking the PC from the Param Provider, to 
> the separately created PC)
> The {{sensitive}} params in the second PC will now be invalid.
> This might be because the Toolkit [resets the parameters on the "parent" 
> PC|https://github.com/apache/nifi/blob/main/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params/SetInheritedParamContexts.java#L88],
>  which maybe doesn't correctly get/set the {{sensitive}} param values





[jira] [Updated] (NIFI-13904) Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to become invalid

2024-10-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13904:
--
Fix Version/s: (was: 2.0.0-M4)

> Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to 
> become invalid
> ---
>
> Key: NIFI-13904
> URL: https://issues.apache.org/jira/browse/NIFI-13904
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Chris Sampson
>Assignee: Pierre Villard
>Priority: Major
>
> Linking Parameter Contexts using the NiFi Toolkit causes sensitive parameters 
> to become invalid in the dependent/parent Parameter Context.
> For example:
> * create-param-provider
> * fetch-params (including {{apply}})
> * create-param-context
> * set-param-value (including {{sensitive}} value params)
> * set-inherited-param-contexts (linking the PC from the Param Provider, to 
> the separately created PC)
> The {{sensitive}} params in the second PC will now be invalid.
> This might be because the Toolkit [resets the parameters on the "parent" 
> PC|https://github.com/apache/nifi/blob/main/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params/SetInheritedParamContexts.java#L88],
>  which maybe doesn't correctly get/set the {{sensitive}} param values





[jira] [Assigned] (NIFI-13904) Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to become invalid

2024-10-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-13904:
-

Assignee: Pierre Villard

> Linking Parameter Contexts using NiFi Toolkit causes sensitive parameters to 
> become invalid
> ---
>
> Key: NIFI-13904
> URL: https://issues.apache.org/jira/browse/NIFI-13904
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Chris Sampson
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0-M4
>
>
> Linking Parameter Contexts using the NiFi Toolkit causes sensitive parameters 
> to become invalid in the dependent/parent Parameter Context.
> For example:
> * create-param-provider
> * fetch-params (including {{apply}})
> * create-param-context
> * set-param-value (including {{sensitive}} value params)
> * set-inherited-param-contexts (linking the PC from the Param Provider, to 
> the separately created PC)
> The {{sensitive}} params in the second PC will now be invalid.
> This might be because the Toolkit [resets the parameters on the "parent" 
> PC|https://github.com/apache/nifi/blob/main/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params/SetInheritedParamContexts.java#L88],
>  which maybe doesn't correctly get/set the {{sensitive}} param values





[jira] [Resolved] (NIFI-13869) Enhance QuerySalesforceObject Processor to Support Querying Deleted Records

2024-10-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13869.
---
Fix Version/s: 2.1.0
   Resolution: Fixed

> Enhance QuerySalesforceObject Processor to Support Querying Deleted Records
> ---
>
> Key: NIFI-13869
> URL: https://issues.apache.org/jira/browse/NIFI-13869
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Nicolae Puica
>Assignee: Nicolae Puica
>Priority: Major
> Fix For: 2.1.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the {{QuerySalesforceObject}} processor in NiFi is capable of 
> querying active records from Salesforce. However, it lacks the functionality 
> to retrieve deleted records, which is critical for scenarios where we need to 
> track changes or removals of records, especially for audit and compliance 
> purposes. This improvement will add support for querying deleted records 
> (soft deletes in Salesforce), leveraging the {{isDeleted}} field in 
> Salesforce's SOQL queries.
> *Requirements:*
>  # *New Processor Property:*
>  ** Add a Boolean property: {{{}Include Deleted Records{}}}.
>  ** When set to {{{}true{}}}, the processor should include records where the 
> {{isDeleted}} field is {{{}true{}}}.
>  # *SOQL Query Modification:*
>  ** Modify the query structure to include the {{isDeleted}} field in the 
> WHERE clause when querying the object. For example:
>  *** {{{}SELECT Id, Name, IsDeleted FROM ContentVersion WHERE IsDeleted = 
> true{}}}.
>  # *Backward Compatibility:*
>  ** Ensure backward compatibility by keeping the default behavior of 
> excluding deleted records unless the new property is enabled.
> *Acceptance Criteria:*
>  * Users should be able to enable or disable the querying of deleted records 
> via a new processor property.
>  * The processor should return both deleted and non-deleted records when 
> appropriate.
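The SOQL modification described above can be sketched as follows. The helper name and flag are assumptions mirroring the proposed "Include Deleted Records" property, not the processor's actual code; note also that retrieving soft-deleted records through the Salesforce REST API generally requires the `queryAll` endpoint rather than plain `query`:

```python
# Hypothetical sketch of the proposed property: the default keeps today's
# behavior, while the flag adds the IsDeleted filter from the ticket's
# example query.
def build_soql(fields, sobject, include_deleted=False):
    query = "SELECT {} FROM {}".format(", ".join(fields), sobject)
    if include_deleted:
        query += " WHERE IsDeleted = true"
    return query

print(build_soql(["Id", "Name", "IsDeleted"], "ContentVersion",
                 include_deleted=True))
# SELECT Id, Name, IsDeleted FROM ContentVersion WHERE IsDeleted = true
```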





[jira] [Updated] (NIFI-13909) Refresh Project README

2024-10-21 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13909:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Refresh Project README
> --
>
> Key: NIFI-13909
> URL: https://issues.apache.org/jira/browse/NIFI-13909
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The main project README.md includes a number of historical links and examples 
> that should be updated or removed.
> Beyond updating existing information, project details should be streamlined 
> to focus on the basic steps for building and running the application. 
> Additional details for subprojects can be provided using links.
> Project badges should also be updated to include more prominent links to 
> resources for submitting issues and interacting with the community.





[jira] [Updated] (NIFI-13891) Update Documentation for Bootstrap Network Address

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13891:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update Documentation for Bootstrap Network Address
> --
>
> Key: NIFI-13891
> URL: https://issues.apache.org/jira/browse/NIFI-13891
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Documentation & Website
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With recent refactoring of the NiFi Bootstrap process, the 
> {{nifi.listener.bootstrap.port}} property is no longer used. References to 
> this property in configuration files and documentation should be removed.
> The new management.server.address property for bootstrap.conf should be added 
> to the Admin Guide documentation.





[jira] [Updated] (NIFI-13892) Suppress JVM Logging from Lucene Classes

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13892:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Suppress JVM Logging from Lucene Classes
> 
>
> Key: NIFI-13892
> URL: https://issues.apache.org/jira/browse/NIFI-13892
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Lucene libraries supporting the Provenance Repository log the following 
> messages related to experimental and native access settings at the JVM level.
> {noformat}
> INFO [Index Provenance Events-1] o.a.l.s.MemorySegmentIndexInputProvider 
> Using MemorySegmentIndexInput and native madvise support with Java 21 or 
> later; to disable start with 
> -Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false
> WARN [Index Provenance Events-1] o.a.l.i.v.VectorizationProvider Java vector 
> incubator module is not readable. For optimal vector performance, pass 
> '--add-modules jdk.incubator.vector' to enable Vector API.
> {noformat}
> These messages reflect parameters that should not need to be changed for 
> standard operation and can be confusing given that Lucene is an implementation 
> detail of the Provenance Repository. For this reason, these log messages 
> should be suppressed in the default Logback configuration.





[jira] [Updated] (NIFI-13889) Remove Protected Properties Abstraction

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13889:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove Protected Properties Abstraction
> ---
>
> Key: NIFI-13889
> URL: https://issues.apache.org/jira/browse/NIFI-13889
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{nifi-property-utils}} module includes the {{ProtectedProperties}} 
> interface along with supporting components for handling sensitive properties 
> in the nifi.properties and nifi-registry.properties files. With the removal 
> of support for encrypting properties using a root key in bootstrap.conf, this 
> interface and associated abstractions should be removed.





[jira] [Updated] (NIFI-13890) No referencing component for quoted parameter

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13890:
--
Status: Patch Available  (was: Open)

> No referencing component for quoted parameter
> -
>
> Key: NIFI-13890
> URL: https://issues.apache.org/jira/browse/NIFI-13890
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a parameter is referenced in a property using quotes:
> {code:java}
> ${#{'My Parameter'}:toLower():contains("foo")} {code}
> or just:
> {code:java}
> #{'My Parameter'}  {code}
> No referencing component will be found for the parameter, making it appear 
> as writable.
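For reference, a minimal sketch (hypothetical, not NiFi's actual Expression Language parser) of a pattern that accepts both quoted and unquoted parameter references, so both forms above resolve to the same parameter name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuotedParameterSketch {
    // Hypothetical pattern covering #{Name}, #{'Quoted Name'} and #{"Quoted Name"};
    // NiFi's real grammar lives in its Expression Language parser.
    private static final Pattern PARAM_REF = Pattern.compile(
            "#\\{(?:'([^']+)'|\"([^\"]+)\"|([A-Za-z0-9 ._-]+))\\}");

    static String extractName(String expression) {
        Matcher m = PARAM_REF.matcher(expression);
        if (!m.find()) {
            return null;
        }
        // Return whichever alternative (single-quoted, double-quoted, bare) matched.
        for (int g = 1; g <= m.groupCount(); g++) {
            if (m.group(g) != null) {
                return m.group(g);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(extractName("#{'My Parameter'}"));
        System.out.println(extractName("${#{'My Parameter'}:toLower():contains(\"foo\")}"));
    }
}
```

Both expressions print {{My Parameter}}, which is the behavior the reference-tracking logic would need to mirror.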





[jira] [Resolved] (NIFI-13888) Provenance TestEventIndexTask is Unstable

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13888.
---
Fix Version/s: 2.0.0-M5
   Resolution: Fixed

> Provenance TestEventIndexTask is Unstable
> -
>
> Key: NIFI-13888
> URL: https://issues.apache.org/jira/browse/NIFI-13888
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 2.0.0-M4
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The TestEventIndexTask has a test method 
> testIndexWriterCommittedWhenAppropriate that often fails in local 
> multi-thread builds on Ubuntu Linux. The test method asserts an expected 
> completion in 5 seconds. This works in some circumstances, but often fails by 
> a second or two on multi-threaded Maven builds due to resource contention.
> Increasing the timeout value to be more lenient should provide greater 
> stability.
> {noformat}
> [ERROR] Failures: 
> [ERROR]   TestEventIndexTask.testIndexWriterCommittedWhenAppropriate:47 
> execution exceeded timeout of 5000 ms by 1791 ms
> {noformat}
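The fix described above amounts to widening the assertion window. A standalone sketch (hypothetical timings and work, not the actual test code) of a lenient elapsed-time check:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class LenientTimeoutSketch {
    // Stand-in for the indexing work performed by the real test.
    static void doWork() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        // A lenient limit absorbs contention from parallel Maven builds;
        // the original 5-second limit was exceeded by ~1.8 seconds under load.
        final Duration limit = Duration.ofSeconds(15);
        final long startNanos = System.nanoTime();
        doWork();
        final Duration elapsed = Duration.ofNanos(System.nanoTime() - startNanos);
        if (elapsed.compareTo(limit) > 0) {
            throw new AssertionError("Exceeded " + limit + " by " + elapsed.minus(limit));
        }
        System.out.println("OK");
    }
}
```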





[jira] [Updated] (NIFI-13886) Upgrade Spring Framework to 6.1.14 along with common dependencies

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13886:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Spring Framework to 6.1.14 along with common dependencies
> -
>
> Key: NIFI-13886
> URL: https://issues.apache.org/jira/browse/NIFI-13886
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Spring Framework dependencies should be upgraded to 
> [6.1.14|https://github.com/spring-projects/spring-framework/releases/tag/v6.1.14]
>  along with other common dependencies, including cloud provider bundles, Google 
> Guava, and Apache SSHD.





[jira] [Created] (NIFI-13890) No referencing component for quoted parameter

2024-10-17 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13890:
-

 Summary: No referencing component for quoted parameter
 Key: NIFI-13890
 URL: https://issues.apache.org/jira/browse/NIFI-13890
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Pierre Villard
Assignee: Pierre Villard


If a parameter is referenced in a property using quotes:
{code:java}
${#{'My Parameter'}:toLower():contains("foo")} {code}
or just:
{code:java}
#{'My Parameter'}  {code}
No referencing component will be found for the parameter, making it appear as writable.





[jira] [Updated] (NIFI-13881) Upgrade JUnit to 5.11.2 along with Maven Plugins

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13881:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade JUnit to 5.11.2 along with Maven Plugins
> 
>
> Key: NIFI-13881
> URL: https://issues.apache.org/jira/browse/NIFI-13881
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JUnit dependencies should be upgraded to 
> [5.11.2|https://junit.org/junit5/docs/5.11.2/release-notes/] along with Maven 
> Surefire Plugin 3.5.1 and other Maven plugin dependencies.





[jira] [Updated] (NIFI-13596) Rename DistributedMapCacheServer and DistributedMapCacheClientService

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13596:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Rename DistributedMapCacheServer and DistributedMapCacheClientService
> -
>
> Key: NIFI-13596
> URL: https://issues.apache.org/jira/browse/NIFI-13596
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It's been mentioned many times (mailing lists, Slack, etc.) that the 
> DistributedMapCacheServer and DistributedMapCacheClientService components are 
> not well named, as there is nothing actually being distributed. This can be 
> extremely confusing.
> As part of the NiFi 2.0 release, we should leverage this opportunity to 
> rename those components. This is a breaking change, but I believe it is worth 
> it to provide a much better user experience. This is a bit similar to 
> NIFI-13454.





[jira] [Updated] (NIFI-13882) Upgrade Kotlin to 2.0.21

2024-10-17 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13882:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Kotlin to 2.0.21
> 
>
> Key: NIFI-13882
> URL: https://issues.apache.org/jira/browse/NIFI-13882
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The core framework uses JetBrains Xodus for Flow Configuration History, and 
> InvokeHTTP uses the OkHttp library, both of which depend on the Kotlin standard 
> library. Kotlin 2 is the latest major release version and provides 
> compatibility with existing libraries. These Kotlin dependencies should be 
> upgraded to [2.0.21|https://github.com/JetBrains/kotlin/releases/tag/v2.0.21] 
> to align with the current major version.





[jira] [Updated] (NIFI-13875) Upgrade Calcite to 1.38.0 and Curator to 5.7.1

2024-10-16 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13875:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Calcite to 1.38.0 and Curator to 5.7.1
> --
>
> Key: NIFI-13875
> URL: https://issues.apache.org/jira/browse/NIFI-13875
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Apache Calcite dependencies should be upgraded to 1.38.0 and Apache Curator 
> dependencies should be upgraded to 5.7.1.
> [Calcite 1.38.0|https://calcite.apache.org/docs/history.html#breaking-1-38-0] 
> includes changes around decimal precision handling that require minor 
> adjustments to some unit test expected values.





[jira] [Updated] (NIFI-10002) FetchFile can delete configuration files

2024-10-16 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-10002:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> FetchFile can delete configuration files
> 
>
> Key: NIFI-10002
> URL: https://issues.apache.org/jira/browse/NIFI-10002
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Extensions
>Affects Versions: 1.15.3, 2.0.0-M4
>Reporter: Filip Maretić
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a possible problem with the NiFi ListFile -> FetchFile 
> combination when using variables that do not exist in the ListFile processor.
> Steps to reproduce:
>  * configure the ListFile's input directory to some variable that does not 
> exist ${foo}
>  * connect the ListFile to FetchFile and configure FetchFile to delete files
>  * start the processors
>  * NiFi resolves ${foo} to an empty String or null (not sure) and starts to 
> fetch files from the NiFi working directory
>  * NiFi autodigests itself





[jira] [Commented] (NIFI-13864) scientific notation issue of invokeHTTP in returned json -- NiFi2.0-M4

2024-10-11 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888686#comment-17888686
 ] 

Pierre Villard commented on NIFI-13864:
---

Can you share more details?

Is the problematic scientific notation right after the InvokeHTTP? Or is it 
after another processor? If another processor, which one? Is this processor 
using record reader/writer? If yes, which record reader/writer? and with which 
configuration?

Ideally, if you could share a JSON flow definition made of something like 
GenerateFlowFile -> ConvertRecord that reproduces the issue, this would be 
helpful.

Thanks

> scientific notation issue of invokeHTTP in returned json -- NiFi2.0-M4
> --
>
> Key: NIFI-13864
> URL: https://issues.apache.org/jira/browse/NIFI-13864
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 2.0.0-M4
> Environment: Rocky9, NiFi2.0.0-M4
>Reporter: Tao Li
>Priority: Major
> Attachments: image (2).png, image (3).png
>
>
> It seems M4 still has the scientific notation issue. InvokeHTTP returned JSON 
> from an API, and a number in the JSON was converted to e-notation by the 
> processor.
> We tried Swagger and it returned the correct value: image(2).png shows 
> InvokeHTTP, image(3).png shows Swagger.
> Is there any workaround for this issue?





[jira] [Commented] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-08 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887642#comment-17887642
 ] 

Pierre Villard commented on NIFI-13843:
---

Good catch [~dstiegli1] - this is definitely the same thing.

> Unknown fields not dropped by JSON Writer as expected by specified schema
> -
>
> Key: NIFI-13843
> URL: https://issues.apache.org/jira/browse/NIFI-13843
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Consider the following use case:
>  * GFF Processor, generating a JSON with 3 fields: a, b, and c
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** Both reader and writer are configured with a schema only specifying 
> fields a and b
> The expected result is a JSON that only contains fields a and b.
> We're following the below path in the code:
>  * AbstractRecordProcessor (L131)
> {code:java}
> Record firstRecord = reader.nextRecord(); {code}
> In this case, the default method for nextRecord() is defined in RecordReader 
> (L50)
> {code:java}
> default Record nextRecord() throws IOException, MalformedRecordException {
> return nextRecord(true, false);
> } {code}
> where we are NOT dropping the unknown fields (Java doc needs some fixing here 
> as it is saying the opposite)
> We get to 
> {code:java}
> writer.write(firstRecord); {code}
> which gets us to
>  * WriteJsonResult (L206)
> Here, we do a check
> {code:java}
> isUseSerializeForm(record, writeSchema) {code}
> which currently returns true when it should not. Because of this we write the 
> serialised form which ignores the writer schema.
> In this method isUseSerializeForm(), we do check
> {code:java}
> record.getSchema().equals(writeSchema) {code}
> But at this point record.getSchema() returns the schema defined in the reader 
> which is equal to the one defined in the writer - even though the record has 
> additional fields compared to the defined schema.
> The suggested fix is to also add a check on
> {code:java}
> record.isDropUnknownFields() {code}
> If dropUnknownFields is false, then we do not use the serialised form.
> While this does solve the issue, I'm a bit conflicted on the current 
> approach. Not only could this have a performance impact (we are likely not 
> going to use the serialized form as often), but it also feels like the default 
> should be to ignore the unknown fields when reading the record.
> If we consider the below scenario:
>  * GFF Processor, generating a JSON with 3 fields: {{{}a{}}}, {{{}b{}}}, and 
> {{c}}
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** JSON reader with a schema only specifying fields {{a}} and {{b}}
>  ** JSON writer with a schema specifying fields {{{}a{}}}, {{{}b{}}}, and 
> {{c}} ({{{}c{}}} defaulting to {{{}null{}}})
> It feels like the expected result should be a JSON with the field {{c}} and a 
> {{null}} value, because the reader would drop the field when reading the JSON 
> and converting it into a record and pass it to the writer.
> If we agree on the above, then it may be easier to just override 
> {{nextRecord()}} in {{AbstractJsonRowRecordReader}} and default to 
> {{{}nextRecord(true, true){}}}.
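The proposed condition can be modeled in isolation. A minimal sketch (simplified stand-ins, not NiFi's actual Record class or the real isUseSerializeForm method in WriteJsonResult) of the extra dropUnknownFields check:

```java
// Simplified stand-ins for NiFi's Record and write-schema types; the real
// check lives in WriteJsonResult and its isUseSerializeForm method.
public class SerializedFormCheckSketch {

    record SimpleRecord(String schema, boolean dropUnknownFields, boolean hasSerializedForm) { }

    static boolean useSerializedForm(SimpleRecord rec, String writeSchema) {
        return rec.hasSerializedForm()
                && rec.schema().equals(writeSchema)
                && rec.dropUnknownFields(); // the proposed additional condition
    }

    public static void main(String[] args) {
        // Reader kept unknown fields: must not reuse the serialized form,
        // so the writer schema gets applied.
        SimpleRecord keepsUnknowns = new SimpleRecord("a,b", false, true);
        // Reader dropped unknown fields: serialized form already matches the schema.
        SimpleRecord dropsUnknowns = new SimpleRecord("a,b", true, true);
        System.out.println(useSerializedForm(keepsUnknowns, "a,b"));
        System.out.println(useSerializedForm(dropsUnknowns, "a,b"));
    }
}
```

The first case prints false (write through the schema), the second true (serialized form is safe to reuse).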





[jira] [Updated] (NIFI-13849) Convert property to parameter tries to delete inherited parameters

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13849:
--
Description: 
Steps to reproduce:
 * Create a Parameter Context PCi with parameter A
 * Create a Parameter Context PC that inherits PCi
 * Create a Process Group that uses PC
 * In the process group, create a flow GFF -> InvokeHTTP, configure InvokeHTTP 
to use parameter A, and start InvokeHTTP processor (add a dummy endpoint and 
autoterminate relationships). In GFF, add a dynamic property foo => foo. Then 
click the Convert to Parameter option and click OK.

At this point it will fail saying that the parameter A cannot be deleted 
because InvokeHTTP is running.

*Important note: if InvokeHTTP is NOT running, the conversion of the property 
value into a parameter works as expected and nothing gets deleted in the 
inherited parameter contexts.*

Debugging shows that this is because the ParameterContextDTO object sent in the 
update request specified null for the inherited parameter contexts and it makes 
the current logic think that all of the parameters from inherited parameter 
contexts should be deleted.

When adding the parameter directly in the parameter context view, the same 
ParameterContextDTO object does correctly reference the inherited parameter 
context.

It feels like a fix in the UI is needed so that the update request includes the 
references to the inherited parameter contexts.

  was:
Steps to reproduce:
 * Create a Parameter Context PCi with parameter A
 * Create a Parameter Context PC that inherits PCi
 * Create a Process Group that uses PC
 * In the process group, create a flow GFF -> InvokeHTTP, configure InvokeHTTP 
to use parameter A, and start InvokeHTTP processor (add a dummy endpoint and 
autoterminate relationships). In GFF, add a dynamic property foo => foo. Then 
click the Convert to Parameter option and click OK.

At this point it will fail saying that the parameter A cannot be deleted 
because InvokeHTTP is running.

Debugging shows that this is because the ParameterContextDTO object sent in the 
update request specified null for the inherited parameter contexts and it makes 
the current logic think that all of the parameters from inherited parameter 
contexts should be deleted.

When adding the parameter directly in the parameter context view, the same 
ParameterContextDTO object does correctly reference the inherited parameter 
context.

It feels like a fix in the UI is needed so that the update request includes the 
references to the inherited parameter contexts.


> Convert property to parameter tries to delete inherited parameters
> --
>
> Key: NIFI-13849
> URL: https://issues.apache.org/jira/browse/NIFI-13849
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 2.0.0-M4
>Reporter: Pierre Villard
>Priority: Major
>
> Steps to reproduce:
>  * Create a Parameter Context PCi with parameter A
>  * Create a Parameter Context PC that inherits PCi
>  * Create a Process Group that uses PC
>  * In the process group, create a flow GFF -> InvokeHTTP, configure 
> InvokeHTTP to use parameter A, and start InvokeHTTP processor (add a dummy 
> endpoint and autoterminate relationships). In GFF, add a dynamic property foo 
> => foo. Then click the Convert to Parameter option and click OK.
> At this point it will fail saying that the parameter A cannot be deleted 
> because InvokeHTTP is running.
> *Important note: if InvokeHTTP is NOT running, the conversion of the property 
> value into a parameter works as expected and nothing gets deleted in the 
> inherited parameter contexts.*
> Debugging shows that this is because the ParameterContextDTO object sent in 
> the update request specified null for the inherited parameter contexts and it 
> makes the current logic think that all of the parameters from inherited 
> parameter contexts should be deleted.
> When adding the parameter directly in the parameter context view, the same 
> ParameterContextDTO object does correctly reference the inherited parameter 
> context.
> It feels like a fix in the UI is needed so that the update request includes 
> the references to the inherited parameter contexts.





[jira] [Updated] (NIFI-13849) Convert property to parameter tries to delete inherited parameters

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13849:
--
Priority: Minor  (was: Major)

> Convert property to parameter tries to delete inherited parameters
> --
>
> Key: NIFI-13849
> URL: https://issues.apache.org/jira/browse/NIFI-13849
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 2.0.0-M4
>Reporter: Pierre Villard
>Priority: Minor
>
> Steps to reproduce:
>  * Create a Parameter Context PCi with parameter A
>  * Create a Parameter Context PC that inherits PCi
>  * Create a Process Group that uses PC
>  * In the process group, create a flow GFF -> InvokeHTTP, configure 
> InvokeHTTP to use parameter A, and start InvokeHTTP processor (add a dummy 
> endpoint and autoterminate relationships). In GFF, add a dynamic property foo 
> => foo. Then click the Convert to Parameter option and click OK.
> At this point it will fail saying that the parameter A cannot be deleted 
> because InvokeHTTP is running.
> *Important note: if InvokeHTTP is NOT running, the conversion of the property 
> value into a parameter works as expected and nothing gets deleted in the 
> inherited parameter contexts.*
> Debugging shows that this is because the ParameterContextDTO object sent in 
> the update request specified null for the inherited parameter contexts and it 
> makes the current logic think that all of the parameters from inherited 
> parameter contexts should be deleted.
> When adding the parameter directly in the parameter context view, the same 
> ParameterContextDTO object does correctly reference the inherited parameter 
> context.
> It feels like a fix in the UI is needed so that the update request includes 
> the references to the inherited parameter contexts.





[jira] [Updated] (NIFI-13849) Convert property to parameter tries to delete inherited parameters

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13849:
--
Summary: Convert property to parameter tries to delete inherited parameters 
 (was: Convert property to parameter deletes inherited parameters)

> Convert property to parameter tries to delete inherited parameters
> --
>
> Key: NIFI-13849
> URL: https://issues.apache.org/jira/browse/NIFI-13849
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 2.0.0-M4
>Reporter: Pierre Villard
>Priority: Major
>
> Steps to reproduce:
>  * Create a Parameter Context PCi with parameter A
>  * Create a Parameter Context PC that inherits PCi
>  * Create a Process Group that uses PC
>  * In the process group, create a flow GFF -> InvokeHTTP, configure 
> InvokeHTTP to use parameter A, and start InvokeHTTP processor (add a dummy 
> endpoint and autoterminate relationships). In GFF, add a dynamic property foo 
> => foo. Then click the Convert to Parameter option and click OK.
> At this point it will fail saying that the parameter A cannot be deleted 
> because InvokeHTTP is running.
> Debugging shows that this is because the ParameterContextDTO object sent in 
> the update request specified null for the inherited parameter contexts and it 
> makes the current logic think that all of the parameters from inherited 
> parameter contexts should be deleted.
> When adding the parameter directly in the parameter context view, the same 
> ParameterContextDTO object does correctly reference the inherited parameter 
> context.
> It feels like a fix in the UI is needed so that the update request includes 
> the references to the inherited parameter contexts.





[jira] [Created] (NIFI-13849) Convert property to parameter deletes inherited parameters

2024-10-08 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13849:
-

 Summary: Convert property to parameter deletes inherited parameters
 Key: NIFI-13849
 URL: https://issues.apache.org/jira/browse/NIFI-13849
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 2.0.0-M4
Reporter: Pierre Villard


Steps to reproduce:
 * Create a Parameter Context PCi with parameter A
 * Create a Parameter Context PC that inherits PCi
 * Create a Process Group that uses PC
 * In the process group, create a flow GFF -> InvokeHTTP, configure InvokeHTTP 
to use parameter A, and start InvokeHTTP processor (add a dummy endpoint and 
autoterminate relationships). In GFF, add a dynamic property foo => foo. Then 
click the Convert to Parameter option and click OK.

At this point it will fail saying that the parameter A cannot be deleted 
because InvokeHTTP is running.

Debugging shows that this is because the ParameterContextDTO object sent in the 
update request specified null for the inherited parameter contexts and it makes 
the current logic think that all of the parameters from inherited parameter 
contexts should be deleted.

When adding the parameter directly in the parameter context view, the same 
ParameterContextDTO object does correctly reference the inherited parameter 
context.

It feels like a fix in the UI is needed so that the update request includes the 
references to the inherited parameter contexts.





[jira] [Updated] (NIFI-13848) Migrate AWSCredentialsProviderControllerService's Proxy properties to ProxyConfigurationService

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13848:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Migrate AWSCredentialsProviderControllerService's Proxy properties to 
> ProxyConfigurationService
> ---
>
> Key: NIFI-13848
> URL: https://issues.apache.org/jira/browse/NIFI-13848
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> Get rid of the obsolete component level proxy host/port properties in 
> AWSCredentialsProviderControllerService and add migration code to convert 
> them to ProxyConfigurationService which also supports proxy username/password 
> for authentication.





[jira] [Updated] (NIFI-13335) XMLReader drops values from name-value content if values are mixture of strings and numbers

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13335:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> XMLReader drops values from name-value content if values are mixture of 
> strings and numbers
> ---
>
> Key: NIFI-13335
> URL: https://issues.apache.org/jira/browse/NIFI-13335
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.24.0, 1.27.0, 2.0.0-M4
> Environment: Docker
>Reporter: Stephen Jeffrey Hindmarch
>Assignee: Daniel Stieglitz
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> This is similar to NIFI-13334, but does not require an array of records to 
> demonstrate.
> If you create an XMLReader service and set the following:
>  * Parse XML Attributes = true
>  * Expect Records as Arrays = false
>  * Field Name for Content = Value
> Then use the reader in a ConvertRecord processor with a JSONRecordSetWriter
> When parsing a flow file such as
> {noformat}
> <Event Type="foo">
>   <System>
>     <EventID>0x0001</EventID>
>   </System>
>   <UserData>
>     <Data Name="Param1">String1</Data>
>     <Data Name="Param2">String2</Data>
>     <Data Name="Param3">String3</Data>
>   </UserData>
> </Event>
> {noformat}
> then the data tags all get parsed with the correct values.
> {noformat}
> [ {
>   "Type" : "foo",
>   "System" : {
>     "EventID" : "0x0001"
>   },
>   "UserData" : {
>     "Data" : [ {
>         "Name" : "Param1",
>         "Value" : "String1"
>     }, {
>         "Name" : "Param2",
>         "Value" : "String2"
>     }, {
>         "Name" : "Param3",
>         "Value" : "String3"
>     } ]
>   }
> } ]{noformat}
> But if one of those data tags has a numeric value then all of the values are 
> dropped and are replaced with null. For example
> {noformat}
> <Event Type="foo">
>   <System>
>     <EventID>0x0001</EventID>
>   </System>
>   <UserData>
>     <Data Name="Param1">String1</Data>
>     <Data Name="Param2">2</Data>
>     <Data Name="Param3">String3</Data>
>   </UserData>
> </Event>
> {noformat}
> parses to
> {noformat}
> [ {
>   "Type" : "foo",
>   "System" : {
>     "EventID" : "0x0001"
>   },
>   "UserData" : {
>     "Data" : [ {
>         "Name" : "Param1",
>         "Value" : null
>     }, {
>         "Name" : "Param2",
>         "Value" : null
>     }, {
>         "Name" : "Param3",
>         "Value" : null
>     } ]
>   }
> } ]{noformat}
> and all of the tag data is lost.





[jira] [Updated] (NIFI-13841) Restore proxy support in AWS PutSNS processor

2024-10-08 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13841:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Restore proxy support in AWS PutSNS processor
> -
>
> Key: NIFI-13841
> URL: https://issues.apache.org/jira/browse/NIFI-13841
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Minor
> Fix For: 2.0.0-M5
>
>
> Similar to other AWS processors, PutSNS supports proxy connections in NiFi 
> 1.x. This seems to have been lost in NIFI-12220 and is not available in NiFi 2. 
> The old processor-level proxy configuration properties were deleted, but no 
> Proxy Configuration Service was added to this processor.





[jira] [Updated] (NIFI-13847) Small typos in ReplaceText

2024-10-07 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13847:
--
Status: Patch Available  (was: Open)

> Small typos in ReplaceText
> --
>
> Key: NIFI-13847
> URL: https://issues.apache.org/jira/browse/NIFI-13847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>
> Minor typos in ReplaceText processor for PREPEND_TEXT and APPEND_TEXT 
> properties.





[jira] [Created] (NIFI-13847) Small typos in ReplaceText

2024-10-07 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13847:
-

 Summary: Small typos in ReplaceText
 Key: NIFI-13847
 URL: https://issues.apache.org/jira/browse/NIFI-13847
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


Minor typos in ReplaceText processor for PREPEND_TEXT and APPEND_TEXT 
properties.





[jira] [Updated] (NIFI-13840) AWS v2 processors fail to configure proxy

2024-10-07 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13840:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> AWS v2 processors fail to configure proxy
> -
>
> Key: NIFI-13840
> URL: https://issues.apache.org/jira/browse/NIFI-13840
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> AWS processors using the v2 client library (SQS, Lambda, DynamoDB, Kinesis) 
> fail to set up proxy access properly. The proxy endpoint URI needs to be 
> constructed and passed to the AWS client. Although the 
> [javadoc|https://github.com/aws/aws-sdk-java-v2/blob/13f2c813e861a510e6a19c4ded047bf845f96ec0/http-clients/apache-client/src/main/java/software/amazon/awssdk/http/apache/ProxyConfiguration.java#L234-L238]
>  says "the endpoint is limited to a host and port", the scheme must also be 
> provided in the URI. Otherwise it leads to an error (in the case of an IP) or 
> the proxy config is just ignored (in the case of localhost).
> {code:java}
> java.lang.IllegalArgumentException: Illegal character in scheme name at index 
> 0: 192.168.0.10:8080
>     at java.base/java.net.URI.create(URI.java:932)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor$1.configureProxy(AbstractAwsSyncProcessor.java:94)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureSdkHttpClient(AbstractAwsProcessor.java:309)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.createSdkHttpClient(AbstractAwsSyncProcessor.java:104)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.configureHttpClient(AbstractAwsSyncProcessor.java:71)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureClientBuilder(AbstractAwsProcessor.java:275)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureClientBuilder(AbstractAwsProcessor.java:268)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.createClient(AbstractAwsSyncProcessor.java:65){code}
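
The scheme requirement described above can be reproduced with plain {{java.net.URI}}, independently of the AWS SDK. This is a minimal illustrative sketch (the class name is made up for the demo):

```java
import java.net.URI;

public class ProxyUriDemo {
    public static void main(String[] args) {
        // Numeric host without a scheme: "192.168.0.10" is parsed as a URI
        // scheme, and a digit is illegal as the first scheme character,
        // producing the IllegalArgumentException quoted in the stack trace.
        boolean threw = false;
        try {
            URI.create("192.168.0.10:8080");
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected IllegalArgumentException");

        // "localhost" is a syntactically valid scheme name, so parsing
        // succeeds, but the host is null -> the proxy config is silently ignored.
        URI noScheme = URI.create("localhost:8080");
        if (noScheme.getHost() != null) throw new AssertionError("host should be null");

        // With an explicit scheme, host and port resolve as intended.
        URI withScheme = URI.create("http://192.168.0.10:8080");
        if (!"192.168.0.10".equals(withScheme.getHost()) || withScheme.getPort() != 8080) {
            throw new AssertionError("unexpected host/port");
        }
        System.out.println("URI parsing behaves as described");
    }
}
```

This is why the fix must prepend a scheme when constructing the proxy endpoint URI before passing it to the AWS client builder.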



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13842) AWS v2 processors fail to configure truststore

2024-10-07 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13842:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> AWS v2 processors fail to configure truststore
> --
>
> Key: NIFI-13842
> URL: https://issues.apache.org/jira/browse/NIFI-13842
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> AWS processors using the v2 client library (SQS, SNS, DynamoDB) fail to set 
> up a custom truststore configured via an SSL Context Service. 
> AbstractAwsProcessor expects both a keystore and a truststore, but only one 
> of them is mandatory in the SSL Context Service.
> {code:java}
> java.lang.NullPointerException: null
>     at java.base/java.util.Objects.requireNonNull(Objects.java:233)
>     at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:296)
>     at java.base/java.nio.file.Path.of(Path.java:148)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureSdkHttpClient(AbstractAwsProcessor.java:295)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.createSdkHttpClient(AbstractAwsSyncProcessor.java:104)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.configureHttpClient(AbstractAwsSyncProcessor.java:71)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureClientBuilder(AbstractAwsProcessor.java:275)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsProcessor.configureClientBuilder(AbstractAwsProcessor.java:268)
>     at 
> org.apache.nifi.processors.aws.v2.AbstractAwsSyncProcessor.createClient(AbstractAwsSyncProcessor.java:65)
>  {code}
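
The NPE pattern above can be reproduced with the JDK alone: {{Path.of(null)}} throws through {{Objects.requireNonNull}}, exactly as in the stack trace. The hypothetical {{storePath}} helper below sketches the kind of guard the fix needs — only build a {{Path}} for the store that was actually configured (names are illustrative, not NiFi code):

```java
import java.nio.file.Path;
import java.util.Optional;

public class StorePathDemo {
    // Hypothetical helper: returns a Path only when a filename was configured.
    static Optional<Path> storePath(String filename) {
        return Optional.ofNullable(filename).map(Path::of);
    }

    public static void main(String[] args) {
        // Unguarded: Path.of(null) throws NullPointerException via
        // Objects.requireNonNull, matching the stack trace above.
        boolean threw = false;
        try {
            String keystoreFilename = null; // only a truststore is configured
            Path.of(keystoreFilename);
        } catch (NullPointerException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected NullPointerException");

        // Guarded: an absent keystore is simply skipped.
        if (storePath(null).isPresent()) throw new AssertionError("should be empty");
        if (!storePath("/tmp/truststore.p12").isPresent()) throw new AssertionError("should be present");
        System.out.println("guarded store configuration works");
    }
}
```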



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-04 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13843:
--
Description: 
Consider the following use case:
 * GFF Processor, generating a JSON with 3 fields: a, b, and c
 * ConvertRecord with JSON Reader / JSON Writer
 ** Both reader and writer are configured with a schema only specifying fields 
a and b

The expected result is a JSON that only contains fields a and b.

We're following the below path in the code:
 * AbstractRecordProcessor (L131)

{code:java}
Record firstRecord = reader.nextRecord(); {code}
In this case, the default method for nextRecord() is defined in RecordReader 
(L50)
{code:java}
default Record nextRecord() throws IOException, MalformedRecordException {
return nextRecord(true, false);
} {code}
where we are NOT dropping the unknown fields (the Javadoc needs fixing here as 
it says the opposite).

We get to 
{code:java}
writer.write(firstRecord); {code}
which gets us to
 * WriteJsonResult (L206)

Here, we do a check
{code:java}
isUseSerializeForm(record, writeSchema) {code}
which currently returns true when it should not. Because of this, we write the 
serialised form, which ignores the writer schema.

In this method isUseSerializeForm(), we do check
{code:java}
record.getSchema().equals(writeSchema) {code}
But at this point record.getSchema() returns the schema defined in the reader 
which is equal to the one defined in the writer - even though the record has 
additional fields compared to the defined schema.

The suggested fix is to also add a check on
{code:java}
record.isDropUnknownFields() {code}
If dropUnknownFields is false, then we do not use the serialised form.

While this does solve the issue, I'm a bit conflicted about the current 
approach. Not only could this have a performance impact (we would likely use 
the serialized form less often), but it also feels like the default should be 
to ignore unknown fields when reading the record.

If we consider the below scenario:
 * GFF Processor, generating a JSON with 3 fields: {{{}a{}}}, {{{}b{}}}, and 
{{c}}
 * ConvertRecord with JSON Reader / JSON Writer
 ** JSON reader with a schema only specifying fields {{a}} and {{b}}
 ** JSON writer with a schema specifying fields {{{}a{}}}, {{{}b{}}}, and {{c}} 
({{{}c{}}} defaulting to {{{}null{}}})

It feels like the expected result should be a JSON with the field {{c}} and a 
{{null}} value, because the reader would drop the field when reading the JSON 
and converting it into a record, and then pass the record to the writer.

If we agree on the above, then it may be easier to just override 
{{nextRecord()}} in {{AbstractJsonRowRecordReader}} and default to 
{{{}nextRecord(true, true){}}}.
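
The suggested condition can be sketched with a minimal stand-in for NiFi's Record API. The {{FakeRecord}} stub below is purely illustrative (method names mirror the real API, but this is not NiFi code): the serialized form is trusted only when the schemas match AND unknown fields were dropped on read.

```java
public class SerializedFormCheckDemo {
    // Minimal stand-in for the relevant parts of NiFi's Record API.
    record FakeRecord(String schema, boolean dropUnknownFields) {
        String getSchema() { return schema; }
        boolean isDropUnknownFields() { return dropUnknownFields; }
    }

    // Sketch of the proposed check: if unknown fields were NOT dropped, the
    // record may carry fields outside the write schema even when the schemas
    // are equal, so the serialized form must not be reused.
    static boolean useSerializedForm(FakeRecord rec, String writeSchema) {
        return rec.getSchema().equals(writeSchema) && rec.isDropUnknownFields();
    }

    public static void main(String[] args) {
        String schema = "a,b";
        // Schemas equal but unknown fields kept: fall back to field-by-field writing.
        if (useSerializedForm(new FakeRecord(schema, false), schema))
            throw new AssertionError("should not use serialized form");
        // Schemas equal and unknown fields dropped: serialized form is safe.
        if (!useSerializedForm(new FakeRecord(schema, true), schema))
            throw new AssertionError("serialized form should be usable");
        System.out.println("proposed check behaves as described");
    }
}
```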

  was:
Consider the following use case:
 * GFF Processor, generating a JSON with 3 fields: a, b, and c
 * ConvertRecord with JSON Reader / JSON Writer
 ** Both reader and writer are configured with a schema only specifying fields 
a and b

The expected result is a JSON that only contains fields a and b.

We're following the below path in the code:
 * AbstractRecordProcessor (L131)

{code:java}
Record firstRecord = reader.nextRecord(); {code}
In this case, the default method for nextRecord() is defined in RecordReader 
(L50)
{code:java}
default Record nextRecord() throws IOException, MalformedRecordException {
return nextRecord(true, false);
} {code}
where we are NOT dropping the unknown fields (Java doc needs some fixing here 
as it is saying the opposite)

We get to 
{code:java}
writer.write(firstRecord); {code}
which gets us to
 * WriteJsonResult (L206)

Here, we do a check
{code:java}
isUseSerializeForm(record, writeSchema) {code}
which currently returns true when it should not. Because of this we write the 
serialised form which ignores the writer schema.

In this method isUseSerializeForm(), we do check
{code:java}
record.getSchema().equals(writeSchema) {code}
But at this point record.getSchema() returns the schema defined in the reader 
which is equal to the one defined in the writer - even though the record has 
additional fields compared to the defined schema.

The suggested fix is to also add a check on
{code:java}
record.isDropUnknownFields() {code}
If dropUnknownFields is false, then we do not use the serialised form.

 


> Unknown fields not dropped by JSON Writer as expected by specified schema
> -
>
> Key: NIFI-13843
> URL: https://issues.apache.org/jira/browse/NIFI-13843
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Consider the following use case:
>  * GFF Processor, generating a JSON with 3 fields: a, b, and c
>  * Conve

[jira] [Updated] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-04 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13843:
--
Status: Patch Available  (was: Open)

> Unknown fields not dropped by JSON Writer as expected by specified schema
> -
>
> Key: NIFI-13843
> URL: https://issues.apache.org/jira/browse/NIFI-13843
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 2.0.0-M4, 1.27.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> Consider the following use case:
>  * GFF Processor, generating a JSON with 3 fields: a, b, and c
>  * ConvertRecord with JSON Reader / JSON Writer
>  ** Both reader and writer are configured with a schema only specifying 
> fields a and b
> The expected result is a JSON that only contains fields a and b.
> We're following the below path in the code:
>  * AbstractRecordProcessor (L131)
> {code:java}
> Record firstRecord = reader.nextRecord(); {code}
> In this case, the default method for nextRecord() is defined in RecordReader 
> (L50)
> {code:java}
> default Record nextRecord() throws IOException, MalformedRecordException {
> return nextRecord(true, false);
> } {code}
> where we are NOT dropping the unknown fields (Java doc needs some fixing here 
> as it is saying the opposite)
> We get to 
> {code:java}
> writer.write(firstRecord); {code}
> which gets us to
>  * WriteJsonResult (L206)
> Here, we do a check
> {code:java}
> isUseSerializeForm(record, writeSchema) {code}
> which currently returns true when it should not. Because of this we write the 
> serialised form which ignores the writer schema.
> In this method isUseSerializeForm(), we do check
> {code:java}
> record.getSchema().equals(writeSchema) {code}
> But at this point record.getSchema() returns the schema defined in the reader 
> which is equal to the one defined in the writer - even though the record has 
> additional fields compared to the defined schema.
> The suggested fix is to also add a check on
> {code:java}
> record.isDropUnknownFields() {code}
> If dropUnknownFields is false, then we do not use the serialised form.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13843) Unknown fields not dropped by JSON Writer as expected by specified schema

2024-10-04 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13843:
-

 Summary: Unknown fields not dropped by JSON Writer as expected by 
specified schema
 Key: NIFI-13843
 URL: https://issues.apache.org/jira/browse/NIFI-13843
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 2.0.0-M4, 1.27.0
Reporter: Pierre Villard
Assignee: Pierre Villard


Consider the following use case:
 * GFF Processor, generating a JSON with 3 fields: a, b, and c
 * ConvertRecord with JSON Reader / JSON Writer
 ** Both reader and writer are configured with a schema only specifying fields 
a and b

The expected result is a JSON that only contains fields a and b.

We're following the below path in the code:
 * AbstractRecordProcessor (L131)

{code:java}
Record firstRecord = reader.nextRecord(); {code}
In this case, the default method for nextRecord() is defined in RecordReader 
(L50)
{code:java}
default Record nextRecord() throws IOException, MalformedRecordException {
return nextRecord(true, false);
} {code}
where we are NOT dropping the unknown fields (Java doc needs some fixing here 
as it is saying the opposite)

We get to 
{code:java}
writer.write(firstRecord); {code}
which gets us to
 * WriteJsonResult (L206)

Here, we do a check
{code:java}
isUseSerializeForm(record, writeSchema) {code}
which currently returns true when it should not. Because of this we write the 
serialised form which ignores the writer schema.

In this method isUseSerializeForm(), we do check
{code:java}
record.getSchema().equals(writeSchema) {code}
But at this point record.getSchema() returns the schema defined in the reader 
which is equal to the one defined in the writer - even though the record has 
additional fields compared to the defined schema.

The suggested fix is to also add a check on
{code:java}
record.isDropUnknownFields() {code}
If dropUnknownFields is false, then we do not use the serialised form.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13826) Bump various dependencies including jetty, netty, jackson, aws and more

2024-10-02 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13826:
--
Status: Patch Available  (was: In Progress)

> Bump various dependencies including jetty, netty, jackson, aws and more
> ---
>
> Key: NIFI-13826
> URL: https://issues.apache.org/jira/browse/NIFI-13826
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> * Jackson from 2.17.2 to 2.18.0 - 
> [https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
>  * zstd-jni from 1.5.6-5 to 1.5.6-6
>  * gson from 2.10.1 to 2.11.0 - 
> [https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
>  * okio from 3.9.0 to 3.9.1
>  * io.fabric8 from 6.13.1 to 6.13.4
>  * netty from 4.1.113.Final to 4.1.114.Final
>  * swagger-annotations from 2.2.23 to 2.2.24
>  * avro from 1.11.3 to 1.11.4
>  * commons-lang3 from 3.16.0 to 3.17.0
>  * log4j from 2.24.0 to 2.24.1
>  * jetty from 12.0.13 to 12.0.14
>  * junit-bom from 5.10.3 to 5.10.4
>  * junit-platform-commons from 1.10.3 to 1.10.4
>  * mockito from 5.13.0 to 5.14.1 - 
> [https://github.com/mockito/mockito/releases]
>  * testcontainers from 1.20.1 to 1.20.2
>  * snakeyaml from 2.2 to 2.3
>  * AWS SDK v2 from 2.28.4 to 2.28.13
>  * flyway from 10.18.0 to 10.18.2
>  * jline from 3.26.3 to 3.27.0
>  * neo4j driver from 5.24.0 to 5.25.0
>  * maxmind from 3.1.0 to 3.1.1
>  * geoip2 from 4.2.0 to 4.2.1
>  * amqp-client from 5.21.0 to 5.22.0
>  * commons-csv from 1.11.0 to 1.12.0
>  * splunk from 1.9.4 to 1.9.5
>  * lucene from 9.11.1 to 9.12.0
>  * google bom from 26.46.0 to 26.47.0
>  * azure bom from 1.2.26 to 1.2.28
>  * msal4j from 1.16.1 to 1.17.1
>  * json-schema-validator from 1.5.1 to 1.5.2
>  * checker-qual from 3.45.0 to 3.47.0
>  * checkstyle from 10.16.0 to 10.18.2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13826) Bump various dependencies including jetty, netty, jackson, aws and more

2024-10-02 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13826:
--
Description: 
* Jackson from 2.17.2 to 2.18.0 - 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
 * zstd-jni from 1.5.6-5 to 1.5.6-6
 * gson from 2.10.1 to 2.11.0 - 
[https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
 * okio from 3.9.0 to 3.9.1
 * io.fabric8 from 6.13.1 to 6.13.4
 * netty from 4.1.113.Final to 4.1.114.Final
 * swagger-annotations from 2.2.23 to 2.2.24
 * avro from 1.11.3 to 1.11.4
 * commons-lang3 from 3.16.0 to 3.17.0
 * log4j from 2.24.0 to 2.24.1
 * jetty from 12.0.13 to 12.0.14
 * junit-bom from 5.10.3 to 5.10.4
 * junit-platform-commons from 1.10.3 to 1.10.4
 * mockito from 5.13.0 to 5.14.1 - [https://github.com/mockito/mockito/releases]
 * testcontainers from 1.20.1 to 1.20.2
 * snakeyaml from 2.2 to 2.3
 * AWS SDK v2 from 2.28.4 to 2.28.13
 * flyway from 10.18.0 to 10.18.2
 * jline from 3.26.3 to 3.27.0
 * neo4j driver from 5.24.0 to 5.25.0
 * maxmind from 3.1.0 to 3.1.1
 * geoip2 from 4.2.0 to 4.2.1
 * amqp-client from 5.21.0 to 5.22.0
 * commons-csv from 1.11.0 to 1.12.0
 * splunk from 1.9.4 to 1.9.5
 * lucene from 9.11.1 to 9.12.0
 * google bom from 26.46.0 to 26.47.0
 * azure bom from 1.2.26 to 1.2.28
 * msal4j from 1.16.1 to 1.17.1
 * json-schema-validator from 1.5.1 to 1.5.2
 * checker-qual from 3.45.0 to 3.47.0
 * checkstyle from 10.16.0 to 10.18.2

  was:
* Jackson from 2.17.2 to 2.18.0 - 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
 * zstd-jni from 1.5.6-5 to 1.5.6-6
 * gson from 2.10.1 to 2.11.0 - 
[https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
 * okio from 3.9.0 to 3.9.1
 * io.fabric8 from 6.13.1 to 6.13.4
 * netty from 4.1.113.Final to 4.1.114.Final
 * swagger-annotations from 2.2.23 to 2.2.24
 * avro from 1.11.3 to 1.11.4
 * commons-lang3 from 3.16.0 to 3.17.0
 * log4j from 2.24.0 to 2.24.1
 * jetty from 12.0.13 to 12.0.14
 * junit-bom from 5.10.3 to 5.10.4
 * junit-platform-commons from 1.10.3 to 1.10.4
 * mockito from 5.13.0 to 5.14.1 - [https://github.com/mockito/mockito/releases]
 * testcontainers from 1.20.1 to 1.20.2
 * snakeyaml from 2.2 to 2.3
 * AWS SDK v2 from 2.28.4 to 2.28.13
 * datafaker from 2.3.1 to 2.4.0
 * flyway from 10.18.0 to 10.18.2
 * jline from 3.26.3 to 3.27.0
 * neo4j driver from 5.24.0 to 5.25.0
 * maxmind from 3.1.0 to 3.1.1
 * geoip2 from 4.2.0 to 4.2.1
 * amqp-client from 5.21.0 to 5.22.0
 * commons-csv from 1.11.0 to 1.12.0
 * splunk from 1.9.4 to 1.9.5
 * lucene from 9.11.1 to 9.12.0
 * google bom from 26.46.0 to 26.47.0
 * azure bom from 1.2.26 to 1.2.28
 * msal4j from 1.16.1 to 1.17.1
 * json-schema-validator from 1.5.1 to 1.5.2
 * checker-qual from 3.45.0 to 3.47.0
 * checkstyle from 10.16.0 to 10.18.2


> Bump various dependencies including jetty, netty, jackson, aws and more
> ---
>
> Key: NIFI-13826
> URL: https://issues.apache.org/jira/browse/NIFI-13826
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> * Jackson from 2.17.2 to 2.18.0 - 
> [https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
>  * zstd-jni from 1.5.6-5 to 1.5.6-6
>  * gson from 2.10.1 to 2.11.0 - 
> [https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
>  * okio from 3.9.0 to 3.9.1
>  * io.fabric8 from 6.13.1 to 6.13.4
>  * netty from 4.1.113.Final to 4.1.114.Final
>  * swagger-annotations from 2.2.23 to 2.2.24
>  * avro from 1.11.3 to 1.11.4
>  * commons-lang3 from 3.16.0 to 3.17.0
>  * log4j from 2.24.0 to 2.24.1
>  * jetty from 12.0.13 to 12.0.14
>  * junit-bom from 5.10.3 to 5.10.4
>  * junit-platform-commons from 1.10.3 to 1.10.4
>  * mockito from 5.13.0 to 5.14.1 - 
> [https://github.com/mockito/mockito/releases]
>  * testcontainers from 1.20.1 to 1.20.2
>  * snakeyaml from 2.2 to 2.3
>  * AWS SDK v2 from 2.28.4 to 2.28.13
>  * flyway from 10.18.0 to 10.18.2
>  * jline from 3.26.3 to 3.27.0
>  * neo4j driver from 5.24.0 to 5.25.0
>  * maxmind from 3.1.0 to 3.1.1
>  * geoip2 from 4.2.0 to 4.2.1
>  * amqp-client from 5.21.0 to 5.22.0
>  * commons-csv from 1.11.0 to 1.12.0
>  * splunk from 1.9.4 to 1.9.5
>  * lucene from 9.11.1 to 9.12.0
>  * google bom from 26.46.0 to 26.47.0
>  * azure bom from 1.2.26 to 1.2.28
>  * msal4j from 1.16.1 to 1.17.1
>  * json-schema-validator from 1.5.1 to 1.5.2
>  * checker-qual from 3.45.0 to 3.47.0
>  * checkstyle from 10.16.0 to 10.18.2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13826) Bump various dependencies including jetty, netty, jackson, aws and more

2024-10-02 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13826:
--
Description: 
* Jackson from 2.17.2 to 2.18.0 - 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
 * zstd-jni from 1.5.6-5 to 1.5.6-6
 * gson from 2.10.1 to 2.11.0 - 
[https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
 * okio from 3.9.0 to 3.9.1
 * io.fabric8 from 6.13.1 to 6.13.4
 * netty from 4.1.113.Final to 4.1.114.Final
 * swagger-annotations from 2.2.23 to 2.2.24
 * avro from 1.11.3 to 1.11.4
 * commons-lang3 from 3.16.0 to 3.17.0
 * log4j from 2.24.0 to 2.24.1
 * jetty from 12.0.13 to 12.0.14
 * junit-bom from 5.10.3 to 5.10.4
 * junit-platform-commons from 1.10.3 to 1.10.4
 * mockito from 5.13.0 to 5.14.1 - [https://github.com/mockito/mockito/releases]
 * testcontainers from 1.20.1 to 1.20.2
 * snakeyaml from 2.2 to 2.3
 * AWS SDK v2 from 2.28.4 to 2.28.13
 * datafaker from 2.3.1 to 2.4.0
 * flyway from 10.18.0 to 10.18.2
 * jline from 3.26.3 to 3.27.0
 * neo4j driver from 5.24.0 to 5.25.0
 * maxmind from 3.1.0 to 3.1.1
 * geoip2 from 4.2.0 to 4.2.1
 * amqp-client from 5.21.0 to 5.22.0
 * commons-csv from 1.11.0 to 1.12.0
 * splunk from 1.9.4 to 1.9.5
 * lucene from 9.11.1 to 9.12.0
 * google bom from 26.46.0 to 26.47.0
 * azure bom from 1.2.26 to 1.2.28
 * msal4j from 1.16.1 to 1.17.1
 * json-schema-validator from 1.5.1 to 1.5.2
 * checker-qual from 3.45.0 to 3.47.0
 * checkstyle from 10.16.0 to 10.18.2

  was:
* Jackson from 2.17.2 to 2.18.0 - 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
 * zstd-jni from 1.5.6-5 to 1.5.6-6
 * gson from 2.10.1 to 2.11.0 - 
[https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
 * okio from 3.9.0 to 3.9.1
 * io.fabric8 from 6.13.1 to 6.13.4
 * netty from 4.1.113.Final to 4.1.114.Final
 * swagger-annotations from 2.2.23 to 2.2.24
 * avro from 1.11.3 to 1.12.0 - 
[https://avro.apache.org/blog/2024/08/05/avro-1.12.0/]
 * commons-lang3 from 3.16.0 to 3.17.0
 * log4j from 2.24.0 to 2.24.1
 * jetty from 12.0.13 to 12.0.14
 * junit-bom from 5.10.3 to 5.10.4
 * junit-platform-commons from 1.10.3 to 1.10.4
 * mockito from 5.13.0 to 5.14.1 - [https://github.com/mockito/mockito/releases]
 * testcontainers from 1.20.1 to 1.20.2
 * snakeyaml from 2.2 to 2.3
 * AWS SDK v2 from 2.28.4 to 2.28.13


> Bump various dependencies including jetty, netty, jackson, aws and more
> ---
>
> Key: NIFI-13826
> URL: https://issues.apache.org/jira/browse/NIFI-13826
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0-M5
>
>
> * Jackson from 2.17.2 to 2.18.0 - 
> [https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
>  * zstd-jni from 1.5.6-5 to 1.5.6-6
>  * gson from 2.10.1 to 2.11.0 - 
> [https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
>  * okio from 3.9.0 to 3.9.1
>  * io.fabric8 from 6.13.1 to 6.13.4
>  * netty from 4.1.113.Final to 4.1.114.Final
>  * swagger-annotations from 2.2.23 to 2.2.24
>  * avro from 1.11.3 to 1.11.4
>  * commons-lang3 from 3.16.0 to 3.17.0
>  * log4j from 2.24.0 to 2.24.1
>  * jetty from 12.0.13 to 12.0.14
>  * junit-bom from 5.10.3 to 5.10.4
>  * junit-platform-commons from 1.10.3 to 1.10.4
>  * mockito from 5.13.0 to 5.14.1 - 
> [https://github.com/mockito/mockito/releases]
>  * testcontainers from 1.20.1 to 1.20.2
>  * snakeyaml from 2.2 to 2.3
>  * AWS SDK v2 from 2.28.4 to 2.28.13
>  * datafaker from 2.3.1 to 2.4.0
>  * flyway from 10.18.0 to 10.18.2
>  * jline from 3.26.3 to 3.27.0
>  * neo4j driver from 5.24.0 to 5.25.0
>  * maxmind from 3.1.0 to 3.1.1
>  * geoip2 from 4.2.0 to 4.2.1
>  * amqp-client from 5.21.0 to 5.22.0
>  * commons-csv from 1.11.0 to 1.12.0
>  * splunk from 1.9.4 to 1.9.5
>  * lucene from 9.11.1 to 9.12.0
>  * google bom from 26.46.0 to 26.47.0
>  * azure bom from 1.2.26 to 1.2.28
>  * msal4j from 1.16.1 to 1.17.1
>  * json-schema-validator from 1.5.1 to 1.5.2
>  * checker-qual from 3.45.0 to 3.47.0
>  * checkstyle from 10.16.0 to 10.18.2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13826) Bump various dependencies including jetty, netty, jackson, aws and more

2024-10-02 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13826:
-

 Summary: Bump various dependencies including jetty, netty, 
jackson, aws and more
 Key: NIFI-13826
 URL: https://issues.apache.org/jira/browse/NIFI-13826
 Project: Apache NiFi
  Issue Type: Task
  Components: Core Framework, Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard
 Fix For: 2.0.0-M5


* Jackson from 2.17.2 to 2.18.0 - 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.18]
 * zstd-jni from 1.5.6-5 to 1.5.6-6
 * gson from 2.10.1 to 2.11.0 - 
[https://github.com/google/gson/releases/tag/gson-parent-2.11.0]
 * okio from 3.9.0 to 3.9.1
 * io.fabric8 from 6.13.1 to 6.13.4
 * netty from 4.1.113.Final to 4.1.114.Final
 * swagger-annotations from 2.2.23 to 2.2.24
 * avro from 1.11.3 to 1.12.0 - 
[https://avro.apache.org/blog/2024/08/05/avro-1.12.0/]
 * commons-lang3 from 3.16.0 to 3.17.0
 * log4j from 2.24.0 to 2.24.1
 * jetty from 12.0.13 to 12.0.14
 * junit-bom from 5.10.3 to 5.10.4
 * junit-platform-commons from 1.10.3 to 1.10.4
 * mockito from 5.13.0 to 5.14.1 - [https://github.com/mockito/mockito/releases]
 * testcontainers from 1.20.1 to 1.20.2
 * snakeyaml from 2.2 to 2.3
 * AWS SDK v2 from 2.28.4 to 2.28.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13769) Bump various dependencies including spring, jetty, aws, logback and more

2024-09-20 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13769.
---
Resolution: Fixed

> Bump various dependencies including spring, jetty, aws, logback and more
> 
>
> Key: NIFI-13769
> URL: https://issues.apache.org/jira/browse/NIFI-13769
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ch.qos.logback 1.5.7 1.5.8
> com.amazonaws 1.12.770 1.12.772
> com.github.luben 1.5.6-4 1.5.6-5
> commons-io 2.16.1 2.17.0
> io.netty 4.1.112.Final 4.1.113.Final
> org.apache.ant 1.10.14 1.10.15
> org.apache.logging.log4j 2.23.1 2.24.0  (why is this here!?)
> org.eclipse.jetty 12.0.12 12.0.13
> org.springframework 6.1.12 6.1.13
> org.xerial.snappy 1.1.10.6 1.1.10.7
> software.amazon.awssdk 2.27.14 2.28.4
> com.google.apis v3-rev20240730-2.0.0   v3-rev20240903-2.0.0
> com.slack.api bolt-socket-mode 1.42.0 1.42.1
> com.squareup.wire wire-schema-jvm 5.0.0 5.1.0
> io.projectreactor (core/test) 3.6.9 3.6.10
> net.java.dev.jna (jna/jna-platform) 5.14.0 5.15.0
> org.apache.groovy 4.0.22 4.0.23
> org.apache.maven maven-artifact 3.9.8 3.9.9
> org.eclipse.jgit 6.10.0.202406032230-r 7.0.0.202409031743-r
> org.flywaydb 10.17.2 10.18.0  
> org.kohsuke 1.324 1.326
> org.mongodb 4.11.3 4.11.4
> org.neo4j.driver 5.23.0 5.24.0
> org.postgresql 42.7.3 42.7.4
> org.springframework.boot 3.3.3 3.3.4
> org.springframework.integration 6.3.3 6.3.4
> org.springframework.retry 2.0.8 2.0.9



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13733) Migrate [S]FTP processors' Proxy properties to ProxyConfigurationService

2024-09-12 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13733:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Migrate [S]FTP processors' Proxy properties to ProxyConfigurationService
> 
>
> Key: NIFI-13733
> URL: https://issues.apache.org/jira/browse/NIFI-13733
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Get rid of the obsolete processor-level Proxy properties in all FTP and SFTP 
> processors and add migration code to convert them to 
> ProxyConfigurationService.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13737) Change Version dialog is not including the branch when requesting versions

2024-09-11 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13737:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Change Version dialog is not including the branch when requesting versions
> --
>
> Key: NIFI-13737
> URL: https://issues.apache.org/jira/browse/NIFI-13737
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When opening the Change Versions dialog, a request is made to the back-end to 
> retrieve the list of versions. The request is not submitting the branch from 
> the VCI, and therefore the response is always returning the versions based on 
> the default branch in the client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13372) Add DeleteSFTP processor

2024-09-09 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13372:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add DeleteSFTP processor
> 
>
> Key: NIFI-13372
> URL: https://issues.apache.org/jira/browse/NIFI-13372
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 3h 5m
>  Remaining Estimate: 0h
>
> The existing processors to retrieve a file from a remote system over SFTP, 
> namely {{FetchSFTP}} and {{{}GetSFTP{}}}, support the removal of the file 
> from the file system once the content has been copied into the FlowFile.
> However, deleting the file from the file system immediately might not be 
> feasible in certain circumstances. 
> In cases where the content repository of NiFi does not meet sufficient data 
> durability guarantees, it might be desired to remove the source file only 
> after it has been processed successfully and its result transferred to a 
> system that satisfies those durability constraints.
> Additionally, the integrated deletion might fail "silently", that is, the 
> FlowFile is still sent to the "success" relationship even though the file 
> remains at the source. Depending on the listing behaviour, it may then be 
> listed again (in the worst case over and over). Having the deletion as a 
> separate step with a dedicated failure relationship avoids this behaviour. 
> As of now, there is no built-in solution to achieve such behavior using the 
> standard NiFi distribution.
> Current workarounds involve the usage of a scripted processor or the creation 
> of a custom processor, that provides the desired functionality.
> This issue proposes the addition of a {{DeleteSFTP}} processor to the NiFi 
> standard-processors bundle, that fills this gap. 
> It should expect a FlowFile and delete the file at the path derived from the 
> FlowFile attributes. The default values to determine the file path should be 
> compatible with the attributes written by the existing {{ListSFTP}} processor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13719) PutElasticsearch* processors should handle Long "took" field values in _bulk API responses

2024-09-09 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13719:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutElasticsearch* processors should handle Long "took" field values in _bulk 
> API responses
> --
>
> Key: NIFI-13719
> URL: https://issues.apache.org/jira/browse/NIFI-13719
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Chris Sampson
>Assignee: Chris Sampson
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Elasticsearch {{_bulk}} API response contains a {{took}} field, which 
> indicates the number of milliseconds taken by Elasticsearch to perform the 
> requested operation.
> This value is [expected to be an 
> integer|https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#bulk-api-response-body].
> In Elasticsearch 8.15.0, there [is a 
> bug|https://github.com/elastic/elasticsearch/issues/111854] that causes the 
> {{took}} field to be a very large value (typically resulting in a Long rather 
> than an Integer when the response JSON is parsed).
> This results in NiFi failing to parse the {{IndexOperationResponse}} with a 
> {{ClassCastException}}:
> {code:java}
> 2024-09-05 10:42:48,126 ERROR [Timer-Driven Process Thread-8] 
> o.a.n.p.e.PutElasticsearchRecord 
> PutElasticsearchRecord[id=17e51d66-18c9-1257-06b1-700d2a77894e] Encountered a 
> server-side problem with Elasticsearch. Routing to failure
> org.apache.nifi.elasticsearch.ElasticsearchException: 
> java.lang.ClassCastException: class java.lang.Long cannot be cast to class 
> java.lang.Integer (java.lang.Long and java.lang.Integer are in module 
> java.base of loader 'bootstrap')
>at 
> org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl.bulk(ElasticSearchClientServiceImpl.java:680)
>at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
>at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
>at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:105)
>at jdk.proxy80/jdk.proxy80.$Proxy324.bulk(Unknown Source)
>at 
> org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord.indexDocuments(PutElasticsearchRecord.java:536)
>at 
> org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord.operate(PutElasticsearchRecord.java:516)
>at 
> org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord.onTrigger(PutElasticsearchRecord.java:431)
>at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
>at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
>at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
>at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
>at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
>at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: java.lang.ClassCastException: class java.lang.Long cannot be cast 
> to class java.lang.Integer (java.lang.Long and java.lang.Integer are in 
> module java.base of loader 'bootstrap')
>at 
> org.apache.nifi.elasticsearch.IndexOperationResponse.fromJsonResponse(IndexOperationResponse.java:49)
>at 
> org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl.bulk(ElasticSearchClientServiceImpl.java:678)
>... 19 common frames omitted
> {code}
> While the Elasticsearch {{took}} field *should* be an Integer, NiFi should 
> allow for this to be a Long (although the Elasticsearch issue has been fixed 
> as of [8.15.1|https://github.com/elastic/elasticsearch/pull/111863]).
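A lenient fix can be sketched as below. This is an illustration of the approach, not the actual NiFi patch: JSON parsers typically produce an Integer for small values and a Long for large ones, so the "took" value should be widened through Number rather than cast to Integer.

```java
import java.util.Map;

public class TookFieldExample {
    // Sketch: read "took" from a parsed _bulk response leniently, accepting
    // either Integer or Long by widening through Number instead of casting
    // to Integer (which throws ClassCastException for Long values).
    static long readTook(Map<String, Object> parsedResponse) {
        Number took = (Number) parsedResponse.get("took");
        return took.longValue();
    }

    public static void main(String[] args) {
        System.out.println(readTook(Map.of("took", 30)));          // parsed as Integer
        System.out.println(readTook(Map.of("took", 9999999999L))); // parsed as Long (the 8.15.0 bug)
    }
}
```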



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13468) Add RecordPath function recordOf

2024-09-06 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13468:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add RecordPath function recordOf
> 
>
> Key: NIFI-13468
> URL: https://issues.apache.org/jira/browse/NIFI-13468
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: endzeit
>Assignee: endzeit
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> NiFi version 1.25.0 / NIFI-12538 introduced a new standalone function to the 
> RecordPath DSL named {{{}mapOf{}}}.
> As the name suggests, it allows DSL users to create more complex data 
> structures, namely maps, using the DSL.
> However, due to restrictions in the Record data type definitions, it only 
> supports the creation of map structures whose values are all of the same type.
> The current implementation even explicitly limits the values to be of type 
> {{{}String{}}}, effectively limiting the result type of {{mapOf}} to 
> {{Map<String, String>}}, but this limit could be lifted in the future to 
> support {{Map<String, Object>}} instead.
> Due to the underlying restriction enforced by the 
> {{{}org.apache.nifi.serialization.record.type.MapDataType{}}}, adding support 
> for differently typed values seems not feasible. Thus this issue proposes the 
> addition of a {{recordOf}} standalone function to the RecordPath DSL, that 
> allows DSL users to create data structures of type 
> {{org.apache.nifi.serialization.record.type.RecordDataType}} as well.
> {{recordOf}} has the same requirements regarding its arguments, namely
>  - there has to be an even number of arguments provided
>  - every odd argument must be a String, used as a field name
>  - every even argument is used as a field value
>  
> 
> _An example_
> As {{mapOf}} and {{recordOf}} might look similar from the outside, let's 
> present a behavioral difference using an example. I'll make use of the 
> existing {{escapeJson}} RecordPath DSL function to turn the result into 
> JSON, as it makes the differences quite obvious.
> Assume we have the following record with different types of fields.
> {code:json}
> {
>   "aLong": 9876543210,
>   "aDouble": 2.5,
>   "aString": "texty",
>   "anArray": [
> "a",
> "b",
> "c"
>   ],
>   "aMap": {
> "anotherKey": "anotherValue",
> "aKey": "aValue"
>   },
>   "aRecord": {
> "aField": 2,
> "anotherField": "aRecordValue"
>   }
> }
> {code}
> The result of {{recordOf}} preserves those types.
> {noformat}
> escapeJson(recordOf('mappedLong', /aLong, 'mappedDouble', /aDouble, 
> 'mappedString', /aString, 'mappedArray', /anArray, 'mappedMap', /aMap, 
> 'mappedRecord', /aRecord)){noformat}
> {code:json}
> {
>   "mappedLong": 9876543210,
>   "mappedDouble": 2.5,
>   "mappedString": "texty",
>   "mappedArray": [
> "a",
> "b",
> "c"
>   ],
>   "mappedMap": {
> "anotherKey": "anotherValue",
> "aKey": "aValue"
>   },
>   "mappedRecord": {
> "aField": 2,
> "anotherField": "aRecordValue"
>   }
> }
> {code}
> With {{{}mapOf{}}}, all types are coerced to String instead.
> {noformat}
> escapeJson(mapOf('mappedLong', /aLong, 'mappedDouble', /aDouble, 
> 'mappedString', /aString, 'mappedArray', /anArray, 'mappedMap', /aMap, 
> 'mappedRecord', /aRecord)){noformat}
> {code:json}
> {
>   "mappedLong": "9876543210",
>   "mappedDouble": "2.5",
>   "mappedString": "texty",
>   "mappedArray": "[a, b, c]",
>   "mappedMap": "{anotherKey=anotherValue, aKey=aValue}",
>   "mappedRecord": "MapRecord[{anotherField=aRecordValue, aField=2}]"
> }
> {code}
>  
>  
> 
> _Why not augment {{mapOf}} instead?_
> I thought about this; it was actually my initial idea.
> However, {{mapOf}} was introduced almost 6 months ago. Adding support for 
> different types of values is a backwards-incompatible change. It may break 
> existing code that expects values to be coerced into {{String}}, as is done 
> at the moment.
> Additionally, it may seem strange that a function called "mapOf" actually 
> creates a "Record" instead of a "Map".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13715) StandardProvenanceEventRecord.hashCode() is not consistent with equals() in handling Parent/Child FlowFiles

2024-09-06 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13715:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> StandardProvenanceEventRecord.hashCode() is not consistent with equals() in 
> handling Parent/Child FlowFiles
> ---
>
> Key: NIFI-13715
> URL: https://issues.apache.org/jira/browse/NIFI-13715
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{StandardProvenanceEventRecord}} may contain Child FlowFile UUIDs stored in 
> a list (and similarly Parent FlowFile UUIDs in another list). The 
> {{equals()}} method 
> [sorts|https://github.com/apache/nifi/blob/d56f92d738e54b8e9c0ce04606bd7b6259375239/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java#L400]
>  the UUID lists of the event objects before comparing them and therefore 2 
> event objects are considered equal if they have the same Child FlowFiles but 
> in different order. On the other hand, {{hashCode()}} does not apply sorting 
> and produces different hashes for these equal objects which breaks the 
> equals/hashCode contract: _If two objects are equal according to the equals() 
> method, then the hashCode() method must return the same value for them._
> Real-life flow example where the improper {{hashCode()}} method causes an 
> issue:
> QueryRecord with multiple queries and output relationships. The processor's 
> code emits a FORK provenance event with 2+ Child FlowFiles (as many as it 
> has outputs). The framework 
> ([StandardProcessSession|https://github.com/apache/nifi/blob/d56f92d738e54b8e9c0ce04606bd7b6259375239/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java#L849-L851])
>  can also generate the FORK event automatically; it checks whether the 
> component has already emitted the event and, if so, skips the automatic one. 
> Due to the wrong {{hashCode()}} method, this check may fail, in which case 2 
> FORK events are saved in the Provenance repository. This leads to the 
> "Unable to generate Lineage Graph because multiple events were registered 
> claiming to have generated the same FlowFile" error when opening the next 
> event after the FORKs.
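The required consistency can be sketched as below. This is a minimal illustration of the equals/hashCode contract, not the actual {{StandardProvenanceEventRecord}} code: if {{equals()}} compares the UUID lists order-insensitively, {{hashCode()}} must hash the same order-insensitive view, e.g. a sorted copy.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: an object whose equals() treats its child UUID list as unordered,
// so hashCode() hashes a sorted copy to stay consistent with equals().
public class EventIds {
    private final List<String> childUuids;

    public EventIds(List<String> childUuids) {
        this.childUuids = new ArrayList<>(childUuids);
    }

    private List<String> sortedChildren() {
        List<String> copy = new ArrayList<>(childUuids);
        Collections.sort(copy);
        return copy;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EventIds)) return false;
        // Order-insensitive comparison, as in the equals() described above
        return sortedChildren().equals(((EventIds) o).sortedChildren());
    }

    @Override
    public int hashCode() {
        // Hashing the same sorted view keeps hashCode() consistent with equals()
        return sortedChildren().hashCode();
    }

    public static void main(String[] args) {
        EventIds a = new EventIds(List.of("uuid-1", "uuid-2"));
        EventIds b = new EventIds(List.of("uuid-2", "uuid-1"));
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // -> true
    }
}
```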



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13697) Clarify documentation in ProcessSession regarding StateManager interactions

2024-09-01 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13697.
---
Fix Version/s: 2.0.0-M5
   Resolution: Fixed

> Clarify documentation in ProcessSession regarding StateManager interactions
> ---
>
> Key: NIFI-13697
> URL: https://issues.apache.org/jira/browse/NIFI-13697
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: endzeit
>Assignee: endzeit
>Priority: Trivial
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> NIFI-12986 introduced a small mistake in paragraphs on StateManager 
> interactions.
> These should be adjusted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12212) Upgrade AWS DynamoDB Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12212:
--
Epic Link: NIFI-13696

> Upgrade AWS DynamoDB Processors to use AWS 2.x libraries
> 
>
> Key: NIFI-12212
> URL: https://issues.apache.org/jira/browse/NIFI-12212
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Minor
> Fix For: 2.0.0-M1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12263) Upgrade AWS Machine Learning Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12263:
--
Epic Link: NIFI-13696

> Upgrade AWS Machine Learning Processors to use AWS 2.x libraries
> 
>
> Key: NIFI-12263
> URL: https://issues.apache.org/jira/browse/NIFI-12263
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Minor
> Fix For: 2.0.0-M1
>
> Attachments: Machine_Learning.json
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12174) Upgrade AWS Lambda Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12174:
--
Epic Link: NIFI-13696

> Upgrade AWS Lambda Processors to use AWS 2.x libraries
> --
>
> Key: NIFI-12174
> URL: https://issues.apache.org/jira/browse/NIFI-12174
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Minor
> Fix For: 2.0.0-M1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12189) Upgrade AWS Cloudwatch Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12189:
--
Epic Link: NIFI-13696

> Upgrade AWS Cloudwatch Processors to use AWS 2.x libraries
> --
>
> Key: NIFI-12189
> URL: https://issues.apache.org/jira/browse/NIFI-12189
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Minor
> Fix For: 2.0.0-M1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-8287) Upgrade AWS SQS Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-8287:
-
Epic Link: NIFI-13696

> Upgrade AWS SQS Processors to use AWS 2.x libraries
> ---
>
> Key: NIFI-8287
> URL: https://issues.apache.org/jira/browse/NIFI-8287
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Assignee: Joe Gresock
>Priority: Major
> Fix For: 2.0.0-M1, 1.22.0
>
> Attachments: NIFI-8287.json
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> AWS has updated many of its libraries to version 2.x. NiFi should look to 
> make use of these, potentially gaining access to new features, performance 
> improvements, and bug fixes.
> To enable an incremental review process, this issue will focus on upgrading 
> just the SQS processors to use library version 2.x.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13192) Upgrade AWS S3 Processors to use AWS 2.x libraries

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13192:
--
Epic Link: NIFI-13696

> Upgrade AWS S3 Processors to use AWS 2.x libraries
> --
>
> Key: NIFI-13192
> URL: https://issues.apache.org/jira/browse/NIFI-13192
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Gresock
>Priority: Minor
>
> It looks like AWS recently began supporting client side encryption in the v2 
> SDK: https://github.com/aws/aws-sdk-java-v2/issues/34
> This will enable us to upgrade the S3 processors without losing functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13696) Upgrade AWS components to SDK v2

2024-08-30 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13696:
-

 Summary: Upgrade AWS components to SDK v2
 Key: NIFI-13696
 URL: https://issues.apache.org/jira/browse/NIFI-13696
 Project: Apache NiFi
  Issue Type: Epic
  Components: Extensions
Reporter: Pierre Villard


Following a recent version upgrade of the AWS SDK v1, the below warning now 
appears in the NiFi logs:
{code:java}
2024-08-30 13:36:43,035 WARN [Timer-Driven Process Thread-2] 
com.amazonaws.util.VersionInfoUtils The AWS SDK for Java 1.x entered 
maintenance mode starting July 31, 2024 and will reach end of support on 
December 31, 2025. For more information, see 
https://aws.amazon.com/blogs/developer/the-aws-sdk-for-java-1-x-is-in-maintenance-mode-effective-july-31-2024/
You can print where on the file system the AWS SDK for Java 1.x core runtime is 
located by setting the AWS_JAVA_V1_PRINT_LOCATION environment variable or 
aws.java.v1.printLocation system property to 'true'.
This message can be disabled by setting the 
AWS_JAVA_V1_DISABLE_DEPRECATION_ANNOUNCEMENT environment variable or 
aws.java.v1.disableDeprecationAnnouncement system property to 'true'.{code}
Many components still depend on the AWS SDK v1, and migration to SDK v2 should 
be considered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13688) NiFi can't start when Parameter Provider depends on controller service

2024-08-30 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13688:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NiFi can't start when Parameter Provider depends on controller service
> --
>
> Key: NIFI-13688
> URL: https://issues.apache.org/jira/browse/NIFI-13688
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 2.0.0-M5
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Following NIFI-13560, it appears that NiFi can't start when a Parameter 
> Provider is configured and this parameter provider depends on a controller 
> service. In this case, fetching parameters will fail because the referenced 
> controller service is still enabling. This is the case with the HashiCorp 
> parameter provider.
> {code:java}
> 2024-08-28 15:17:15,752 ERROR [main] org.apache.nifi.web.server.JettyServer 
> Failed to start Server
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Error fetching parameters for 
> ParameterProvider[id=98c9d5c5-0191-1000-624e-84083ed8842e]: Cannot invoke 
> method public abstract 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultCommunicationService 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultClientService.getHashiCorpVaultCommunicationService()
>  on Controller Service with identifier 98cab32e-0191-1000-4aa7-fda19202e9c5 
> because the Controller Service's State is currently ENABLING
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:465)
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:229)
>     at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1800)
>     at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
>     at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:817)
>     at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:538)
>     at 
> org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:67)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:1591)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.contextInitialized(ServletContextHandler.java:497)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletHandler.initialize(ServletHandler.java:670)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.startContext(ServletContextHandler.java:1325)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startWebapp(WebAppContext.java:1342)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startContext(WebAppContext.java:1300)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.lambda$doStart$0(ServletContextHandler.java:1047)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler$ScopedContext.call(ContextHandler.java:1446)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.doStart(ServletContextHandler.java:1044)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.doStart(WebAppContext.java:499)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at org.eclipse.jetty.server.Server.start(Server.java:624)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at org.eclipse.jetty.server.Server.doStart(Server.java:565)
>     at 
> org.eclipse.jetty.util.compon

[jira] [Reopened] (NIFI-13688) NiFi can't start when Parameter Provider depends on controller service

2024-08-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reopened NIFI-13688:
---

After further testing, it appears that the change introduces an issue for 
Parameter Providers that do not depend on other controller services, as those 
would remain in VALIDATING status unless performValidation is explicitly 
called. This causes the parameter context to only contain parameters with no 
value. Validation should be executed before checking the status and fetching 
the parameter values.

> NiFi can't start when Parameter Provider depends on controller service
> --
>
> Key: NIFI-13688
> URL: https://issues.apache.org/jira/browse/NIFI-13688
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 2.0.0-M5
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following NIFI-13560, it appears that NiFi can't start when a Parameter 
> Provider is configured and this parameter provider depends on a controller 
> service. In this case, fetching parameters will fail because the referenced 
> controller service is still enabling. This is the case with the HashiCorp 
> parameter provider.
> {code:java}
> 2024-08-28 15:17:15,752 ERROR [main] org.apache.nifi.web.server.JettyServer 
> Failed to start Server
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Error fetching parameters for 
> ParameterProvider[id=98c9d5c5-0191-1000-624e-84083ed8842e]: Cannot invoke 
> method public abstract 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultCommunicationService 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultClientService.getHashiCorpVaultCommunicationService()
>  on Controller Service with identifier 98cab32e-0191-1000-4aa7-fda19202e9c5 
> because the Controller Service's State is currently ENABLING
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:465)
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:229)
>     at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1800)
>     at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
>     at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:817)
>     at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:538)
>     at 
> org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:67)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:1591)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.contextInitialized(ServletContextHandler.java:497)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletHandler.initialize(ServletHandler.java:670)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.startContext(ServletContextHandler.java:1325)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startWebapp(WebAppContext.java:1342)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startContext(WebAppContext.java:1300)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.lambda$doStart$0(ServletContextHandler.java:1047)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler$ScopedContext.call(ContextHandler.java:1446)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.doStart(ServletContextHandler.java:1044)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.doStart(WebAppContext.java:499)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)

[jira] [Updated] (NIFI-13643) mapOf returns a Record instead of the declared RecordFieldType.MAP

2024-08-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13643:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> mapOf returns a Record instead of the declared RecordFieldType.MAP
> --
>
> Key: NIFI-13643
> URL: https://issues.apache.org/jira/browse/NIFI-13643
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: endzeit
>Assignee: endzeit
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Record Path standalone function {{mapOf}} introduced in NIFI-12538 
> returns a 
> [Record|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/Record.java]
>  (in the form of a 
> [MapRecord|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/MapRecord.java]).
> At the same time it declares the {{RecordFieldType}} of the {{RecordField}} 
> returned, as the function name might suggest, as
> {code:java}
> RecordFieldType.MAP.getMapDataType(RecordFieldType.STRING.getDataType())
> {code}
>  
> The implementation should be adjusted to instead return a {{RecordField}} 
> with a {{Map

[jira] [Updated] (NIFI-13688) NiFi can't start when Parameter Provider depends on controller service

2024-08-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13688:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> NiFi can't start when Parameter Provider depends on controller service
> --
>
> Key: NIFI-13688
> URL: https://issues.apache.org/jira/browse/NIFI-13688
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 2.0.0-M5
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following NIFI-13560, it appears that NiFi can't start when a Parameter 
> Provider is configured and this parameter provider depends on a controller 
> service. In this case, fetching parameters will fail because the referenced 
> controller service is still enabling. This is the case with the HashiCorp 
> parameter provider.
> {code:java}
> 2024-08-28 15:17:15,752 ERROR [main] org.apache.nifi.web.server.JettyServer 
> Failed to start Server
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Error fetching parameters for 
> ParameterProvider[id=98c9d5c5-0191-1000-624e-84083ed8842e]: Cannot invoke 
> method public abstract 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultCommunicationService 
> org.apache.nifi.vault.hashicorp.HashiCorpVaultClientService.getHashiCorpVaultCommunicationService()
>  on Controller Service with identifier 98cab32e-0191-1000-4aa7-fda19202e9c5 
> because the Controller Service's State is currently ENABLING
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:465)
>     at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:229)
>     at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1800)
>     at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
>     at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:817)
>     at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:538)
>     at 
> org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:67)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:1591)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.contextInitialized(ServletContextHandler.java:497)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletHandler.initialize(ServletHandler.java:670)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.startContext(ServletContextHandler.java:1325)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startWebapp(WebAppContext.java:1342)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.startContext(WebAppContext.java:1300)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.lambda$doStart$0(ServletContextHandler.java:1047)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler$ScopedContext.call(ContextHandler.java:1446)
>     at 
> org.eclipse.jetty.ee10.servlet.ServletContextHandler.doStart(ServletContextHandler.java:1044)
>     at 
> org.eclipse.jetty.ee10.webapp.WebAppContext.doStart(WebAppContext.java:499)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>     at org.eclipse.jetty.server.Server.start(Server.java:624)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
>     at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
>     at org.eclipse.jetty.server.Server.doStart(Server.java:565)
>    

[jira] [Updated] (NIFI-12080) HashiCorp Vault parameter context kv2 compatibility.

2024-08-28 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12080:
--
Status: Patch Available  (was: Open)

> HashiCorp Vault parameter context kv2 compatibility.
> 
>
> Key: NIFI-12080
> URL: https://issues.apache.org/jira/browse/NIFI-12080
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.20.0
> Environment: Tested on OpenShift 4.11 and local environment. 
>Reporter: Robert D
>Assignee: Pierre Villard
>Priority: Minor
> Attachments: image-2023-09-18-15-54-55-187.png, 
> image-2023-09-18-15-55-10-366.png
>
>
> When trying to use HashiCorp Vault with a kv2 backend, I can successfully 
> authenticate with Vault, but when trying to use a parameter provider it can't 
> list any secrets.
> I believe this is because {{KeyValueBackend.KV_1}} is hardcoded in the 
> {{listKeyValueSecrets}} function instead of using the member variable 
> {{keyValueBackend}}.
> The code can be seen 
> [here|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-hashicorp-vault/src/main/java/org/apache/nifi/vault/hashicorp/StandardHashiCorpVaultCommunicationService.java#L148].
> !image-2023-09-18-15-54-55-187.png!
>  
> !image-2023-09-18-15-55-10-366.png!
>  
> After that is changed to {{keyValueBackend}}, another issue comes up: it can 
> only list the top-level secrets.
> This is because {{listKeyValueSecrets}} hardcodes the path to the [root 
> path|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-hashicorp-vault/src/main/java/org/apache/nifi/vault/hashicorp/StandardHashiCorpVaultCommunicationService.java#L149].
> For example, if there is a secret under the path {{shared/test}}, it is 
> inaccessible.
> Adding the {{shared}} path to the Key/Value path parameter also doesn't fix 
> it, because Vault expects the metadata path directly after the kv engine.
> A valid path would be {{/kv/metadata/shared/?list=true}}; adding {{shared}} to 
> the Key/Value path makes a request to {{/kv/shared/metadata/?list=true}}.
> Adding a parameter to the {{listKeyValueSecrets}} function to specify the 
> secret path fixes it.
>  
> The parameter provider says it's for Key/Value version 1 secrets, but after 
> these changes I could use it with a kv2 backend. The only downside is that it 
> can only get the latest version of the secret, but that is good enough for my 
> use case.
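The kv2 path layout described in the report can be sketched as follows; this is an illustrative shell fragment, not NiFi code, and the mount name and variable names are assumptions:

```shell
# Illustrative sketch (not NiFi code) of the kv2 path layout described above.
# For a kv2 engine mounted at "kv", Vault expects the "metadata" segment
# directly after the engine mount, before the secret path.
MOUNT="kv"
SECRET_PATH="shared"

# Valid list path for secrets under shared/:
GOOD_URL="/${MOUNT}/metadata/${SECRET_PATH}/?list=true"

# What naively appending "shared" to the Key/Value path produces:
BAD_URL="/${MOUNT}/${SECRET_PATH}/metadata/?list=true"

echo "$GOOD_URL"
echo "$BAD_URL"
```

The second URL fails because the `metadata` segment lands after the secret path instead of after the engine mount.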



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-12080) HashiCorp Vault parameter context kv2 compatibility.

2024-08-28 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-12080:
-

Assignee: Pierre Villard

> HashiCorp Vault parameter context kv2 compatibility.
> 
>
> Key: NIFI-12080
> URL: https://issues.apache.org/jira/browse/NIFI-12080
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.20.0
> Environment: Tested on OpenShift 4.11 and local environment. 
>Reporter: Robert D
>Assignee: Pierre Villard
>Priority: Minor
> Attachments: image-2023-09-18-15-54-55-187.png, 
> image-2023-09-18-15-55-10-366.png
>
>
> When trying to use HashiCorp Vault with a kv2 backend, I can successfully 
> authenticate with Vault, but when trying to use a parameter provider it can't 
> list any secrets.
> I believe this is because {{KeyValueBackend.KV_1}} is hardcoded in the 
> {{listKeyValueSecrets}} function instead of using the member variable 
> {{keyValueBackend}}.
> The code can be seen 
> [here|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-hashicorp-vault/src/main/java/org/apache/nifi/vault/hashicorp/StandardHashiCorpVaultCommunicationService.java#L148].
> !image-2023-09-18-15-54-55-187.png!
>  
> !image-2023-09-18-15-55-10-366.png!
>  
> After that is changed to {{keyValueBackend}}, another issue comes up: it can 
> only list the top-level secrets.
> This is because {{listKeyValueSecrets}} hardcodes the path to the [root 
> path|https://github.com/apache/nifi/blob/main/nifi-commons/nifi-hashicorp-vault/src/main/java/org/apache/nifi/vault/hashicorp/StandardHashiCorpVaultCommunicationService.java#L149].
> For example, if there is a secret under the path {{shared/test}}, it is 
> inaccessible.
> Adding the {{shared}} path to the Key/Value path parameter also doesn't fix 
> it, because Vault expects the metadata path directly after the kv engine.
> A valid path would be {{/kv/metadata/shared/?list=true}}; adding {{shared}} to 
> the Key/Value path makes a request to {{/kv/shared/metadata/?list=true}}.
> Adding a parameter to the {{listKeyValueSecrets}} function to specify the 
> secret path fixes it.
>  
> The parameter provider says it's for Key/Value version 1 secrets, but after 
> these changes I could use it with a kv2 backend. The only downside is that it 
> can only get the latest version of the secret, but that is good enough for my 
> use case.





[jira] [Created] (NIFI-13689) Add configurable max attempts to avoid infinite starting loop for nifi.sh start

2024-08-28 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13689:
-

 Summary: Add configurable max attempts to avoid infinite starting 
loop for nifi.sh start
 Key: NIFI-13689
 URL: https://issues.apache.org/jira/browse/NIFI-13689
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 2.0.0-M5
Reporter: Pierre Villard


Following the changes in NIFI-13665, in case NiFi cannot start successfully 
(the flow not loading successfully for some reason - like the example in 
NIFI-13688), then
{code:java}
nifi.sh start
{code}
will trigger an infinite loop of attempts to start NiFi (until nifi.sh stop is 
executed).

It might be worth considering an improvement where a maximum number of attempts 
is configurable.
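A bounded start loop along these lines could look like the sketch below; this is illustrative shell, not the actual nifi.sh implementation, and MAX_START_ATTEMPTS is a hypothetical setting:

```shell
# Illustrative sketch of a bounded start loop; not the actual nifi.sh code.
# MAX_START_ATTEMPTS is a hypothetical, configurable limit.
MAX_START_ATTEMPTS=${MAX_START_ATTEMPTS:-3}

start_nifi() {
  # Placeholder for the real start logic; returns non-zero when the flow
  # fails to load and NiFi exits.
  return 1
}

started=0
attempt=0
while [ "$attempt" -lt "$MAX_START_ATTEMPTS" ]; do
  attempt=$((attempt + 1))
  if start_nifi; then
    started=1
    echo "NiFi started after $attempt attempt(s)"
    break
  fi
  echo "Start attempt $attempt failed, retrying"
done

if [ "$started" -eq 0 ]; then
  echo "Giving up after $MAX_START_ATTEMPTS failed attempts"
fi
```

With a failing start, the loop above exits after the configured number of attempts instead of retrying forever.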





[jira] [Created] (NIFI-13688) NiFi can't start when Parameter Provider depends on controller service

2024-08-28 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13688:
-

 Summary: NiFi can't start when Parameter Provider depends on 
controller service
 Key: NIFI-13688
 URL: https://issues.apache.org/jira/browse/NIFI-13688
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Extensions
Affects Versions: 2.0.0-M5
Reporter: Pierre Villard


Following NIFI-13560, it appears that NiFi can't start when a Parameter 
Provider is configured and this parameter provider depends on a controller 
service. In this case, fetching parameters will fail because the referenced 
controller service is still enabling. This is the case with the HashiCorp 
parameter provider.
{code:java}
2024-08-28 15:17:15,752 ERROR [main] org.apache.nifi.web.server.JettyServer 
Failed to start Server
org.apache.nifi.controller.serialization.FlowSynchronizationException: 
java.lang.IllegalStateException: Error fetching parameters for 
ParameterProvider[id=98c9d5c5-0191-1000-624e-84083ed8842e]: Cannot invoke 
method public abstract 
org.apache.nifi.vault.hashicorp.HashiCorpVaultCommunicationService 
org.apache.nifi.vault.hashicorp.HashiCorpVaultClientService.getHashiCorpVaultCommunicationService()
 on Controller Service with identifier 98cab32e-0191-1000-4aa7-fda19202e9c5 
because the Controller Service's State is currently ENABLING
    at 
org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:465)
    at 
org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:229)
    at 
org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1800)
    at 
org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
    at 
org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:817)
    at 
org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:538)
    at 
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:67)
    at 
org.eclipse.jetty.ee10.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:1591)
    at 
org.eclipse.jetty.ee10.servlet.ServletContextHandler.contextInitialized(ServletContextHandler.java:497)
    at 
org.eclipse.jetty.ee10.servlet.ServletHandler.initialize(ServletHandler.java:670)
    at 
org.eclipse.jetty.ee10.servlet.ServletContextHandler.startContext(ServletContextHandler.java:1325)
    at 
org.eclipse.jetty.ee10.webapp.WebAppContext.startWebapp(WebAppContext.java:1342)
    at 
org.eclipse.jetty.ee10.webapp.WebAppContext.startContext(WebAppContext.java:1300)
    at 
org.eclipse.jetty.ee10.servlet.ServletContextHandler.lambda$doStart$0(ServletContextHandler.java:1047)
    at 
org.eclipse.jetty.server.handler.ContextHandler$ScopedContext.call(ContextHandler.java:1446)
    at 
org.eclipse.jetty.ee10.servlet.ServletContextHandler.doStart(ServletContextHandler.java:1044)
    at 
org.eclipse.jetty.ee10.webapp.WebAppContext.doStart(WebAppContext.java:499)
    at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
    at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
    at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
    at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
    at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
    at org.eclipse.jetty.server.Server.start(Server.java:624)
    at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:120)
    at org.eclipse.jetty.server.Handler$Abstract.doStart(Handler.java:491)
    at org.eclipse.jetty.server.Server.doStart(Server.java:565)
    at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
    at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:878)
    at org.apache.nifi.NiFi.<init>(NiFi.java:155)
    at org.apache.nifi.NiFi.<init>(NiFi.java:86)
    at org.apache.nifi.NiFi.main(NiFi.java:284)
Caused by: java.lang.IllegalStateException: Error fetching parameters for 
ParameterProvider[id=98c9d5c5-0191-1000-624e-84083ed8842e]: Cannot invoke 
method public abstract 
org.apache.nifi.vault.hashicorp.HashiCorpVaultCommunicationService 
org.apache.nifi.vault.hashicorp.HashiCorpVaultClientServi

[jira] [Updated] (NIFI-13677) Remove install command from nifi.sh

2024-08-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13677:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove install command from nifi.sh
> ---
>
> Key: NIFI-13677
> URL: https://issues.apache.org/jira/browse/NIFI-13677
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The nifi.sh script includes an install command to create symbolic links on 
> Linux systems to run the application as a system service. With the removal of 
> support for RPM packaging in earlier versions of NiFi, and with the variety 
> of service initialization approaches across Linux distributions, the install 
> command should be removed. Container packaging is a supported convenience 
> build, and custom distributions outside of the project could maintain 
> convenience binary builds if necessary.





[jira] [Updated] (NIFI-13675) Fix tooltip for Parameter description

2024-08-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13675:
--
Status: Patch Available  (was: Open)

> Fix tooltip for Parameter description
> -
>
> Key: NIFI-13675
> URL: https://issues.apache.org/jira/browse/NIFI-13675
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.27.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.28.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Assigned] (NIFI-13675) Fix tooltip for Parameter description

2024-08-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-13675:
-

Assignee: Pierre Villard

> Fix tooltip for Parameter description
> -
>
> Key: NIFI-13675
> URL: https://issues.apache.org/jira/browse/NIFI-13675
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.27.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.28.0
>
>






[jira] [Created] (NIFI-13675) Fix tooltip for Parameter description

2024-08-23 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-13675:
-

 Summary: Fix tooltip for Parameter description
 Key: NIFI-13675
 URL: https://issues.apache.org/jira/browse/NIFI-13675
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.27.0
Reporter: Pierre Villard
 Fix For: 1.28.0








[jira] [Updated] (NIFI-13671) QuerySalesforce record parsing fails with DateTime types

2024-08-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-13671:
--
Fix Version/s: 2.0.0-M5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> QuerySalesforce record parsing fails with DateTime types
> 
>
> Key: NIFI-13671
> URL: https://issues.apache.org/jira/browse/NIFI-13671
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M2, 2.0.0-M3, 2.0.0-M4
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 2.0.0-M5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When reading records with QuerySalesforceObject while using Property Based 
> mode,
> it fails to read DateTime objects due to new changes in the RecordReader.
>  
> {code:java}
> java.lang.RuntimeException: 
> org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
> JSON object from input but failed to convert into a Record object with the 
> given schema
>     at 
> org.apache.nifi.processors.salesforce.QuerySalesforceObject.lambda$processRecordsCallback$2(QuerySalesforceObject.java:442)
>     at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:3121)
>     at 
> org.apache.nifi.processors.salesforce.QuerySalesforceObject.processQuery(QuerySalesforceObject.java:398)
>     at 
> org.apache.nifi.processors.salesforce.QuerySalesforceObject.onTrigger(QuerySalesforceObject.java:357)
>     at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
>     at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
>     at 
> org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:59)
>     at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>     at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
>     at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: org.apache.nifi.serialization.MalformedRecordException: 
> Successfully parsed a JSON object from input but failed to convert into a 
> Record object with the given schema
>     at 
> org.apache.nifi.json.AbstractJsonRowRecordReader.nextRecord(AbstractJsonRowRecordReader.java:182)
>     at 
> org.apache.nifi.serialization.RecordReader.nextRecord(RecordReader.java:50)
>     at 
> org.apache.nifi.processors.salesforce.QuerySalesforceObject.handleRecordSet(QuerySalesforceObject.java:458)
>     at 
> org.apache.nifi.processors.salesforce.QuerySalesforceObject.lambda$processRecordsCallback$2(QuerySalesforceObject.java:434)
>     ... 14 common frames omitted
> Caused by: 
> org.apache.nifi.serialization.record.field.FieldConversionException: 
> Conversion failed for [2024-08-20T18:48:06.000+] named [CreatedDate] to 
> [java.time.LocalDateTime] [java.lang.NumberFormatException] For input string: 
> "2024-08-20T18:48:06.000+"
>     at 
> org.apache.nifi.serialization.record.field.ObjectLocalDateTimeFieldConverter.tryParseAsNumber(ObjectLocalDateTimeFieldConverter.java:97)
>     at 
> org.apache.nifi.serialization.record.field.ObjectLocalDateTimeFieldConverter.convertField(ObjectLocalDateTimeFieldConverter.java:75)
>     at 
> org.apache.nifi.serialization.record.field.ObjectTimestampFieldConverter.convertField(ObjectTimestampFieldConverter.java:42)
>     at 
> org.apache.nifi.serialization.record.field.ObjectTimestampFieldConverter.convertField(ObjectTimestampFieldConverter.java:28)
>     at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:232)
>     at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:179)
>     at 
> org.apache.nifi.json.JsonTreeRowRecordReader.convertField(JsonTreeRowRecordReader.java:220)
>     at 
> org.apache.nifi.json.JsonTreeRowRecordReader.convertJsonNodeToRecord(JsonTreeRowRecordReader.java:183)
>     at 
> org.apache.nifi.json.JsonTreeRowRecordReader.convertJsonNodeToRecord(JsonTreeRowRecordReader.java:129)
>     at 
> org.apache.nifi.json.JsonTreeRowRecordReader.convertJsonNodeToRecord(JsonTreeRowRecordReader.java:120)
>     at 
> org.apache.nifi.json.AbstractJsonRowRe
