[PR] Nifi 12390 more jvm metrics [nifi]

2023-11-17 Thread via GitHub


esecules opened a new pull request, #8050:
URL: https://github.com/apache/nifi/pull/8050

   # Summary
   
   [NIFI-12390](https://issues.apache.org/jira/browse/NIFI-12390)
   
   - Add init, committed, max, and used gauges for the non-heap
   - Add usage %, max, used, committed, init, and used-after-gc gauges for each 
GC-specific memory pool
   - Add count, memory used, and capacity for each buffer pool
   
   JVM Metrics:
   ```
   # HELP nifi_jvm_non_heap_used NiFi JVM non-heap used
   # TYPE nifi_jvm_non_heap_used gauge
   nifi_jvm_non_heap_used{instance="192.168.1.65",} 2.3022436E8
   # HELP nifi_jvm_daemon_thread_count NiFi JVM daemon thread count
   # TYPE nifi_jvm_daemon_thread_count gauge
   nifi_jvm_daemon_thread_count{instance="192.168.1.65",} 23.0
   # HELP nifi_jvm_heap_usage NiFi JVM heap usage
   # TYPE nifi_jvm_heap_usage gauge
   nifi_jvm_heap_usage{instance="192.168.1.65",} 0.7544599175453186
   # HELP nifi_jvm_heap_committed NiFi JVM heap committed
   # TYPE nifi_jvm_heap_committed gauge
   nifi_jvm_heap_committed{instance="192.168.1.65",} 1.073741824E9
   # HELP nifi_jvm_memory_pool_used_after_gc NiFi JVM memory in bytes used after gc per memory pool
   # TYPE nifi_jvm_memory_pool_used_after_gc gauge
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="G1-Survivor-Space",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="CodeHeap-non-nmethods",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="CodeHeap-non-profiled-nmethods",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="G1-Eden-Space",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="CodeHeap-profiled-nmethods",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="G1-Old-Gen",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="Metaspace",} -1.0
   nifi_jvm_memory_pool_used_after_gc{instance="192.168.1.65",pool="Compressed-Class-Space",} -1.0
   # HELP nifi_jvm_memory_pool_usage NiFi JVM memory percent used of each memory pool
   # TYPE nifi_jvm_memory_pool_usage gauge
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="G1-Survivor-Space",} 1.0
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="CodeHeap-'profiled-nmethods'",} 0.2484836871396674
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="G1-Eden-Space",} 0.9366
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="G1-Old-Gen",} 0.1616864800453186
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="CodeHeap-'non-nmethods'",} 0.32063025946704066
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="CodeHeap-'non-profiled-nmethods'",} 0.10038074449295165
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="Metaspace",} 0.9883397638286731
   nifi_jvm_memory_pool_usage{instance="192.168.1.65",pool="Compressed-Class-Space",} 0.017947666347026825
   # HELP nifi_jvm_heap_non_usage NiFi JVM heap non usage
   # TYPE nifi_jvm_heap_non_usage gauge
   nifi_jvm_heap_non_usage{instance="192.168.1.65",} 0.9851219703558785
   # HELP nifi_jvm_thread_count NiFi JVM thread count
   # TYPE nifi_jvm_thread_count gauge
   nifi_jvm_thread_count{instance="192.168.1.65",} 80.0
   # HELP nifi_jvm_gc_time NiFi JVM GC time in milliseconds
   # TYPE nifi_jvm_gc_time gauge
   nifi_jvm_gc_time{instance="192.168.1.65",gc_name="G1-Young-Generation",} 176.0
   nifi_jvm_gc_time{instance="192.168.1.65",gc_name="G1-Old-Generation",} 0.0
   nifi_jvm_gc_time{instance="192.168.1.65",gc_name="G1-Concurrent-GC",} 18.0
   # HELP nifi_jvm_non_heap_committed NiFi JVM non-heap committed
   # TYPE nifi_jvm_non_heap_committed gauge
   nifi_jvm_non_heap_committed{instance="192.168.1.65",} 2.33701376E8
   # HELP nifi_jvm_memory_pool_init NiFi JVM memory initial size in bytes per memory pool
   # TYPE nifi_jvm_memory_pool_init gauge
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="G1-Survivor-Space",} 0.0
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="CodeHeap-non-nmethods",} 2555904.0
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="CodeHeap-non-profiled-nmethods",} 2555904.0
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="G1-Eden-Space",} 5.3477376E7
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="CodeHeap-profiled-nmethods",} 2555904.0
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="G1-Old-Gen",} 1.020264448E9
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="Metaspace",} 0.0
   nifi_jvm_memory_pool_init{instance="192.168.1.65",pool="Compressed-Class-Space",} 0.0
   # HELP nifi_jvm_memory_pool_max NiFi JVM memory max bytes allowed per memory pool
   # TYPE nifi_jvm_memory_pool_max gauge
   nifi_jvm_memory_pool_max{instance="192.168.1.65",pool="G1-Survivor-Space",} -1.0
   ```
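   
   For reference, the raw values behind the new non-heap, per-memory-pool, and buffer pool gauges come from the standard `java.lang.management` MX beans. The snippet below is only an illustrative sketch of reading those values directly (it is not the reporting task's code); `getMax()` returning `-1` and `getCollectionUsage()` returning `null` are presumably what show up as the `-1.0` samples above.
   
   ```java
   import java.lang.management.BufferPoolMXBean;
   import java.lang.management.ManagementFactory;
   import java.lang.management.MemoryPoolMXBean;
   import java.lang.management.MemoryUsage;
   
   public class JvmMetricsSample {
       public static void main(String[] args) {
           // Non-heap init / committed / max / used, the values behind the non-heap gauges.
           final MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
           System.out.printf("non-heap init=%d committed=%d max=%d used=%d%n",
                   nonHeap.getInit(), nonHeap.getCommitted(), nonHeap.getMax(), nonHeap.getUsed());
   
           // Per memory pool: used, max, and used-after-gc.
           for (final MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
               final MemoryUsage usage = pool.getUsage();
               // getMax() returns -1 when the pool has no defined maximum;
               // getCollectionUsage() may be null when used-after-gc is not available.
               final MemoryUsage afterGc = pool.getCollectionUsage();
               System.out.printf("pool %s used=%d max=%d usedAfterGc=%d%n",
                       pool.getName(), usage.getUsed(), usage.getMax(),
                       afterGc == null ? -1L : afterGc.getUsed());
           }
   
           // Per buffer pool (e.g. "direct", "mapped"): count, memory used, and capacity.
           for (final BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
               System.out.printf("buffer pool %s count=%d used=%d capacity=%d%n",
                       pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
           }
       }
   }
   ```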
   

[jira] [Assigned] (NIFI-12390) Flesh out JVM Metrics exported by PrometheusReportingTask

2023-11-17 Thread Eric Secules (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Secules reassigned NIFI-12390:
---

Assignee: Eric Secules

> Flesh out JVM Metrics exported by PrometheusReportingTask
> -
>
> Key: NIFI-12390
> URL: https://issues.apache.org/jira/browse/NIFI-12390
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Eric Secules
>Assignee: Eric Secules
>Priority: Major
>
> * Add init, committed, max, and used gauges for the non-heap
> * Add usage %, max, used, committed, init, and used-after-gc gauges for each 
> GC-specific memory pool
> * Add count, memory used, and capacity for each buffer pool



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12390) Flesh out JVM Metrics exported by PrometheusReportingTask

2023-11-17 Thread Eric Secules (Jira)
Eric Secules created NIFI-12390:
---

 Summary: Flesh out JVM Metrics exported by PrometheusReportingTask
 Key: NIFI-12390
 URL: https://issues.apache.org/jira/browse/NIFI-12390
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Eric Secules


* Add init, committed, max, and used gauges for the non-heap
* Add usage %, max, used, committed, init, and used-after-gc gauges for each 
GC-specific memory pool
* Add count, memory used, and capacity for each buffer pool





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] WIP: NIFI-12386 Adds processor FilterAttributes [nifi]

2023-11-17 Thread via GitHub


EndzeitBegins opened a new pull request, #8049:
URL: https://github.com/apache/nifi/pull/8049

   > [!IMPORTANT]  
   > **WORK IN PROGRESS / DRAFT** 
   
   # Summary
   
   Adds a new processor `FilterAttributes` to the `nifi-standard-processors` 
bundle.
   See [NIFI-12386](https://issues.apache.org/jira/browse/NIFI-12386).
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [x] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [x] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12386) Add a FilterAttributes processor

2023-11-17 Thread endzeit (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

endzeit updated NIFI-12386:
---
Description: 
Flows in Apache NiFi can get quite sophisticated, consisting of long chains of both {{ProcessGroup}} and {{Processor}} components.
Oftentimes {{Processor}} components, including those in the NiFi standard 
bundle, enrich an incoming {{FlowFile}} with additional FlowFile attributes.
This can lead to a fair amount of different FlowFile attributes accumulating 
over the FlowFile's lifecycle.

In order to prevent subsequent {{ProcessGroup}} / {{Processor}} components from accidentally relying on implementation details of preceding components, a good practice is to:
 # define which FlowFile attributes should exist at selected points in the 
{{Flow}}
 # reduce the attributes of the FlowFile at the selected point to those defined

This can be achieved by using the 
[UpdateAttribute|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-update-attribute-nar/1.23.2/org.apache.nifi.processors.attributes.UpdateAttribute/index.html]
 processor of the standard processor bundle.

However, the {{UpdateAttribute}} processor only allows a regular expression that defines a set of attributes to remove. The practice outlined above instead calls for explicitly stating a set of attributes to keep. One can do so with a regular expression as well, but writing the reverse lookup to achieve this is not the easiest endeavor, to put it mildly.

This issue proposes a new processor {{FilterAttributes}} to be added to the 
library of {{{}nifi-standard-processors{}}}, which can be configured with a set 
of attributes and removes all attributes of an incoming FlowFile other than the 
ones configured.

The processor should
 * have a required, non-blank property "Attributes to keep", which takes a list of attribute names separated by a delimiter, e.g. comma (,).
 ** trailing whitespace around attribute names should be ignored
 ** leading or trailing delimiters should be ignored
 * have a required, non-blank property "Delimiter", which is used to delimit 
the individual attribute names, with a default value of "," (comma).
 * have a single relationship "success" to which all FlowFiles are routed, 
similar to {{UpdateAttribute}}
 * have an 
[InputRequirement|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/InputRequirement.html]
 of 
[INPUT_REQUIRED|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/InputRequirement.Requirement.html]
 * 
[@SupportsBatching|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/SupportsBatching.html]
 * be 
[@SideEffectFree|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/SideEffectFree.html]

Some possible extensions might be:
 * have a required property "Core attributes", with allowable values of "Keep 
UUID only", "Keep all", with a default of "Keep UUID only"
 ** an additional allowable value e.g. "Specify behaviour" may be added, which 
allows for more customization
 * have a required property "Mode", with allowable values of "Retain" and 
"Remove", with a default of "Retain"

  was:
Flows in Apache NiFi can get quite sophisticated, consisting of a long chains 
of both {{ProcessGroup}} and {{Processor}} components.
Oftentimes {{Processor}} components, including those in the NiFi standard 
bundle, enrich an incoming {{FlowFile}} with additional FlowFile attributes.
This can lead to a fair amount of different FlowFile attributes accumulating 
over the FlowFile's lifecycle.

In order to prevent subsequent {{ProcessGroup}} / {{Processor}} components to 
accidentally rely on implementation details of preceding components, a good 
practice is to:
 # define which FlowFile attributes should exist at selected points in the 
{{Flow}}
 # reduce the attributes of the FlowFile at the selected point to those defined

This can be achieved by using the 
[UpdateAttribute|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-update-attribute-nar/1.23.2/org.apache.nifi.processors.attributes.UpdateAttribute/index.html]
 processor of the standard processor bundle.

However, the {{UpdateAttribute}} processor allows only for a regular expression 
to define a set of attributes to remove. Instead, the outlined practice above 
desires to explicitly state a set of attributes to keep. One can do so with a 
regular expression as well, but writing the reverse lookup to achieve this is 
not the easiest endeavor to put it mildly.

This issue proposes a new processor {{FilterAttributes}} to be added to the 
library of {{{}nifi-standard-processors{}}}, which can be configured with a set 
of attributes and removes all attributes of an incoming FlowFile other than the 
ones configured.

The processor should
 * have a required, 

[jira] [Commented] (NIFI-12361) Conduct Apache NiFi 2.0.0-M1 Release

2023-11-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787421#comment-17787421
 ] 

ASF subversion and git services commented on NIFI-12361:


Commit a53b4bffa795d94407a9da697098a83429f1d6df in nifi's branch 
refs/heads/NIFI-12361-RC2 from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a53b4bffa7 ]

NIFI-12361-RC2 prepare for next development iteration


> Conduct Apache NiFi 2.0.0-M1 Release
> 
>
> Key: NIFI-12361
> URL: https://issues.apache.org/jira/browse/NIFI-12361
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12361) Conduct Apache NiFi 2.0.0-M1 Release

2023-11-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787420#comment-17787420
 ] 

ASF subversion and git services commented on NIFI-12361:


Commit 2c2a08ad69038e2f7b1975b288ad070a8ffdb66b in nifi's branch 
refs/heads/NIFI-12361-RC2 from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2c2a08ad69 ]

NIFI-12361-RC2 prepare release nifi-2.0.0-M1-RC2


> Conduct Apache NiFi 2.0.0-M1 Release
> 
>
> Key: NIFI-12361
> URL: https://issues.apache.org/jira/browse/NIFI-12361
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NIFI-12389: Add variance and standard deviation to AttributeRollingWindow [nifi]

2023-11-17 Thread via GitHub


mattyb149 opened a new pull request, #8048:
URL: https://github.com/apache/nifi/pull/8048

   # Summary
   
   [NIFI-12389](https://issues.apache.org/jira/browse/NIFI-12389) This PR uses 
commons-math3 to calculate variance and standard deviation for an attribute in 
AttributeRollingWindow
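   
   For context, here is a minimal standalone sketch of the commons-math3 API referenced above (the PR's actual wiring into the processor is not shown here). `SummaryStatistics` maintains updating, online aggregates, so no window values need to be buffered just to compute variance and standard deviation:
   
   ```java
   import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
   
   public class RollingWindowStatsSample {
       public static void main(String[] args) {
           // Values are fed one at a time, as they would be when processing FlowFiles.
           final SummaryStatistics stats = new SummaryStatistics();
           for (final double value : new double[]{4.0, 7.0, 13.0, 16.0}) {
               stats.addValue(value);
           }
           System.out.println("mean     = " + stats.getMean());
           System.out.println("variance = " + stats.getVariance());
           System.out.println("stddev   = " + stats.getStandardDeviation());
       }
   }
   ```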
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 21
   
   ### Licensing
   
   - [x] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12389) Add variance and standard deviation to AttributeRollingWindow

2023-11-17 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-12389:

Fix Version/s: 1.latest
   2.latest

> Add variance and standard deviation to AttributeRollingWindow
> -
>
> Key: NIFI-12389
> URL: https://issues.apache.org/jira/browse/NIFI-12389
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>
> AttributeRollingWindow currently uses state to calculate metrics like sum, 
> count, and mean (average) for a rolling time window (optionally using 
> sub-windows to use smaller aggregates to estimate metrics for a larger total 
> time window as it stores the window values in state).
> Variance and its positive square root (standard deviation) are other helpful 
> univariate statistics. Using "online" algorithms such as Welford's method 
> adds no extra memory requirement and runs in constant time. Apache Commons 
> Math3 has such an implementation and could be used to add attributes to the 
> FlowFile for variance and standard deviation. These metrics can be used for 
> filtering out noise downstream, for example RouteOnAttribute and/or stateful 
> UpdateAttribute where the current attribute value is more than 2 standard 
> deviations away from the mean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-12389) Add variance and standard deviation to AttributeRollingWindow

2023-11-17 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-12389:
---

Assignee: Matt Burgess

> Add variance and standard deviation to AttributeRollingWindow
> -
>
> Key: NIFI-12389
> URL: https://issues.apache.org/jira/browse/NIFI-12389
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> AttributeRollingWindow currently uses state to calculate metrics like sum, 
> count, and mean (average) for a rolling time window (optionally using 
> sub-windows to use smaller aggregates to estimate metrics for a larger total 
> time window as it stores the window values in state).
> Variance and its positive square root (standard deviation) are other helpful 
> univariate statistics. Using "online" algorithms such as Welford's method 
> adds no extra memory requirement and runs in constant time. Apache Commons 
> Math3 has such an implementation and could be used to add attributes to the 
> FlowFile for variance and standard deviation. These metrics can be used for 
> filtering out noise downstream, for example RouteOnAttribute and/or stateful 
> UpdateAttribute where the current attribute value is more than 2 standard 
> deviations away from the mean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12389) Add variance and standard deviation to AttributeRollingWindow

2023-11-17 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-12389:
---

 Summary: Add variance and standard deviation to 
AttributeRollingWindow
 Key: NIFI-12389
 URL: https://issues.apache.org/jira/browse/NIFI-12389
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


AttributeRollingWindow currently uses state to calculate metrics like sum, 
count, and mean (average) for a rolling time window (optionally using 
sub-windows to use smaller aggregates to estimate metrics for a larger total 
time window as it stores the window values in state).

Variance and its positive square root (standard deviation) are other helpful 
univariate statistics. Using "online" algorithms such as Welford's method adds 
no extra memory requirement and runs in constant time. Apache Commons Math3 has 
such an implementation and could be used to add attributes to the FlowFile for 
variance and standard deviation. These metrics can be used for filtering out 
noise downstream, for example RouteOnAttribute and/or stateful UpdateAttribute 
where the current attribute value is more than 2 standard deviations away from 
the mean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NiFi-12331: add new unified PublishSlack Processor [nifi]

2023-11-17 Thread via GitHub


sporadek opened a new pull request, #8047:
URL: https://github.com/apache/nifi/pull/8047

   # Summary
   
   [NIFI-12331](https://issues.apache.org/jira/browse/NIFI-12331)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [x] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 21
   
   ### Licensing
   
   No new libraries.
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12387) Flow Configuration History can record a Comments change for Controller Service when no changes have been made

2023-11-17 Thread Nissim Shiman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nissim Shiman updated NIFI-12387:
-
Affects Version/s: 1.23.2
   1.16.3

> Flow Configuration History can record a Comments change for Controller 
> Service when no changes have been made
> -
>
> Key: NIFI-12387
> URL: https://issues.apache.org/jira/browse/NIFI-12387
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.16.3, 1.23.2
>Reporter: Nissim Shiman
>Priority: Major
>
> To recreate, create a new controller service, then open the configuration GUI for
> it and click the Apply button (without having made any changes to the default
> settings).
> Flow Configuration History will note that a Configure operation has occurred
> and the Comments have been changed from _No value set_ to an _Empty string set_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12388) Process Group log file suffix can become out of sync with its name when PG was copied from a version controlled PG

2023-11-17 Thread Nissim Shiman (Jira)
Nissim Shiman created NIFI-12388:


 Summary: Process Group log file suffix can become out of sync with 
its name when PG was copied from a version controlled PG
 Key: NIFI-12388
 URL: https://issues.apache.org/jira/browse/NIFI-12388
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Nissim Shiman


To recreate, set a Log File Suffix for a version-controlled PG.
(i.e. right click on the PG -> Configure -> General -> Log File Suffix)

Copy the PG.

Notice the name is the same as the original PG's,
but the Log File Suffix will have the words "Copy of" preceding it.

This does not occur with a copy of a non-version-controlled PG (i.e. "Copy of" is
prepended to both the new PG name and the Log File Suffix, so they stay in sync).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12387) Flow Configuration History can record a Comments change for Controller Service when no changes have been made

2023-11-17 Thread Nissim Shiman (Jira)
Nissim Shiman created NIFI-12387:


 Summary: Flow Configuration History can record a Comments change 
for Controller Service when no changes have been made
 Key: NIFI-12387
 URL: https://issues.apache.org/jira/browse/NIFI-12387
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Nissim Shiman


To recreate, create a new controller service, then open the configuration GUI for it
and click the Apply button (without having made any changes to the default settings).

Flow Configuration History will note that a Configure operation has occurred
and the Comments have been changed from _No value set_ to an _Empty string set_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12386) Add a FilterAttributes processor

2023-11-17 Thread endzeit (Jira)
endzeit created NIFI-12386:
--

 Summary: Add a FilterAttributes processor
 Key: NIFI-12386
 URL: https://issues.apache.org/jira/browse/NIFI-12386
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Reporter: endzeit
Assignee: endzeit


Flows in Apache NiFi can get quite sophisticated, consisting of long chains of both {{ProcessGroup}} and {{Processor}} components.
Oftentimes {{Processor}} components, including those in the NiFi standard 
bundle, enrich an incoming {{FlowFile}} with additional FlowFile attributes.
This can lead to a fair amount of different FlowFile attributes accumulating 
over the FlowFile's lifecycle.

In order to prevent subsequent {{ProcessGroup}} / {{Processor}} components from accidentally relying on implementation details of preceding components, a good practice is to:
 # define which FlowFile attributes should exist at selected points in the 
{{Flow}}
 # reduce the attributes of the FlowFile at the selected point to those defined

This can be achieved by using the 
[UpdateAttribute|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-update-attribute-nar/1.23.2/org.apache.nifi.processors.attributes.UpdateAttribute/index.html]
 processor of the standard processor bundle.

However, the {{UpdateAttribute}} processor only allows a regular expression that defines a set of attributes to remove. The practice outlined above instead calls for explicitly stating a set of attributes to keep. One can do so with a regular expression as well, but writing the reverse lookup to achieve this is not the easiest endeavor, to put it mildly.

This issue proposes a new processor {{FilterAttributes}} to be added to the 
library of {{{}nifi-standard-processors{}}}, which can be configured with a set 
of attributes and removes all attributes of an incoming FlowFile other than the 
ones configured.

The processor should
 * have a required, non-blank property "Attributes to keep", which takes a list of attribute names separated by a delimiter, e.g. comma (,).
 ** trailing whitespace around attribute names should be ignored
 ** leading or trailing delimiters should be ignored
 * have a required, non-blank property "Delimiter", which is used to delimit 
the individual attribute names, with a default value of "," (comma).
 * have a single relationship "success" to which all FlowFiles are routed, 
similar to {{UpdateAttribute}}
 * have an 
[InputRequirement|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/InputRequirement.html]
 of 
[INPUT_REQUIRED|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/InputRequirement.Requirement.html]
 * 
[@SupportsBatching|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/SupportsBatching.html]
 * be 
[@SideEffectFree|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/annotation/behavior/SideEffectFree.html]

Some possible extensions might be:
 * have a required property "Core attributes", with allowable values of "Keep 
UUID only", "Keep all", with a default of "Keep UUID only"
 ** an additional allowable value e.g. "Specify behaviour" may be added, which 
allows for more customization
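
As an illustration only (not part of this proposal's text), the "keep only the configured attributes" step can be sketched against the existing {{ProcessSession}} API roughly as follows; the class and method names are made up for the example, and the uuid attribute is retained to match the "Keep UUID only" idea above:

{code:java}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.flowfile.attributes.CoreAttributes;
import org.apache.nifi.processor.ProcessSession;

final class AttributeFilterSketch {

    // Removes every attribute of the FlowFile that is not in attributesToKeep.
    static FlowFile keepOnly(final ProcessSession session, final FlowFile flowFile, final Set<String> attributesToKeep) {
        final Set<String> toRemove = new HashSet<>();
        for (final Map.Entry<String, String> attribute : flowFile.getAttributes().entrySet()) {
            final String name = attribute.getKey();
            if (!attributesToKeep.contains(name) && !CoreAttributes.UUID.key().equals(name)) {
                toRemove.add(name);
            }
        }
        return session.removeAllAttributes(flowFile, toRemove);
    }
}
{code}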



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on code in PR #8042:
URL: https://github.com/apache/nifi/pull/8042#discussion_r1397676848


##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/src/main/java/org/apache/nifi/db/schemaregistry/DatabaseSchemaRegistry.java:
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.db.schemaregistry;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaField;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.schemaregistry.services.SchemaRegistry;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.SchemaIdentifier;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+@Tags({"schema", "registry", "database", "table"})
+@CapabilityDescription("Provides a service for generating a record schema from a database table definition. The service is configured "
+        + "to use a table name and a database connection fetches the table metadata (i.e. table definition) such as column names, data types, "
+        + "nullability, etc.")
+public class DatabaseSchemaRegistry extends AbstractControllerService implements SchemaRegistry {
+
+    private static final Set<SchemaField> schemaFields = EnumSet.of(SchemaField.SCHEMA_NAME);
+
+    static final PropertyDescriptor DBCP_SERVICE = new PropertyDescriptor.Builder()
+            .name("dbcp-service")

Review Comment:
   Yes, it is newer, part of the changes for migrating properties in NiFi 
2.0. I know there have been past discussions about internationalization, but 
unfortunately it appears there was never any significant progress in that 
direction. For now, the best approach seems to be using the same name in both 
places, and if there is ever momentum around internationalization, then we 
could revisit a migration strategy. Unfortunately the disconnect between 
property and display names right now makes it more difficult to translate from 
a UI to programmatic approach, so keeping the same names will help under the 
current circumstances.
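   
   As a sketch of the convention being discussed (the description text here is illustrative, not the PR's final descriptor):
   
   ```java
   import org.apache.nifi.components.PropertyDescriptor;
   import org.apache.nifi.dbcp.DBCPService;
   
   class DescriptorNamingSketch {
       // The same human-readable value serves as both the name and the display name,
       // so no separate displayName() call is needed.
       static final PropertyDescriptor DBCP_SERVICE = new PropertyDescriptor.Builder()
               .name("Database Connection Pooling Service")
               .description("The Controller Service used to obtain a connection to the database")
               .required(true)
               .identifiesControllerService(DBCPService.class)
               .build();
   }
   ```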



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


mattyb149 commented on code in PR #8042:
URL: https://github.com/apache/nifi/pull/8042#discussion_r1397661227


##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/src/main/java/org/apache/nifi/db/schemaregistry/DatabaseSchemaRegistry.java:
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.db.schemaregistry;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaField;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.schemaregistry.services.SchemaRegistry;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.SchemaIdentifier;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+@Tags({"schema", "registry", "database", "table"})
+@CapabilityDescription("Provides a service for generating a record schema from a database table definition. The service is configured "
+        + "to use a table name and a database connection fetches the table metadata (i.e. table definition) such as column names, data types, "
+        + "nullability, etc.")
+public class DatabaseSchemaRegistry extends AbstractControllerService implements SchemaRegistry {
+
+    private static final Set<SchemaField> schemaFields = EnumSet.of(SchemaField.SCHEMA_NAME);
+
+    static final PropertyDescriptor DBCP_SERVICE = new PropertyDescriptor.Builder()
+            .name("dbcp-service")

Review Comment:
   Is that the new convention? We used to do the opposite when requiring both 
`name()` and `displayName()` to be set, `name` was a more machine-friendly name 
for things like internationalization and `displayName` was the display name in 
English. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


mattyb149 commented on code in PR #8042:
URL: https://github.com/apache/nifi/pull/8042#discussion_r1397658749


##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/pom.xml:
##
@@ -0,0 +1,76 @@
+
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
https://maven.apache.org/xsd/maven-4.0.0.xsd;>
+4.0.0
+
+
+org.apache.nifi
+nifi-dbcp-service-bundle
+2.0.0-SNAPSHOT
+
+
+nifi-db-schema-registry-service
+jar
+
+
+org.apache.nifi
+nifi-schema-registry-service-api
+
+
+org.apache.nifi
+nifi-dbcp-service-api
+
+
+org.apache.nifi
+nifi-dbcp-base
+2.0.0-SNAPSHOT
+
+
+org.apache.nifi
+nifi-kerberos-credentials-service-api
+2.0.0-SNAPSHOT
+provided
+
+
+org.apache.nifi
+nifi-kerberos-user-service-api
+

Review Comment:
   I got an error when running tests, but it might be because of including 
nifi-dbcp-base. If I can refactor out the common properties I probably don't 
need these either.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


mattyb149 commented on code in PR #8042:
URL: https://github.com/apache/nifi/pull/8042#discussion_r1397657327


##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/pom.xml:
##
@@ -0,0 +1,76 @@
+
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
https://maven.apache.org/xsd/maven-4.0.0.xsd;>
+4.0.0
+
+
+org.apache.nifi
+nifi-dbcp-service-bundle
+2.0.0-SNAPSHOT
+
+
+nifi-db-schema-registry-service
+jar
+
+
+org.apache.nifi
+nifi-schema-registry-service-api
+
+
+org.apache.nifi
+nifi-dbcp-service-api
+
+
+org.apache.nifi
+nifi-dbcp-base
+2.0.0-SNAPSHOT
+

Review Comment:
   It's only for the common properties. If there's a better spot to move those 
to I can refactor it as part of this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on code in PR #8042:
URL: https://github.com/apache/nifi/pull/8042#discussion_r1397647623


##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/src/main/java/org/apache/nifi/db/schemaregistry/DatabaseSchemaRegistry.java:
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.db.schemaregistry;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaField;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.schemaregistry.services.SchemaRegistry;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.SchemaIdentifier;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+@Tags({"schema", "registry", "database", "table"})
+@CapabilityDescription("Provides a service for generating a record schema from a database table definition. The service is configured "
+        + "to use a table name and a database connection fetches the table metadata (i.e. table definition) such as column names, data types, "
+        + "nullability, etc.")
+public class DatabaseSchemaRegistry extends AbstractControllerService implements SchemaRegistry {
+
+    private static final Set<SchemaField> schemaFields = EnumSet.of(SchemaField.SCHEMA_NAME);
+
+    static final PropertyDescriptor DBCP_SERVICE = new PropertyDescriptor.Builder()
+            .name("dbcp-service")

Review Comment:
   As a new Service, this is a good opportunity to use the same value for the 
name and display name.
   ```suggestion
   .name("Database Connection Pooling Service")
   ```



##
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-db-schema-registry-service/src/main/java/org/apache/nifi/db/schemaregistry/DatabaseSchemaRegistry.java:
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.db.schemaregistry;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import 

Re: [PR] NIFI-8932: Add capability to skip first N rows in CSVReader [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on PR #7952:
URL: https://github.com/apache/nifi/pull/7952#issuecomment-1816807256

   > Reopening this as I'm actively working it. I realized I hadn't passed in 
the `Character Set` property value into the InputStreamReader so I'm trying to 
fix the code/tests using that first. If not (or if you still object to the 
Reader at all) I can try the PushbackInputStream. The only caveat there is that 
the record separator can be an arbitrary string so I need to create the 
pushback buffer of that size ("n") and push back only "n-1" bytes. I figured 
the reader would do something similar and it makes the code easier to read, so 
if it works with the Reader and the correct encoding I'd like to go with that.
   
   That's a good point about the arbitrary string separator. The Reader will do 
character decoding based on the configured Character Set, so the best approach 
probably depends on how to evaluate the string separator.
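   
   For illustration only (this is not the PR's code), skipping the first N rows on a Reader that was built with the configured Character Set could look roughly like the sketch below. The matching is deliberately simplified and assumes typical separators such as `\n` or `\r\n`; names are made up for the example:
   
   ```java
   import java.io.IOException;
   import java.io.Reader;
   
   final class LeadingRowSkipper {
   
       // Consumes characters until rowsToSkip occurrences of the record separator have been
       // read, leaving the reader positioned at the first row that should be parsed.
       // Character decoding is handled by whoever built the Reader, e.g. an
       // InputStreamReader constructed with the configured Character Set.
       static void skipLeadingRows(final Reader reader, final String recordSeparator, final int rowsToSkip) throws IOException {
           int skipped = 0;
           int matched = 0;
           int c;
           while (skipped < rowsToSkip && (c = reader.read()) != -1) {
               if (c == recordSeparator.charAt(matched)) {
                   matched++;
                   if (matched == recordSeparator.length()) {
                       skipped++;
                       matched = 0;
                   }
               } else {
                   // Simplified restart: check whether this character begins a new separator.
                   matched = (c == recordSeparator.charAt(0)) ? 1 : 0;
               }
           }
       }
   }
   ```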


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-8932: Add capability to skip first N rows in CSVReader [nifi]

2023-11-17 Thread via GitHub


mattyb149 commented on PR #7952:
URL: https://github.com/apache/nifi/pull/7952#issuecomment-1816797181

   Reopening this as I'm actively working it. I realized I hadn't passed in the 
`Character Set` property value into the InputStreamReader so I'm trying to fix 
the code/tests using that first. If not (or if you still object to the Reader 
at all) I can try the PushbackInputStream. The only caveat there is that the 
record separator can be an arbitrary string so I need to create the pushback 
buffer of that size ("n") and push back only "n-1" bytes. I figured the reader 
would do something similar and it makes the code easier to read, so if it works 
with the Reader and the correct encoding I'd like to go with that.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787300#comment-17787300
 ] 

David Handermann commented on NIFI-12383:
-

Thanks for the confirmation!

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.0.0-M1, 1.25.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header
> comes with all letters lowercase (which is valid, because HTTP headers are case
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers).
> However, OkHttpReplicationClient assumes that the header is always
> "Accept-Encoding"
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>   and
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression,
> but when the other node gets the request, Jetty reads the original
> "accept-encoding" header and tries to decompress the input stream, which leads
> to the above error.
> We need to add a few lines of code to the client so that it reads the header
> case-insensitively.
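
A hypothetical helper (not the actual change in OkHttpReplicationClient) showing the kind of case-insensitive header lookup the description calls for:

{code:java}
import java.util.List;
import java.util.Map;

final class HeaderLookup {

    // Returns the values of a header regardless of the casing used by the client,
    // e.g. matching both "Accept-Encoding" and "accept-encoding".
    static List<String> getHeaderIgnoreCase(final Map<String, List<String>> headers, final String name) {
        return headers.entrySet().stream()
                .filter(entry -> entry.getKey() != null && entry.getKey().equalsIgnoreCase(name))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(List.of());
    }
}
{code}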



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-11992: Processor and sink service for filing tickets in Zendesk [nifi]

2023-11-17 Thread via GitHub


turcsanyip commented on code in PR #7644:
URL: https://github.com/apache/nifi/pull/7644#discussion_r1396967921


##
nifi-nar-bundles/nifi-zendesk-bundle/nifi-zendesk-nar/src/main/resources/META-INF/NOTICE:
##
@@ -4,36 +4,12 @@ Copyright 2015-2022 The Apache Software Foundation
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
 
-===
-Apache Software License v2
-===
+
+BSD License
+
 
-The following binary components are provided under the Apache Software License 
v2
-
-  (ASLv2) Apache Commons IO
-The following NOTICE information applies:
-  Apache Commons IO
-  Copyright 2002-2017 The Apache Software Foundation
-
-  (ASLv2) Jackson JSON processor
+  (BSD) ANTLR 3 Runtime (org.antlr:antlr-runtime:3.5.3)

Review Comment:
   Based on other nar modules using Antlr, the license entry needs to be added 
in the `LICENSE` file instead of `NOTICE`.
   Please use 
[nifi-framework-nar](https://github.com/apache/nifi/blob/63364687d8bbe2af2d0031f745ca3c61144f8b4f/nifi-nar-bundles/nifi-framework-bundle/nifi-framework-nar/src/main/resources/META-INF/LICENSE#L204-L238)
 as a template.



##
nifi-nar-bundles/nifi-zendesk-bundle/nifi-zendesk-services/src/main/java/org/apache/nifi/services/zendesk/ZendeskRecordSink.java:
##
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.services.zendesk;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import com.github.benmanes.caffeine.cache.Cache;
+import com.github.benmanes.caffeine.cache.Caffeine;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.common.zendesk.ZendeskAuthenticationContext;
+import org.apache.nifi.common.zendesk.ZendeskAuthenticationType;
+import org.apache.nifi.common.zendesk.ZendeskClient;
+import 
org.apache.nifi.common.zendesk.validation.JsonPointerPropertyNameValidator;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.sink.RecordSinkService;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.web.client.api.HttpResponseEntity;
+import org.apache.nifi.web.client.api.HttpResponseStatus;
+import org.apache.nifi.web.client.api.HttpUriBuilder;
+import org.apache.nifi.web.client.provider.api.WebClientServiceProvider;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import static java.lang.String.format;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.WEB_CLIENT_SERVICE_PROVIDER;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.ZENDESK_AUTHENTICATION_CREDENTIAL;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.ZENDESK_AUTHENTICATION_TYPE;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.ZENDESK_CREATE_TICKETS_RESOURCE;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.ZENDESK_CREATE_TICKET_RESOURCE;
+import static org.apache.nifi.common.zendesk.ZendeskProperties.ZENDESK_SUBDOMAIN;
+import 

[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787276#comment-17787276
 ] 

Zoltán Kornél Török commented on NIFI-12383:


Here is the content of nifi.properties (omitting some sensitive values):
{code:java}
nifi.administrative.yield.duration=30 sec
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
nifi.analytics.predict.enabled=true
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.authorizer.configuration.file=/var/run/../authorizers.xml
nifi.bored.yield.duration=10 millis
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.candidates=3
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.is.node=true
nifi.cluster.load.balance.comms.timeout=30 sec
nifi.cluster.load.balance.connections.per.node=1
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.port=6342
nifi.cluster.node.address=*
nifi.cluster.node.connection.timeout=30 sec
nifi.cluster.node.event.history.size=25
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.protocol.port=9088
nifi.cluster.node.read.timeout=30 sec
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=true
nifi.components.status.repository.buffer.size=1440
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.snapshot.frequency=1 min
nifi.content.claim.max.appendable.size=50 KB
nifi.content.repository.always.sync=false
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=30 days
nifi.content.repository.archive.max.usage.percentage=70%
nifi.content.repository.directory.default=*
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.viewer.url=../nifi-content-viewer/
nifi.database.directory==*
nifi.documentation.working.directory==*
nifi.flow.analysis.background.task.schedule=5 mins
nifi.flow.configuration.archive.dir==*
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.file=flow.json.gz
nifi.flow.configuration.json.file=flow.json.gz
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.directory==flowfile-repo
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowservice.writedelay.interval=500 ms
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
nifi.initial.admin.identity=
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.keytab.location==b
nifi.kerberos.service.principal=n=K
nifi.login.identity.provider.configuration.file==/login-identity-providers.xml
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.buffer.size=10
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.concurrent.merge.threads=2
nifi.provenance.repository.directory.default==
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.provider.password=
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.index.shard.size=4 GB
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.indexed.attributes=
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.max.storage.size=150 GB
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.rollover.size=1 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.python.extensions.source.directory.default==
nifi.python.framework.source.directory==
nifi.python.logs.directory==
nifi.python.max.processes=100
nifi.python.max.processes.per.extension.type=10
nifi.python.working.directory==
nifi.queue.backpressure.count=1
nifi.queue.backpressure.size=1 GB
nifi.queue.swap.threshold=2
nifi.remote.contents.cache.expiration=30 secs
nifi.remote.input.host==
nifi.remote.input.http.enabled=true

[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787274#comment-17787274
 ] 

Zoltán Kornél Török commented on NIFI-12383:


Oh, I see. It was a NiFi 2.0 cluster that I was playing with when this error occurred.

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.0.0-M1, 1.25.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header 
> comes with all letters lowercase (which is valid, because HTTP headers are case 
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>  and 
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression, 
> but when the other node gets the request, Jetty reads the original 
> "accept-encoding" header and tries to decompress the input stream, which leads 
> to the above error.
> We need to add a few lines of code to the client to read the header 
> case-insensitively.
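For illustration only, here is a minimal sketch of reading a header value case-insensitively from a plain header map. The class and method names are assumptions for this example, not the actual NIFI-12383 patch:
{code:java}
import java.util.Map;

final class HeaderLookup {

    private HeaderLookup() {
    }

    // Returns the value of the named header, ignoring case, or null when the header is absent
    static String getHeaderIgnoreCase(final Map<String, String> headers, final String headerName) {
        return headers.entrySet().stream()
                .filter(entry -> entry.getKey().equalsIgnoreCase(headerName))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(null);
    }
}
{code}
With that kind of lookup, both "Accept-Encoding" and "accept-encoding" resolve to the same value.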



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12384 Upgrade Registry to Spring Framework 6 [nifi]

2023-11-17 Thread via GitHub


joewitt commented on PR #8044:
URL: https://github.com/apache/nifi/pull/8044#issuecomment-1816604917

   Full clean build with contrib all tests AND integration tests checks out.
   
   However, when attempting to connect NiFi to the Registry over a plain unsecured 
connection, I attempt to version control a process group and receive the following 
in nifi-user.log:
   
   `2023-11-17 08:10:24,371 ERROR [NiFi Web Server-55] 
o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
java.lang.RuntimeException: java.lang.ClassNotFoundException: Provider for 
jakarta.ws.rs.client.ClientBuilder cannot be found. Returning Internal Server Error response.
   java.lang.RuntimeException: java.lang.ClassNotFoundException: Provider for 
jakarta.ws.rs.client.ClientBuilder cannot be found
   at 
jakarta.ws.rs.client.ClientBuilder.newBuilder(ClientBuilder.java:75)
   at 
org.apache.nifi.registry.client.impl.JerseyNiFiRegistryClient.<init>(JerseyNiFiRegistryClient.java:100)
   at 
org.apache.nifi.registry.client.impl.JerseyNiFiRegistryClient$Builder.build(JerseyNiFiRegistryClient.java:275)
   at 
org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient.getRegistryClient(NifiRegistryFlowRegistryClient.java:90)`
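   As background on that error (a hedged sketch, not the Registry client code): `jakarta.ws.rs.client.ClientBuilder.newBuilder()` discovers a JAX-RS implementation such as Jersey from the classpath and fails with this kind of `RuntimeException` when no provider is present. A minimal, assumed reproduction:

```java
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;

public class ClientBuilderCheck {

    public static void main(String[] args) {
        // Without a JAX-RS client implementation (for example Jersey) on the classpath,
        // newBuilder() throws "Provider for jakarta.ws.rs.client.ClientBuilder cannot be found"
        Client client = ClientBuilder.newBuilder().build();
        System.out.println("JAX-RS client provider: " + client.getClass().getName());
    }
}
```

   That error usually points at the JAX-RS client implementation not being on the runtime classpath.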


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2263 Fix debug build [nifi-minifi-cpp]

2023-11-17 Thread via GitHub


lordgamez commented on code in PR #1696:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1696#discussion_r1397451868


##
.github/workflows/ci.yml:
##
@@ -111,7 +111,7 @@ jobs:
   - name: build
 run: |
   for /f "usebackq delims=" %%i in (`vswhere.exe -latest -property 
installationPath`) do if exist "%%i\Common7\Tools\vsdevcmd.bat" call 
"%%i\Common7\Tools\vsdevcmd.bat" -arch=x64 -host_arch=x64
-  win_build_vs.bat ..\b /64 /CI /S /A /PDH /SPLUNK /GCP /ELASTIC /K /L 
/R /Z /N /RO /PR /PYTHON_SCRIPTING /LUA_SCRIPTING /MQTT /SCCACHE /NINJA
+  win_build_vs.bat ..\b /64 /CI /PDH /R /N /RO /SCCACHE /NINJA

Review Comment:
   Also, it seems that the ELASTIC and GCP extension defaults were missing, so I 
added them in 26d8bafcda3458c8166e269dd7ccacaee506f142.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on PR #8042:
URL: https://github.com/apache/nifi/pull/8042#issuecomment-1816579844

   Thanks for the additional input @pvillard31, that is helpful, and makes more 
sense now.
   
   I realize that I was not evaluating all the implementation details. Now I 
see that the implementation uses the Database Metadata results, and that is 
what defines the information. Sorry for missing that detail earlier @mattyb149.
   
   With that in mind, I can now see how this makes more sense as a way to use 
database table metadata for the schema definition. Perhaps including `Metadata` 
in the Service class name would also help describe how this works.
   
   With that background, are there any concerns about JDBC driver support 
across vendors? I would expect that more popular database vendors should work 
with this approach, but some may not, so that may be worth highlighting under 
additional details.
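
   For anyone unfamiliar with the approach under discussion, a minimal sketch of deriving field information from JDBC database metadata (an illustration of the standard `DatabaseMetaData` API only, not this PR's implementation; the connection URL and table name below are placeholders):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class TableMetadataExample {

    public static void main(String[] args) throws Exception {
        // Placeholder in-memory H2 URL; any JDBC driver on the classpath would do
        try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:example")) {
            DatabaseMetaData metaData = connection.getMetaData();
            // Column metadata for a table can serve as the basis of a record schema
            try (ResultSet columns = metaData.getColumns(null, null, "MY_TABLE", null)) {
                while (columns.next()) {
                    String columnName = columns.getString("COLUMN_NAME");
                    int jdbcType = columns.getInt("DATA_TYPE");
                    boolean nullable = columns.getInt("NULLABLE") == DatabaseMetaData.columnNullable;
                    System.out.printf("%s type=%d nullable=%s%n", columnName, jdbcType, nullable);
                }
            }
        }
    }
}
```

   Vendor differences mostly surface in how drivers report types and schema/catalog names through `getColumns`, which is the compatibility question raised above.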


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2263 Fix debug build [nifi-minifi-cpp]

2023-11-17 Thread via GitHub


lordgamez commented on code in PR #1696:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1696#discussion_r1397441922


##
.github/workflows/ci.yml:
##
@@ -111,7 +111,7 @@ jobs:
   - name: build
 run: |
   for /f "usebackq delims=" %%i in (`vswhere.exe -latest -property 
installationPath`) do if exist "%%i\Common7\Tools\vsdevcmd.bat" call 
"%%i\Common7\Tools\vsdevcmd.bat" -arch=x64 -host_arch=x64
-  win_build_vs.bat ..\b /64 /CI /S /A /PDH /SPLUNK /GCP /ELASTIC /K /L 
/R /Z /N /RO /PR /PYTHON_SCRIPTING /LUA_SCRIPTING /MQTT /SCCACHE /NINJA
+  win_build_vs.bat ..\b /64 /CI /PDH /R /N /RO /SCCACHE /NINJA

Review Comment:
   Yes, the removed options do not exist anymore; those features are enabled by default 
and there is a `NO_*` option for each, so the flags are no longer needed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787241#comment-17787241
 ] 

David Handermann commented on NIFI-12383:
-

Thanks for the reply [~taz1988].

The {{h2}} protocol is now the default on the main branch, but it was not the 
default for NiFi 1, so that is why I asked.

The change itself is simple enough, so it is not so much of a concern, but it 
would be useful to know the NiFi version and configured property value if you 
have that information available for reference.

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header 
> comes with all letters lowercase (which is valid, because HTTP headers are case 
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>  and 
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression, 
> but when the other node gets the request, Jetty reads the original 
> "accept-encoding" header and tries to decompress the input stream, which leads 
> to the above error.
> We need to add a few lines of code to the client to read the header 
> case-insensitively.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-12383.
-
Fix Version/s: 2.0.0-M1
   1.25.0
   (was: 2.latest)
   Resolution: Fixed

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.0.0-M1, 1.25.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header 
> comes with all letters lowercase (which is valid, because HTTP headers are case 
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>  and 
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression, 
> but when the other node gets the request, Jetty reads the original 
> "accept-encoding" header and tries to decompress the input stream, which leads 
> to the above error.
> We need to add a few lines of code to the client to read the header 
> case-insensitively.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


pvillard31 commented on PR #8042:
URL: https://github.com/apache/nifi/pull/8042#issuecomment-1816560628

   I'm jumping in the conversation here but I don't think this is a very 
specific use case. The idea is that, in many cases, data would be sent into a 
database by NiFi and some users may not want to deal with schemas in NiFi or 
introduce an external Schema Registry. The idea here is to be able to retrieve 
the schema of the destination table in the NiFi flow so that it can be 
leveraged instead of the Infer Schema option which comes with the limitations 
and risks that we know.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12372 MiNiFi C2 Encrypt Flow Configuration Properties when Transferring [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on code in PR #8028:
URL: https://github.com/apache/nifi/pull/8028#discussion_r1397416896


##
minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/test/java/org/apache/nifi/minifi/c2/command/FlowPropertyEncryptorTest.java:
##
@@ -0,0 +1,329 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.minifi.c2.command;
+
+import static java.util.Map.entry;
+import static java.util.UUID.randomUUID;
+import static java.util.stream.Collectors.toMap;
+import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static 
org.apache.nifi.controller.serialization.FlowSerializer.ENC_PREFIX;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.stream.Stream;
+import org.apache.nifi.c2.protocol.component.api.Bundle;
+import org.apache.nifi.c2.protocol.component.api.ComponentManifest;
+import org.apache.nifi.c2.protocol.component.api.ControllerServiceDefinition;
+import org.apache.nifi.c2.protocol.component.api.ProcessorDefinition;
+import org.apache.nifi.c2.protocol.component.api.PropertyDescriptor;
+import org.apache.nifi.c2.protocol.component.api.RuntimeManifest;
+import org.apache.nifi.controller.flow.VersionedDataflow;
+import org.apache.nifi.encrypt.PropertyEncryptor;
+import org.apache.nifi.flow.VersionedConfigurableExtension;
+import org.apache.nifi.flow.VersionedControllerService;
+import org.apache.nifi.flow.VersionedParameter;
+import org.apache.nifi.flow.VersionedParameterContext;
+import org.apache.nifi.flow.VersionedProcessGroup;
+import org.apache.nifi.flow.VersionedProcessor;
+import org.apache.nifi.flow.VersionedPropertyDescriptor;
+import org.apache.nifi.minifi.commons.service.FlowSerDeService;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.Mock;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+@ExtendWith(MockitoExtension.class)
+public class FlowPropertyEncryptorTest {

Review Comment:
   This class should be renamed to `StandardFlowPropertyEncryptorTest` to match 
the new implementation class name.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12382: Add DatabaseSchemaRegistry service [nifi]

2023-11-17 Thread via GitHub


exceptionfactory commented on PR #8042:
URL: https://github.com/apache/nifi/pull/8042#issuecomment-1816528314

   Thanks for the reply @mattyb149. Renaming the class could be helpful.
   
   Another potential concern is using the schema name as the table name. Aside 
from schema naming conventions potentially conflicting with database table 
naming requirements, this would also require a table for every schema, which 
seems like it could be difficult to maintain. Although this could make sense in 
some specific environments, it seems like this might not be a good fit for 
inclusion as a component for general usage.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2263 Fix debug build [nifi-minifi-cpp]

2023-11-17 Thread via GitHub


fgerlits commented on code in PR #1696:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1696#discussion_r1397396341


##
.github/workflows/ci.yml:
##
@@ -111,7 +111,7 @@ jobs:
   - name: build
 run: |
   for /f "usebackq delims=" %%i in (`vswhere.exe -latest -property 
installationPath`) do if exist "%%i\Common7\Tools\vsdevcmd.bat" call 
"%%i\Common7\Tools\vsdevcmd.bat" -arch=x64 -host_arch=x64
-  win_build_vs.bat ..\b /64 /CI /S /A /PDH /SPLUNK /GCP /ELASTIC /K /L 
/R /Z /N /RO /PR /PYTHON_SCRIPTING /LUA_SCRIPTING /MQTT /SCCACHE /NINJA
+  win_build_vs.bat ..\b /64 /CI /PDH /R /N /RO /SCCACHE /NINJA

Review Comment:
   is this intentional?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (NIFI-8932) Add feature to CSVReader to skip N lines at top of the file

2023-11-17 Thread Philipp Korniets (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787214#comment-17787214
 ] 

Philipp Korniets edited comment on NIFI-8932 at 11/17/23 1:53 PM:
--

Thanks [~mattyb149], it would be nice if this new property of CSVReader allowed 
Expression Language, with the scope being FlowFile attributes. This would allow 
using the same service with multiple parameters. If the new property is not 
provided, default to 0.


was (Author: iiojj2):
Thanks Matt, it would be nice if this new Property of CSVReader will allow 
Expression Language, and scope will be file attribute. This will allow to use 
same service with multiple parameters. IF new property is not provided - 
default to 0

> Add feature to CSVReader to skip N lines at top of the file
> ---
>
> Key: NIFI-8932
> URL: https://issues.apache.org/jira/browse/NIFI-8932
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Philipp Korniets
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have a lot of CSV files where the provider adds a custom header/footer to 
> valid CSV content.
> The CSV header is actually the second row. 
> To remove unnecessary data we can use
>  * ReplaceText 
>  * splitText->RouteOnAttribute -> MergeContent
> It would be great to have an option in the CSVReader controller service to skip 
> N rows from the top/bottom in order to get clean data.
>  * skip N from the top
>  * skip M from the bottom
>  Similar request was developed in FLINK 
> https://issues.apache.org/jira/browse/FLINK-1002
>  
> Data Example:
> {code}
> 7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X),,,
> distribution_id,Distribution 
> Id,settle_date,group_code,company_name,currency_code,common_account_name,business_date,prod_code,security,class,asset_type
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,EUR,TPSL_21025226   ,19-Jul-21,BRM96ST7   ,ABC 
> 14/09/24,NR,BOND  
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,GBP,RPSS_21025226   ,19-Jul-21,,Total @ -0.11,,
> {code}
> |7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X)|  |  |  |  |  |  |  
> |  |  |  |  |  
> |distribution_id|Distribution 
> Id|settle_date|group_code|company_name|currency_code|common_account_name|business_date|prod_code|security|class|asset_type|
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |EUR|TPSL_21025226   |19-Jul-21|BRM96ST7   |ABC 
> 14/09/24|NR|BOND  |
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |GBP|RPSS_21025226   |19-Jul-21| |Total @ -0.11| | |



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-8932) Add feature to CSVReader to skip N lines at top of the file

2023-11-17 Thread Philipp Korniets (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787214#comment-17787214
 ] 

Philipp Korniets commented on NIFI-8932:


Thanks Matt, it would be nice if this new property of CSVReader allowed 
Expression Language, with the scope being FlowFile attributes. This would allow 
using the same service with multiple parameters. If the new property is not 
provided, default to 0.
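
For illustration, such a property could be declared roughly as follows (a hedged sketch only; the property name, description, and default value are assumptions, not the actual NIFI-8932 change):
{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.expression.ExpressionLanguageScope;
import org.apache.nifi.processor.util.StandardValidators;

public class SkipLinesPropertyExample {

    // Hypothetical CSVReader property: number of leading lines to skip, resolvable per FlowFile
    public static final PropertyDescriptor SKIP_TOP_LINES = new PropertyDescriptor.Builder()
            .name("Skip Top Lines")
            .description("Number of lines to skip at the top of the file before the header row")
            .required(false)
            .defaultValue("0")
            .addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
            .build();
}
{code}
With FLOWFILE_ATTRIBUTES scope, the same CSVReader service instance could be reused with different values supplied through attributes.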

> Add feature to CSVReader to skip N lines at top of the file
> ---
>
> Key: NIFI-8932
> URL: https://issues.apache.org/jira/browse/NIFI-8932
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Philipp Korniets
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.latest, 2.latest
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have a lot of CSV files where the provider adds a custom header/footer to 
> valid CSV content.
> The CSV header is actually the second row. 
> To remove unnecessary data we can use
>  * ReplaceText 
>  * splitText->RouteOnAttribute -> MergeContent
> It would be great to have an option in the CSVReader controller service to skip 
> N rows from the top/bottom in order to get clean data.
>  * skip N from the top
>  * skip M from the bottom
>  Similar request was developed in FLINK 
> https://issues.apache.org/jira/browse/FLINK-1002
>  
> Data Example:
> {code}
> 7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X),,,
> distribution_id,Distribution 
> Id,settle_date,group_code,company_name,currency_code,common_account_name,business_date,prod_code,security,class,asset_type
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,EUR,TPSL_21025226   ,19-Jul-21,BRM96ST7   ,ABC 
> 14/09/24,NR,BOND  
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,GBP,RPSS_21025226   ,19-Jul-21,,Total @ -0.11,,
> {code}
> |7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X)|  |  |  |  |  |  |  
> |  |  |  |  |  
> |distribution_id|Distribution 
> Id|settle_date|group_code|company_name|currency_code|common_account_name|business_date|prod_code|security|class|asset_type|
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |EUR|TPSL_21025226   |19-Jul-21|BRM96ST7   |ABC 
> 14/09/24|NR|BOND  |
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |GBP|RPSS_21025226   |19-Jul-21| |Total @ -0.11| | |



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12366 Add HuggingFace support to Pinecone processors [nifi]

2023-11-17 Thread via GitHub


krisztina-zsihovszki commented on PR #8026:
URL: https://github.com/apache/nifi/pull/8026#issuecomment-1816336259

   @markap14 This PR is an addition to the vectorization-related Python 
processors you created. When you have some time, can you review this PR? Thank 
you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-11129 - adding PutMongoBulk processor to use the bulkWrite API [nifi]

2023-11-17 Thread via GitHub


sebastianrothbucher commented on PR #6918:
URL: https://github.com/apache/nifi/pull/6918#issuecomment-1816263017

   all right - addressed both final issues


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] MINIFICPP-2263 Fix debug build [nifi-minifi-cpp]

2023-11-17 Thread via GitHub


lordgamez opened a new pull request, #1696:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1696

   https://issues.apache.org/jira/browse/MINIFICPP-2263
   
   ---
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787116#comment-17787116
 ] 

ASF subversion and git services commented on NIFI-12383:


Commit 4e38e28d050d4ddcf766860ef820cb79aa8cfbc7 in nifi's branch 
refs/heads/support/nifi-1.x from Zoltan Kornel Torok
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4e38e28d05 ]

NIFI-12383 Replication client should handle accept encoding with lowercase

Signed-off-by: Bence Simon 
This closes #8043


> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header 
> comes with all letters lowercase (which is valid, because HTTP headers are case 
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>  and 
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression, 
> but when the other node gets the request, Jetty reads the original 
> "accept-encoding" header and tries to decompress the input stream, which leads 
> to the above error.
> We need to add a few lines of code to the client to read the header 
> case-insensitively.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12383 Replication client should handle accept encoding with lowe… [nifi]

2023-11-17 Thread via GitHub


simonbence commented on PR #8046:
URL: https://github.com/apache/nifi/pull/8046#issuecomment-1816072468

   Handled in 8043


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12383 Replication client should handle accept encoding with lowe… [nifi]

2023-11-17 Thread via GitHub


simonbence closed pull request #8046: NIFI-12383 Replication client should 
handle accept encoding with lowe…
URL: https://github.com/apache/nifi/pull/8046


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (MINIFICPP-2263) Debug build fails after curl upgrade

2023-11-17 Thread Jira
Gábor Gyimesi created MINIFICPP-2263:


 Summary: Debug build fails after curl upgrade
 Key: MINIFICPP-2263
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2263
 Project: Apache NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Gábor Gyimesi
Assignee: Gábor Gyimesi


{code:java}
make[2]: *** No rule to make target 'thirdparty/curl-install/lib/libcurl.a', 
needed by 'bin/libminifi-expression-language-extensions.so'.  Stop. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NIFI-12383 Replication client should handle accept encoding with lowe… [nifi]

2023-11-17 Thread via GitHub


taz1988 opened a new pull request, #8046:
URL: https://github.com/apache/nifi/pull/8046

   …rcase
   
   (cherry picked from commit b9363c06ea57e6102e0083c671429cfb1785b83c)
   
   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   [NIFI-12383](https://issues.apache.org/jira/browse/NIFI-12383)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-12383) 
issue created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Nifi 12383 [nifi]

2023-11-17 Thread via GitHub


taz1988 closed pull request #8045: Nifi 12383
URL: https://github.com/apache/nifi/pull/8045


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Nifi 12383 [nifi]

2023-11-17 Thread via GitHub


taz1988 opened a new pull request, #8045:
URL: https://github.com/apache/nifi/pull/8045

   …rcase
   
   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   [NIFI-12383](https://issues.apache.org/jira/browse/NIFI-12383)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-12383) 
issue created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-12383`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 8
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Kornél Török updated NIFI-12383:
---
Fix Version/s: 2.latest

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" header 
> comes with all letters lowercase (which is valid, because HTTP headers are case 
> insensitive - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>  and 
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
> Because of that, during replication the client does not use gzip compression, 
> but when the other node gets the request, Jetty reads the original 
> "accept-encoding" header and tries to decompress the input stream, which leads 
> to the above error.
> We need to add a few lines of code to the client to read the header 
> case-insensitively.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12383) GZipException occur during cluster replication, when the original request contains "accept-encoding" header with lowercase

2023-11-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787089#comment-17787089
 ] 

Zoltán Kornél Török commented on NIFI-12383:


[~exceptionfactory],
As I see, h2 http/1.1 is the default, so I guess that means yes? (I didn't 
override this value)

> GZipException occur during cluster replication, when the original request 
> contains "accept-encoding" header with lowercase
> --
>
> Key: NIFI-12383
> URL: https://issues.apache.org/jira/browse/NIFI-12383
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I had a three-node cluster with Knox.
> From time to time an error occurred in the NiFi logs on this cluster:
> {code}
> 2023-11-15 13:25:51,637 INFO 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator:
>  Received a status of 500 from xy:8443 for request PUT 
> /nifi-api/process-groups/d2cedf64-018b-1000--164a79fc when performing 
> first stage of two-stage commit. The action will not occur. Node explanation: 
> An unexpected error has occurred. Please check the logs for additional 
> details.
> {code}
> Also sometimes I got the "An unexpected error has occurred. Please check the logs 
> for additional details." error on the UI too. After some investigation I 
> found the error in the logs:
> {code}
> 23-11-15 13:40:25,289 ERROR [NiFi Web Server-78] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.util.zip.ZipException: Not in GZIP format. Returning Internal Server 
> Error response.
> java.util.zip.ZipException: Not in GZIP format
> at 
> java.base/java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:176)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
> at 
> java.base/java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
> at 
> org.glassfish.jersey.message.GZipEncoder.decode(GZipEncoder.java:49)
> at 
> org.glassfish.jersey.spi.ContentEncoder.aroundReadFrom(ContentEncoder.java:100)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
> at 
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
> at 
> org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
> at 
> org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:919)
> at 
> org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
> at 
> org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
> {code}
> After many hours of debugging, I found out that sometimes, when I use the 
> cluster via Knox, for some unknown reason the incoming "Accept-Encoding" 
> header comes with all letters lowercase (which is valid, because HTTP headers 
> are case-insensitive - 
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers). 
> However, OkHttpReplicationClient assumes that the header is always 
> "Accept-Encoding" 
> (https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/okhttp/OkHttpReplicationClient.java#L294
>   and  
> https://github.com/apache/nifi/blob/main/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/protocol/http/HttpHeaders.java#L25).
>  Because of that, during replication the client does not use gzip 
> compression, but when the other node gets the request, Jetty reads the 
> original "accept-encoding" header and tries to decompress the input stream, 
> which leads to the error above.
> We need to add a few lines of code to the client so that it reads the header 
> case-insensitively (a minimal sketch of the idea follows below).
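
A minimal sketch of the case-insensitive lookup described above (illustrative 
only, not the actual NiFi patch; the class and method names are hypothetical):
{code}
import java.util.Map;
import java.util.Objects;

public class CaseInsensitiveHeaderLookup {

    private static final String ACCEPT_ENCODING = "Accept-Encoding";
    private static final String GZIP = "gzip";

    // Matches the "Accept-Encoding" header regardless of key case and reports
    // whether its value advertises gzip support.
    public static boolean acceptsGzip(final Map<String, String> headers) {
        return headers.entrySet().stream()
                .filter(entry -> ACCEPT_ENCODING.equalsIgnoreCase(entry.getKey()))
                .map(Map.Entry::getValue)
                .filter(Objects::nonNull)
                .anyMatch(value -> value.toLowerCase().contains(GZIP));
    }
}
{code}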



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12372 MiNiFi C2 Encrypt Flow Configuration Properties when Transferring [nifi]

2023-11-17 Thread via GitHub


briansolo1985 commented on PR #8028:
URL: https://github.com/apache/nifi/pull/8028#issuecomment-1815926804

   Thanks for all the reviews. I addressed the comments and pushed the most 
recent changes. Please re-review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] NIFI-12372 MiNiFi C2 Encrypt Flow Configuration Properties when Transferring [nifi]

2023-11-17 Thread via GitHub


briansolo1985 commented on code in PR #8028:
URL: https://github.com/apache/nifi/pull/8028#discussion_r1396853662


##
minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/test/java/org/apache/nifi/minifi/c2/command/FlowPropertyEncryptorTest.java:
##
@@ -0,0 +1,338 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.minifi.c2.command;
+
+import static java.util.Map.entry;
+import static java.util.UUID.randomUUID;
+import static java.util.stream.Collectors.toMap;
+import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
+import static org.apache.nifi.controller.serialization.FlowSerializer.ENC_PREFIX;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.stream.Stream;
+import org.apache.nifi.c2.protocol.component.api.Bundle;
+import org.apache.nifi.c2.protocol.component.api.ComponentManifest;
+import org.apache.nifi.c2.protocol.component.api.ControllerServiceDefinition;
+import org.apache.nifi.c2.protocol.component.api.ProcessorDefinition;
+import org.apache.nifi.c2.protocol.component.api.PropertyDescriptor;
+import org.apache.nifi.c2.protocol.component.api.RuntimeManifest;
+import org.apache.nifi.controller.flow.VersionedDataflow;
+import org.apache.nifi.encrypt.PropertyEncryptor;
+import org.apache.nifi.flow.VersionedConfigurableExtension;
+import org.apache.nifi.flow.VersionedControllerService;
+import org.apache.nifi.flow.VersionedParameter;
+import org.apache.nifi.flow.VersionedParameterContext;
+import org.apache.nifi.flow.VersionedProcessGroup;
+import org.apache.nifi.flow.VersionedProcessor;
+import org.apache.nifi.flow.VersionedPropertyDescriptor;
+import org.apache.nifi.minifi.commons.service.FlowSerDeService;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.Mock;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+@ExtendWith(MockitoExtension.class)
+public class FlowPropertyEncryptorTest {
+
+private static final String PROCESSOR_TYPE_1 = "processor_type_1";
+private static final String PROCESSOR_TYPE_2 = "processor_type_2";
+private static final String PROCESSOR_TYPE_3 = "processor_type_3";
+private static final String CONTROLLER_SERVICE_TYPE_1 = "controller_service_type_1";
+private static final String CONTROLLER_SERVICE_TYPE_2 = "controller_service_type_2";
+private static final String CONTROLLER_SERVICE_TYPE_3 = "controller_service_type_3";
+
+private static final String SENSITIVE_PROPERTY_NAME_PREFIX = "sensitive";
+
+private static final String NON_SENSITIVE_1 = "non-sensitive-1";
+private static final String SENSITIVE_1 = SENSITIVE_PROPERTY_NAME_PREFIX + "-1";
+private static final String NON_SENSITIVE_2 = "non-sensitive-2";
+private static final String SENSITIVE_3 = SENSITIVE_PROPERTY_NAME_PREFIX + "-3";
+
+private static final Map<String, String> PARAMETERS1 = Map.of(
+NON_SENSITIVE_1, NON_SENSITIVE_1,
+SENSITIVE_1, SENSITIVE_1
+);
+private static final Map<String, String> PARAMETERS2 = Map.of(
+NON_SENSITIVE_2, NON_SENSITIVE_2
+);
+
+private static final Map<String, String> PARAMETERS3 = Map.of(
+SENSITIVE_3, SENSITIVE_3
+);
+private static final Map<String, VersionedPropertyDescriptor> DESCRIPTORS1 = Map.of(
+NON_SENSITIVE_1, versionedPropertyDescriptor(NON_SENSITIVE_1, false),
+SENSITIVE_1, versionedPropertyDescriptor(SENSITIVE_1, true)
+);
+private static final Map<String, VersionedPropertyDescriptor> DESCRIPTORS2 = Map.of(
+NON_SENSITIVE_2, versionedPropertyDescriptor(NON_SENSITIVE_2, false)
+);
+private static final Map<String, VersionedPropertyDescriptor> DESCRIPTORS3 = Map.of(
+SENSITIVE_3, versionedPropertyDescriptor(SENSITIVE_3, true)
+);
+
+
+@Mock
+private PropertyEncryptor mockPropertyEncryptor;
+@Mock
+  

Re: [PR] NIFI-12372 MiNiFi C2 Encrypt Flow Configuration Properties when Transferring [nifi]

2023-11-17 Thread via GitHub


briansolo1985 commented on code in PR #8028:
URL: https://github.com/apache/nifi/pull/8028#discussion_r1396847599


##
minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/main/java/org/apache/nifi/minifi/c2/command/DefaultUpdateConfigurationStrategy.java:
##
@@ -90,11 +93,12 @@ public boolean update(byte[] rawFlow) {
 }
 
 try {
-byte[] enrichedFlowCandidate = flowEnrichService.enrichFlow(rawFlow);
+byte[] propertyEncryptedRawDataFlow = flowPropertyEncryptor.encryptSensitiveProperties(rawFlow);

Review Comment:
   We only need to encrypt the raw flow.json. Sensitive properties in the 
enriched flow are automatically encrypted when loaded by flowService.
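
   A rough sketch of the ordering this comment describes, using simplified 
stand-in interfaces for the classes referenced in the diff above (the class 
and method names here are hypothetical, not the real MiNiFi types):
```java
// Illustrative ordering only: encrypt sensitive values in the raw flow first,
// then enrich the already-encrypted bytes.
public final class FlowUpdateOrderingSketch {

    interface FlowPropertyEncryptor {
        byte[] encryptSensitiveProperties(byte[] rawFlow);
    }

    interface FlowEnrichService {
        byte[] enrichFlow(byte[] flow);
    }

    private final FlowPropertyEncryptor flowPropertyEncryptor;
    private final FlowEnrichService flowEnrichService;

    FlowUpdateOrderingSketch(FlowPropertyEncryptor encryptor, FlowEnrichService enrichService) {
        this.flowPropertyEncryptor = encryptor;
        this.flowEnrichService = enrichService;
    }

    byte[] prepareFlowCandidate(byte[] rawFlow) {
        // 1. Encrypt sensitive properties in the raw flow.json only.
        byte[] encryptedRawFlow = flowPropertyEncryptor.encryptSensitiveProperties(rawFlow);
        // 2. Enrich the encrypted flow; sensitive properties of the enriched
        //    flow are handled later, when the flow service loads it.
        return flowEnrichService.enrichFlow(encryptedRawFlow);
    }
}
```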



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (MINIFICPP-895) Modbus Integration

2023-11-17 Thread Christofer Dutz (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787077#comment-17787077
 ] 

Christofer Dutz commented on MINIFICPP-895:
---

Well theoretically the nifty driver should be operational. 

However nobody's really actively working on it at the moment. The updates you 
see are usually code-generation updates. 

> Modbus Integration
> --
>
> Key: MINIFICPP-895
> URL: https://issues.apache.org/jira/browse/MINIFICPP-895
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Reporter: Jeremy Dyer
>Priority: Minor
>
> Modbus is a de facto industry standard for communicating with electronic 
> devices. Our primary goal here would be to provide a Modbus implementation 
> that allows users to connect to those devices and both send information to 
> and receive information from them (see the protocol sketch below).
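
For context, a minimal sketch of the kind of exchange such an implementation 
performs: building a Modbus TCP "read holding registers" (function 0x03) 
request over a plain socket. The host, port, unit id, and register range are 
placeholders, and Modbus exception responses are only handled crudely.
{code}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class ModbusReadSketch {

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.0.2.10", 502);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {

            // MBAP header: transaction id, protocol id (0 for Modbus TCP),
            // number of bytes that follow, unit id
            out.writeShort(1);
            out.writeShort(0);
            out.writeShort(6);      // unit id (1 byte) + PDU (5 bytes)
            out.writeByte(1);
            // PDU: function 0x03, starting register address, register count
            out.writeByte(0x03);
            out.writeShort(0);
            out.writeShort(2);
            out.flush();

            // Response: 7-byte MBAP header, function code, byte count, values
            in.skipBytes(7);
            int function = in.readUnsignedByte();
            if (function != 0x03) {
                throw new IllegalStateException("Modbus exception response: " + function);
            }
            int byteCount = in.readUnsignedByte();
            for (int i = 0; i < byteCount / 2; i++) {
                System.out.println("register " + i + " = " + in.readUnsignedShort());
            }
        }
    }
}
{code}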



--
This message was sent by Atlassian Jira
(v8.20.10#820010)