[jira] [Updated] (NIFI-13533) All ParameterProviders in 2.0.0-M4 fail to fetch parameters with a period in the parameter name

2024-09-30 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13533:
--
Description: 
All Parameter Providers fail to fetch any parameters which have a period (aka 
dot) in the parameter name.

Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
2.0.0 M3, but do NOT work in 2.0.0 M4.
This is a serious blocker for anyone using Parameter Providers who have periods 
in their parameter names.

KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch it 
with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error message: 
Parameter sensitivity must be specified for parameter 

Dismissing the error does not help; the error occurs again.

 

If we set the parameter name to fooBar, foo_bar, etc., the fetch works.

In M3, dots worked with FileParameterProvider.

 

*Update:* 

*Even if I activate the sensitivity for a parameter, the transmitted JSON 
object "parameterSensitivities" in the request always has null values...*

 

  was:
All Parameter Providers fail to fetch any parameters which have a period (aka 
dot) in the parameter name.

Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
2.0.0 M3, but do NOT work in the M4.
This is a serious blocker for anyone using Parameter Providers who have periods 
in their parameter names.

KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch it 
with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error message: 
Parameter sensitivity must be specified for parameter 

Dismissing the error does not help; the error occurs again.

 

If we set the parameter name to fooBar, foo_bar, etc., the fetch works.

In M3, dots worked with FileParameterProvider.

 

*Update:* 

*Even if I activate the sensitivity for a parameter, the transmitted JSON 
object "parameterSensitivities" in the request always has null values...*

 


> All ParameterProviders in 2.0.0-M4 fail to fetch parameters with a period in 
> the parameter name
> ---
>
> Key: NIFI-13533
> URL: https://issues.apache.org/jira/browse/NIFI-13533
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Felix Schultze
>Priority: Blocker
> Attachments: Fetch-Parameter-error.jpg
>
>
> All Parameter Providers fail to fetch any parameters which have a period (aka 
> dot) in the parameter name.
> Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
> 2.0.0 M3, but do NOT work in 2.0.0 M4.
> This is a serious blocker for anyone using Parameter Providers who have 
> periods in their parameter names.
> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch 
> it with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error 
> message: Parameter sensitivity must be specified for parameter 
> Dismissing the error does not help; the error occurs again.
>  
> If we set the parameter name to fooBar, foo_bar, etc., the fetch works.
> In M3, dots worked with FileParameterProvider.
>  
> *Update:* 
> *Even if I activate the sensitivity for a parameter, the transmitted JSON 
> object "parameterSensitivities" in the request always has null values...*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13533) All ParameterProviders in 2.0.0-M4 fail to fetch parameters with a period in the parameter name

2024-09-30 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13533:
--
Summary: All ParameterProviders in 2.0.0-M4 fail to fetch parameters with a 
period in the parameter name  (was: KubernetesSecretParameterProvider 2.0.0-M4 
does not allow dots in Keys)

> All ParameterProviders in 2.0.0-M4 fail to fetch parameters with a period in 
> the parameter name
> ---
>
> Key: NIFI-13533
> URL: https://issues.apache.org/jira/browse/NIFI-13533
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Felix Schultze
>Priority: Blocker
> Attachments: Fetch-Parameter-error.jpg
>
>
> All Parameter Providers fail to fetch any parameters which have a period (aka 
> dot) in the parameter name.
> Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
> 2.0.0 M3, but do NOT work in the M4.
> This is a serious blocker for anyone using Parameter Providers who have 
> periods in their parameter names.
> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch 
> it with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error 
> message: Parameter sensitivity must be specified for parameter 
> Dismissing the error does not help; the error occurs again.
>  
> If we set the parameter name to fooBar, foo_bar, etc., the fetch works.
> In M3, dots worked with FileParameterProvider.
>  
> *Update:* 
> *Even if I activate the sensitivity for a parameter, the transmitted JSON 
> object "parameterSensitivities" in the request always has null values...*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13533) KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

2024-09-30 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13533:
--
Description: 
All Parameter Providers fail to fetch any parameters which have a period (aka 
dot) in the parameter name.

Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
2.0.0 M3, but do NOT work in the M4.
This is a serious blocker for anyone using Parameter Providers who have periods 
in their parameter names.

KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch it 
with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error message: 
Parameter sensitivity must be specified for parameter 

Dismissing the error does not help; the error occurs again.

 

If we set the parameter name to fooBar, foo_bar, etc., the fetch works.

In M3, dots worked with FileParameterProvider.

 

*Update:* 

*Even if I activate the sensitivity for a parameter, the transmitted JSON 
object "parameterSensitivities" in the request always has null values...*

 

  was:
If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch it 
with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error message: 
Parameter sensitivity must be specified for parameter 

Dismissing the error does not help; the error occurs again.

 

If we set the parameter name to fooBar, foo_bar, etc., the fetch works.

In M3, dots worked with FileParameterProvider.

 

*Update:* 

*Even if I activate the sensitivity for a parameter, the transmitted JSON 
object "parameterSensitivities" in the request always has null values...*

 


> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> --
>
> Key: NIFI-13533
> URL: https://issues.apache.org/jira/browse/NIFI-13533
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Felix Schultze
>Priority: Blocker
> Attachments: Fetch-Parameter-error.jpg
>
>
> All Parameter Providers fail to fetch any parameters which have a period (aka 
> dot) in the parameter name.
> Parameter names containing periods work in NiFi 1.x and also worked in NiFi 
> 2.0.0 M3, but do NOT work in the M4.
> This is a serious blocker for anyone using Parameter Providers who have 
> periods in their parameter names.
> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch 
> it with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error 
> message: Parameter sensitivity must be specified for parameter 
> Dismissing the error does not help; the error occurs again.
>  
> If we set the parameter name to fooBar, foo_bar, etc., the fetch works.
> In M3, dots worked with FileParameterProvider.
>  
> *Update:* 
> *Even if I activate the sensitivity for a parameter, the transmitted JSON 
> object "parameterSensitivities" in the request always has null values...*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13533) KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

2024-09-30 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13533:
-

Assignee: (was: Jim Steinebrey)

> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> --
>
> Key: NIFI-13533
> URL: https://issues.apache.org/jira/browse/NIFI-13533
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Felix Schultze
>Priority: Blocker
> Attachments: Fetch-Parameter-error.jpg
>
>
> If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch 
> it with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error 
> message: Parameter sensitivity must be specified for parameter 
> Dismissing the error does not help; the error occurs again.
>  
> If we set the parameter name to fooBar, foo_bar, etc., the fetch works.
> In M3, dots worked with FileParameterProvider.
>  
> *Update:* 
> *Even if I activate the sensitivity for a parameter, the transmitted JSON 
> object "parameterSensitivities" in the request always has null values...*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13533) KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys

2024-09-30 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13533:
-

Assignee: Jim Steinebrey

> KubernetesSecretParameterProvider 2.0.0-M4 does not allow dots in Keys
> --
>
> Key: NIFI-13533
> URL: https://issues.apache.org/jira/browse/NIFI-13533
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
>Reporter: Felix Schultze
>Assignee: Jim Steinebrey
>Priority: Blocker
> Attachments: Fetch-Parameter-error.jpg
>
>
> If we create a new parameter (named e.g. "foo.bar") on disk and try to fetch 
> it with KubernetesSecretParameterProvider 2.0.0-M4, we always get the error 
> message: Parameter sensitivity must be specified for parameter 
> Dismissing the error does not help; the error occurs again.
>  
> If we set the parameter name to fooBar, foo_bar, etc., the fetch works.
> In M3, dots worked with FileParameterProvider.
>  
> *Update:* 
> *Even if I activate the sensitivity for a parameter, the transmitted JSON 
> object "parameterSensitivities" in the request always has null values...*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13755) Controller Services sometimes not restarted when NiFi is restarted

2024-09-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13755:
--
Description: 
Before NiFi was stopped, there were around 1000 controller services (cs) 
enabled and 500 disabled. After NiFi restarted, none of the controller services 
were enabled. This is very inconvenient because now someone has to manually 
restart just the 1000 that should be enabled. The normal NiFi restart behavior 
is to enable the 1000 cs and leave the 500 disabled to restore them to their 
state before the restart.

I found the cause in 
StandardControllerServiceProvider.enableControllerServices(): if a single enabled 
cs depends on another cs which is disabled, then NONE of the 1000 cs are enabled.

My change is to SKIP enabling only those cs that depend on a disabled cs ancestor 
(cs can be linked multiple levels deep) and to proceed with enabling every cs that 
does not depend on a disabled ancestor, as sketched below.
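
A minimal sketch of the intended skip-instead-of-abort behavior (hypothetical types and method names, not the actual StandardControllerServiceProvider code; assume isDisabled() reflects the state each service should return to after the restart):

{code:java}
// Hypothetical sketch: rather than aborting the whole batch when one service
// depends on a disabled service, skip only the services whose dependency chain
// contains a disabled service and enable the rest.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

final class EnableControllerServicesSketch {
    interface Service {
        boolean isDisabled();                 // should this service stay disabled?
        List<Service> getRequiredServices();  // direct dependencies; chains may be several levels deep
    }

    static List<Service> selectEnableable(final Collection<Service> candidates) {
        final List<Service> enableable = new ArrayList<>();
        for (final Service service : candidates) {
            if (hasDisabledAncestor(service)) {
                continue; // skip: cannot be enabled while a dependency stays disabled
            }
            enableable.add(service);
        }
        return enableable;
    }

    private static boolean hasDisabledAncestor(final Service service) {
        for (final Service dependency : service.getRequiredServices()) {
            if (dependency.isDisabled() || hasDisabledAncestor(dependency)) {
                return true;
            }
        }
        return false;
    }
}
{code}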

  was:
Before Nifi was stopped, there were around 1000 controller services (cs) 
enabled and 500 disabled. After NiFi restarted and none of the controller 
services were enabled. This is very inconvenient because now someone has to 
manually restart just the 1000 that should be enabled. The normal NiFi restart 
behavior is to enable the 1000 cs and leave the 500 disabled to restore them to 
their state before the restart.

I found the cause in 
StandardControllerServiceProvider.enableControllerServices()
If a single enabled cs depends on another cs which is disabled, then NONE of 
the 1000 cs are enabled.

My change is to only SKIP the enabling of any cs which is dependent on any 
disabled cs ancestor (cs can be linked multiple levels deep) and proceed to 
enable any cs which is not dependent on a disabled cs ancestor.


> Controller Services sometimes not restarted when NiFi is restarted
> --
>
> Key: NIFI-13755
> URL: https://issues.apache.org/jira/browse/NIFI-13755
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> Before NiFi was stopped, there were around 1000 controller services (cs) 
> enabled and 500 disabled. After NiFi restarted, none of the controller 
> services were enabled. This is very inconvenient because now someone has to 
> manually restart just the 1000 that should be enabled. The normal NiFi 
> restart behavior is to enable the 1000 cs and leave the 500 disabled to 
> restore them to their state before the restart.
> I found the cause in 
> StandardControllerServiceProvider.enableControllerServices()
> If a single enabled cs depends on another cs which is disabled, then NONE of 
> the 1000 cs are enabled.
> My change is to only SKIP the enabling of any cs which is dependent on any 
> disabled cs ancestor (cs can be linked multiple levels deep) and proceed to 
> enable any cs which is not dependent on a disabled cs ancestor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13755) Controller Services sometimes not restarted when NiFi is restarted

2024-09-17 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13755:
-

 Summary: Controller Services sometimes not restarted when NiFi is 
restarted
 Key: NIFI-13755
 URL: https://issues.apache.org/jira/browse/NIFI-13755
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 2.0.0-M4, 1.27.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


Before NiFi was stopped, there were around 1000 controller services (cs) 
enabled and 500 disabled. After NiFi restarted, none of the controller 
services were enabled. This is very inconvenient because now someone has to 
manually restart just the 1000 that should be enabled. The normal NiFi restart 
behavior is to enable the 1000 cs and leave the 500 disabled to restore them to 
their state before the restart.

I found the cause in 
StandardControllerServiceProvider.enableControllerServices()
If a single enabled cs depends on another cs which is disabled, then NONE of 
the 1000 cs are enabled.

My change is to only SKIP the enabling of any cs which is dependent on any 
disabled cs ancestor (cs can be linked multiple levels deep) and proceed to 
enable any cs which is not dependent on a disabled cs ancestor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13754) Cannot fetch parameters from Database Parameter Provider

2024-09-17 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882429#comment-17882429
 ] 

Jim Steinebrey commented on NIFI-13754:
---

The periods in the parameter names are causing this issue. The period issue 
was reported in NIFI-13533.

> Cannot fetch parameters from Database Parameter Provider
> 
>
> Key: NIFI-13754
> URL: https://issues.apache.org/jira/browse/NIFI-13754
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M4
> Environment: DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=22.04
> DISTRIB_CODENAME=jammy
> DISTRIB_DESCRIPTION="Ubuntu 22.04.5 LTS"
>Reporter: Marcin Hładki
>Priority: Major
> Attachments: image-2024-09-17-12-12-37-302.png
>
>
> The DatabaseParameterProvider 2.0.0-M4 can be created and configured, but 
> during parameter fetching I got the error "Parameter sensitivity must be 
> specified for parameter..."
> It happens regardless of which checkbox I've selected in the "Select Parameters 
> To Be Set As Sensitive" section. Screen:
> !image-2024-09-17-12-12-37-302.png!
> I've also noticed that the event is logged only in nifi-request.log, the log 
> info looks like this:
> [17/Sep/2024:10:10:05 +] "GET /nifi-api/flow/current-user HTTP/2.0" 200 
> 451 "https://vmfeelingblue2-prod.http.host:9443/nifi/"; "Mozilla/5
> .0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) 
> Chrome/128.0.0.0 Safari/537.36"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-7202) TestListFile.testFilterAge appears to be unstable

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-7202.
--
Resolution: Duplicate

> TestListFile.testFilterAge appears to be unstable
> -
>
> Key: NIFI-7202
> URL: https://issues.apache.org/jira/browse/NIFI-7202
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joe Witt
>Assignee: Jim Steinebrey
>Priority: Major
>
> [ERROR]   Run 2: TestListFile.testFilterAge State dump:
> 
>   timestamp date from timestamp t0 delta
> ---   - --- 
> started at  = 1582663143668 2020-02-25T20:39:03.6680
> current time= 1582663151138 2020-02-25T20:39:11.1380
>  processor state ---
> processed.timestamp = 1582663146000 2020-02-25T20:39:06.000-5138
>   listing.timestamp = 1582663146000 2020-02-25T20:39:06.000-5138
>  input folder contents -
>age1.txt = 1582663146000 2020-02-25T20:39:06.000-5138
>age2.txt = 1582663136000 2020-02-25T20:38:56.000   -15138
>age3.txt = 1582663126000 2020-02-25T20:38:46.000   -25138
>  output flowfiles --
>age1.txt = 1582663146000 2020-02-25T20:39:06.000-5138
> REL_SUCCESS count = 1
> 
> [ERROR] Tests run: 1496, Failures: 1, Errors: 0, Skipped: 23
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-standard-processors: There are test failures.
> https://github.com/joewitt/nifi/runs/468101519?check_suite_focus=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-7202) TestListFile.testFilterAge appears to be unstable

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-7202:


Assignee: Jim Steinebrey

> TestListFile.testFilterAge appears to be unstable
> -
>
> Key: NIFI-7202
> URL: https://issues.apache.org/jira/browse/NIFI-7202
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joe Witt
>Assignee: Jim Steinebrey
>Priority: Major
>
> [ERROR]   Run 2: TestListFile.testFilterAge State dump:
> 
>   timestamp date from timestamp t0 delta
> ---   - --- 
> started at  = 1582663143668 2020-02-25T20:39:03.6680
> current time= 1582663151138 2020-02-25T20:39:11.1380
>  processor state ---
> processed.timestamp = 1582663146000 2020-02-25T20:39:06.000-5138
>   listing.timestamp = 1582663146000 2020-02-25T20:39:06.000-5138
>  input folder contents -
>age1.txt = 1582663146000 2020-02-25T20:39:06.000-5138
>age2.txt = 1582663136000 2020-02-25T20:38:56.000   -15138
>age3.txt = 1582663126000 2020-02-25T20:38:46.000   -25138
>  output flowfiles --
>age1.txt = 1582663146000 2020-02-25T20:39:06.000-5138
> REL_SUCCESS count = 1
> 
> [ERROR] Tests run: 1496, Failures: 1, Errors: 0, Skipped: 23
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-standard-processors: There are test failures.
> https://github.com/joewitt/nifi/runs/468101519?check_suite_focus=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13686:
--
Issue Type: Bug  (was: Task)

> Intermittent Unit Test Failure in TestListFile.testFilterAge()
> --
>
> Key: NIFI-13686
> URL: https://issues.apache.org/jira/browse/NIFI-13686
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> This unit test only works when execution speed is very fast. If there is more 
> than a small fraction of a second delay while the unit test is running, then 
> it intermittently gives a false assertion failure even though the 
> ListFile.java class is unchanged and correct. I will make the unit test more 
> robust by making larger time differences between the test objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13686:
--
Priority: Major  (was: Minor)

> Intermittent Unit Test Failure in TestListFile.testFilterAge()
> --
>
> Key: NIFI-13686
> URL: https://issues.apache.org/jira/browse/NIFI-13686
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> This unit test only works when execution speed is very fast. If there is more 
> than a small fraction of a second delay while the unit test is running, then 
> it intermittently gives a false assertion failure even though the 
> ListFile.java class is unchanged and correct. I will make the unit test more 
> robust by making larger time differences between the test objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17877175#comment-17877175
 ] 

Jim Steinebrey commented on NIFI-13686:
---

[~joewitt] Yes, that is the same error I am fixing. I will continue on this Jira 
because I am already fixing the code under this Jira number, and I will mark them 
both resolved when it is all finished and merged. Ok?

> Intermittent Unit Test Failure in TestListFile.testFilterAge()
> --
>
> Key: NIFI-13686
> URL: https://issues.apache.org/jira/browse/NIFI-13686
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> This unit test only works when execution speed is very fast. If there is more 
> than a small fraction of a second delay while the unit test is running, then 
> it intermittently gives a false assertion failure even though the 
> ListFile.java class is unchanged and correct. I will make the unit test more 
> robust by making larger time differences between the test objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13686:
--
Priority: Minor  (was: Major)

> Intermittent Unit Test Failure in TestListFile.testFilterAge()
> --
>
> Key: NIFI-13686
> URL: https://issues.apache.org/jira/browse/NIFI-13686
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> This unit test only works when execution speed is very fast. If there is more 
> than a small fraction of a second delay while the unit test is running, then 
> it intermittently gives a false assertion failure even though the 
> ListFile.java class is unchanged and correct. I will make the unit test more 
> robust by making larger time differences between the test objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13686:
--
Description: This unit test only works when execution speed is very fast. 
If there is more than a small fraction of a second delay while the unit test is 
running, then it intermittently gives a false assertion failure even though the 
ListFile.java class is unchanged and correct. I will make the unit test more 
robust by making larger time differences between the test objects.
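
A sketch of the idea with hypothetical values (not the actual test change): spacing the fixture files' timestamps far enough apart that a pause of a few seconds during the run can no longer move a file across the age-filter boundary.

{code:java}
// Hypothetical sketch: widen the gaps between the test files' last-modified times
// so a slow test runner cannot change which files fall inside the min/max age window.
import java.io.File;
import java.util.concurrent.TimeUnit;

final class AgeFixtureSketch {
    static void stampAges(final File youngFile, final File middleFile, final File oldFile) {
        final long now = System.currentTimeMillis();
        // Gaps of minutes instead of seconds leave plenty of margin for scheduling delays.
        youngFile.setLastModified(now - TimeUnit.MINUTES.toMillis(2));
        middleFile.setLastModified(now - TimeUnit.MINUTES.toMillis(10));
        oldFile.setLastModified(now - TimeUnit.MINUTES.toMillis(30));
    }
}
{code}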

> Intermittent Unit Test Failure in TestListFile.testFilterAge()
> --
>
> Key: NIFI-13686
> URL: https://issues.apache.org/jira/browse/NIFI-13686
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.27.0, 2.0.0-M4
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> This unit test only works when execution speed is very fast. If there is more 
> than a small fraction of a second delay while the unit test is running, then 
> it intermittently gives a false assertion failure even though the 
> ListFile.java class is unchanged and correct. I will make the unit test more 
> robust by making larger time differences between the test objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13686) Intermittent Unit Test Failure in TestListFile.testFilterAge()

2024-08-27 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13686:
-

 Summary: Intermittent Unit Test Failure in 
TestListFile.testFilterAge()
 Key: NIFI-13686
 URL: https://issues.apache.org/jira/browse/NIFI-13686
 Project: Apache NiFi
  Issue Type: Task
Affects Versions: 2.0.0-M4, 1.27.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13543) HTTPRecordSink

2024-08-19 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13543:
--
Description: 
The idea is to create an HTTPRecordSink that would make a POST request to a 
given HTTP endpoint for every record. The controller service would take in
 * URL endpoint
 * StandardWebClientServiceProvider
 * OAuth2 Access Token Provider
 * Record Writer
 * Max Batch Size

The data that would be sent to the endpoint as the payload of the POST HTTP 
request would be the output of the writer. This is to allow potential format 
transformations that would be required to match the specifications of the 
target endpoint.

It should also be possible to add dynamic properties that would be added as HTTP 
attributes, including sensitive ones, similar to InvokeHTTP.

The potential use cases are:
 * Capturing bulletins via the QueryNiFiReportingTask and automatically filing 
Jira(s) for each bulletin (or similar systems PagerDuty, ServiceNow, etc)
 * Sending data with the PutRecord processor, removing the need to split a flow 
file into many single-record flow files
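
A simplified sketch of the core behavior, using the JDK HttpClient rather than NiFi's StandardWebClientServiceProvider, with hypothetical names (assumptions: the record writer's serialized output for a batch is already available as bytes, and dynamic properties are sent as request headers):

{code:java}
// Hypothetical sketch: serialize a batch of records with the configured writer
// (outside this snippet) and POST the writer's output to the endpoint, optionally
// adding a bearer token and dynamic properties as request headers.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

final class HttpRecordSinkSketch {
    private final HttpClient client = HttpClient.newHttpClient();

    void postBatch(final String endpointUrl, final byte[] serializedRecords,
                   final String bearerToken, final Map<String, String> extraHeaders)
            throws Exception {
        final HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(endpointUrl))
                .POST(HttpRequest.BodyPublishers.ofByteArray(serializedRecords));
        if (bearerToken != null) {
            builder.header("Authorization", "Bearer " + bearerToken);
        }
        extraHeaders.forEach(builder::header); // dynamic properties as headers
        final HttpResponse<String> response =
                client.send(builder.build(), HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("POST failed with status " + response.statusCode());
        }
    }
}
{code}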

  was:
The idea is to create an HTTPRecordSink that would make a POST request to a 
given HTTP endpoint for every record. The controller service would take in
 * URL endpoint
 * StandardWebClientServiceProvider
 * OAuth2 Access Token Provider
 * Record Writer

The data that would be sent to the endpoint as the payload of the POST HTTP 
request would be the output of the writer. This is to allow potential format 
transformations that would be required to match the specifications of the 
target endpoint.

It should also be possible to add dynamic properties that would be added as 
HTTP attributes including sensitive ones - similar to InvokeHTTP

The potential use cases are:
 * Capturing bulletins via the QueryNiFiReportingTask and automatically filing 
Jira(s) for each bulletin (or similar systems PagerDuty, ServiceNow, etc)
 * Sending data with PutRecord processor and remove the need for splitting a 
flow file into many flow files made of a single record


> HTTPRecordSink
> --
>
> Key: NIFI-13543
> URL: https://issues.apache.org/jira/browse/NIFI-13543
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The idea is to create an HTTPRecordSink that would make a POST request to a 
> given HTTP endpoint for every record. The controller service would take in
>  * URL endpoint
>  * StandardWebClientServiceProvider
>  * OAuth2 Access Token Provider
>  * Record Writer
>  * Max Batch Size
> The data that would be sent to the endpoint as the payload of the POST HTTP 
> request would be the output of the writer. This is to allow potential format 
> transformations that would be required to match the specifications of the 
> target endpoint.
> It should also be possible to add dynamic properties that would be added as 
> HTTP attributes including sensitive ones - similar to InvokeHTTP
> The potential use cases are:
>  * Capturing bulletins via the QueryNiFiReportingTask and automatically 
> filing Jira(s) for each bulletin (or similar systems PagerDuty, ServiceNow, 
> etc)
>  * Sending data with PutRecord processor and remove the need for splitting a 
> flow file into many flow files made of a single record



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13543) HTTPRecordSink

2024-08-19 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13543:
-

Assignee: Jim Steinebrey

> HTTPRecordSink
> --
>
> Key: NIFI-13543
> URL: https://issues.apache.org/jira/browse/NIFI-13543
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The idea is to create an HTTPRecordSink that would make a POST request to a 
> given HTTP endpoint for every record. The controller service would take in
>  * URL endpoint
>  * StandardWebClientServiceProvider
>  * OAuth2 Access Token Provider
>  * Record Writer
> The data that would be sent to the endpoint as the payload of the POST HTTP 
> request would be the output of the writer. This is to allow potential format 
> transformations that would be required to match the specifications of the 
> target endpoint.
> It should also be possible to add dynamic properties that would be added as 
> HTTP attributes including sensitive ones - similar to InvokeHTTP
> The potential use cases are:
>  * Capturing bulletins via the QueryNiFiReportingTask and automatically 
> filing Jira(s) for each bulletin (or similar systems PagerDuty, ServiceNow, 
> etc)
>  * Sending data with PutRecord processor and remove the need for splitting a 
> flow file into many flow files made of a single record



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13655) Upgrade 1.x Shared Dependencies including JacksonXML and others

2024-08-14 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13655:
--
Description: 
google libraries-bom 26.40.0 26.43.0
com.amazonaws 1.12.744 1.12.762
software.amazon.awssdk 2.25.70 2.26.21
com.box 4.8.0 4.11.1
org.bitbucket.b_c/jose4j  0.9.2 0.9.4
com.exceptionfactory.jagged 0.3.2 0.4.0
com.fasterxml.jackson.core 2.17.1 2.17.2
com.fasterxml.jackson.dataformat 2.17.1 2.17.2
com.fasterxml.jackson.datatype 2.17.1 2.17.2
com.fasterxml.jackson.jakarta.rs 2.17.1 2.17.2
com.fasterxml.jackson.jaxrs 2.17.1 2.17.2
com.fasterxml.jackson.jr 2.17.1 2.17.2
com.fasterxml.jackson.module 2.17.1 2.17.2
com.github.pjfanning 4.3.1 4.4.0
com.networknt/json-schema-validator 1.3.2 1.5.0
commons-codec 1.17.0 1.17.1
io.netty 4.1.111.Final 4.1.112.Final
net.sf.saxon Saxon-HE 12.3 12.5
org.apache.commons commons-lang3 3.14.0 3.15.0
org.jetbrains.kotlin 1.9.24 1.9.25
org.jsoup 1.17.2 1.18.1
org.testcontainers 1.19.8 1.20.0
google-api-services-drive v3-rev20240520-2.0.0 v3-rev20240628-2.0.0
com.slack.api bolt-socket-mode 1.37.0 1.40.3
wire-schema-jvm 4.9.3 5.0.0
io.projectreactor reactor-test 3.5.14 3.6.8
org.jline 3.25.1 3.26.3
org.mariadb.jdbc 3.3.0 3.4.1

com.amazonaws 1.12.762 1.12.767
software.amazon.awssdk 2.26.21 2.27.1 (or 2.26.31)
com.github.luben zstd-jni 1.5.6-3 1.5.6-4
azure-sdk-bom 1.2.25 1.2.26
com.networknt json-schema-validator 1.5.0 1.5.1
org.apache.commons commons-compress 1.26.2 1.27.0
org.apache.commons commons-lang3 3.15.0 3.16.0
org.apache.sshd 2.11.0 2.13.2
org.slf4j 2.0.13 2.0.15
org.testcontainers 1.20.0 1.20.1
org.tukaani xz 1.9 1.10
org.xerial.snappy snappy-java 1.1.10.5 1.1.10.6
com.google.apis google-api-services-drive v3-rev20240628-2.0.0 
v3-rev20240730-2.0.0
org.clojure 1.11.2 1.11.4
org.mongodb 4.11.1 4.11.3
org.neo4j.driver 5.2.0 5.23.0
org.neo4j.docker 5.1 5.19

  was:
google libraries-bom 26.40.0 26.43.0
com.amazonaws 1.12.744 1.12.762
software.amazon.awssdk 2.25.70 2.26.21
com.box 4.8.0 4.11.1
org.bitbucket.b_c/jose4j  0.9.2 0.9.4
com.exceptionfactory.jagged 0.3.2 0.4.0
com.fasterxml.jackson.core 2.17.1 2.17.2
com.fasterxml.jackson.dataformat 2.17.1 2.17.2
com.fasterxml.jackson.datatype 2.17.1 2.17.2
com.fasterxml.jackson.jakarta.rs 2.17.1 2.17.2
com.fasterxml.jackson.jaxrs 2.17.1 2.17.2
com.fasterxml.jackson.jr 2.17.1 2.17.2
com.fasterxml.jackson.module 2.17.1 2.17.2
com.github.pjfanning 4.3.1 4.4.0
com.networknt/json-schema-validator 1.3.2 1.5.0
commons-codec 1.17.0 1.17.1
io.netty 4.1.111.Final 4.1.112.Final
net.sf.saxon Saxon-HE 12.3 12.5
org.apache.commons commons-lang3 3.14.0 3.15.0
org.jetbrains.kotlin 1.9.24 1.9.25
org.jsoup 1.17.2 1.18.1
org.testcontainers 1.19.8 1.20.0
google-api-services-drive v3-rev20240520-2.0.0 v3-rev20240628-2.0.0
com.slack.api bolt-socket-mode 1.37.0 1.40.3
wire-schema-jvm 4.9.3 5.0.0
io.projectreactor reactor-test 3.5.14 3.6.8
org.jline 3.25.1 3.26.3
org.mariadb.jdbc 3.3.0 3.4.1


> Upgrade 1.x Shared Dependencies including JacksonXML and others
> ---
>
> Key: NIFI-13655
> URL: https://issues.apache.org/jira/browse/NIFI-13655
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.27.0
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> google libraries-bom 26.40.0 26.43.0
> com.amazonaws 1.12.744 1.12.762
> software.amazon.awssdk 2.25.70 2.26.21
> com.box 4.8.0 4.11.1
> org.bitbucket.b_c/jose4j  0.9.2 0.9.4
> com.exceptionfactory.jagged 0.3.2 0.4.0
> com.fasterxml.jackson.core 2.17.1 2.17.2
> com.fasterxml.jackson.dataformat 2.17.1 2.17.2
> com.fasterxml.jackson.datatype 2.17.1 2.17.2
> com.fasterxml.jackson.jakarta.rs 2.17.1 2.17.2
> com.fasterxml.jackson.jaxrs 2.17.1 2.17.2
> com.fasterxml.jackson.jr 2.17.1 2.17.2
> com.fasterxml.jackson.module 2.17.1 2.17.2
> com.github.pjfanning 4.3.1 4.4.0
> com.networknt/json-schema-validator 1.3.2 1.5.0
> commons-codec 1.17.0 1.17.1
> io.netty 4.1.111.Final 4.1.112.Final
> net.sf.saxon Saxon-HE 12.3 12.5
> org.apache.commons commons-lang3 3.14.0 3.15.0
> org.jetbrains.kotlin 1.9.24 1.9.25
> org.jsoup 1.17.2 1.18.1
> org.testcontainers 1.19.8 1.20.0
> google-api-services-drive v3-rev20240520-2.0.0 v3-rev20240628-2.0.0
> com.slack.api bolt-socket-mode 1.37.0 1.40.3
> wire-schema-jvm 4.9.3 5.0.0
> io.projectreactor reactor-test 3.5.14 3.6.8
> org.jline 3.25.1 3.26.3
> org.mariadb.jdbc 3.3.0 3.4.1
> com.amazonaws 1.12.762 1.12.767
> software.amazon.awssdk 2.26.21 2.27.1 (or 2.26.31)
> com.github.luben zstd-jni 1.5.6-3 1.5.6-4
> azure-sdk-bom 1.2.25 1.2.26
> com.networknt json-schema-validator 1.5.0 1.5.1
> org.apache.commons commons-compress 1.26.2 1.27.0
> org.apache.commons commons-lang3 3.15.0 3.16.0
> org.apache.sshd 2.11.0 2.13.2
> org.slf4j 2.0.13 2.0.15
> or

[jira] [Created] (NIFI-13655) Upgrade 1.x Shared Dependencies including JacksonXML and others

2024-08-13 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13655:
-

 Summary: Upgrade 1.x Shared Dependencies including JacksonXML and 
others
 Key: NIFI-13655
 URL: https://issues.apache.org/jira/browse/NIFI-13655
 Project: Apache NiFi
  Issue Type: Task
Affects Versions: 1.27.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


google libraries-bom 26.40.0 26.43.0
com.amazonaws 1.12.744 1.12.762
software.amazon.awssdk 2.25.70 2.26.21
com.box 4.8.0 4.11.1
org.bitbucket.b_c/jose4j  0.9.2 0.9.4
com.exceptionfactory.jagged 0.3.2 0.4.0
com.fasterxml.jackson.core 2.17.1 2.17.2
com.fasterxml.jackson.dataformat 2.17.1 2.17.2
com.fasterxml.jackson.datatype 2.17.1 2.17.2
com.fasterxml.jackson.jakarta.rs 2.17.1 2.17.2
com.fasterxml.jackson.jaxrs 2.17.1 2.17.2
com.fasterxml.jackson.jr 2.17.1 2.17.2
com.fasterxml.jackson.module 2.17.1 2.17.2
com.github.pjfanning 4.3.1 4.4.0
com.networknt/json-schema-validator 1.3.2 1.5.0
commons-codec 1.17.0 1.17.1
io.netty 4.1.111.Final 4.1.112.Final
net.sf.saxon Saxon-HE 12.3 12.5
org.apache.commons commons-lang3 3.14.0 3.15.0
org.jetbrains.kotlin 1.9.24 1.9.25
org.jsoup 1.17.2 1.18.1
org.testcontainers 1.19.8 1.20.0
google-api-services-drive v3-rev20240520-2.0.0 v3-rev20240628-2.0.0
com.slack.api bolt-socket-mode 1.37.0 1.40.3
wire-schema-jvm 4.9.3 5.0.0
io.projectreactor reactor-test 3.5.14 3.6.8
org.jline 3.25.1 3.26.3
org.mariadb.jdbc 3.3.0 3.4.1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13154) Show Parameter set for Sensitive Property

2024-08-06 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13154:
-

Assignee: (was: Jim Steinebrey)

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Priority: Major
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13437) Properties Parameter Provider

2024-06-26 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17860160#comment-17860160
 ] 

Jim Steinebrey commented on NIFI-13437:
---

[~joewitt] [~exceptionfactory] Thanks for your feedback. I see your points and 
have decided to withdraw this proposed change.

> Properties Parameter Provider
> -
>
> Key: NIFI-13437
> URL: https://issues.apache.org/jira/browse/NIFI-13437
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> Create a Parameter Provider that can take as input a .properties file and 
> create parameters for the property lines in the file. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13437) Properties Parameter Provider

2024-06-26 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13437.
---
Resolution: Won't Fix

> Properties Parameter Provider
> -
>
> Key: NIFI-13437
> URL: https://issues.apache.org/jira/browse/NIFI-13437
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> Create a Parameter Provider that can take as input a .properties file and 
> create parameters for the property lines in the file. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13437) Properties Parameter Provider

2024-06-26 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13437:
-

Assignee: (was: Jim Steinebrey)

> Properties Parameter Provider
> -
>
> Key: NIFI-13437
> URL: https://issues.apache.org/jira/browse/NIFI-13437
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Priority: Major
>
> Create a Parameter Provider that can take as input a .properties file and 
> create parameters for the property lines in the file. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13437) Properties Parameter Provider

2024-06-24 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17859781#comment-17859781
 ] 

Jim Steinebrey commented on NIFI-13437:
---

[~joewitt] [~exceptionfactory] 

The intent is to let users import a parameter for each line in a properties file.
Like other existing parameter providers, during the first import the user can 
choose which of the imported parameters are treated as sensitive.

I also plan to support reading property files that are in base64 format, as 
FileParameterProvider does.
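
A minimal sketch of what such a provider would have done, using only the JDK (hypothetical names, not NiFi's ParameterProvider API; the optional base64 handling mirrors the comment above):

{code:java}
// Hypothetical sketch: read a .properties file and turn each entry into a
// parameter name/value pair, optionally base64-decoding the raw file contents first.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

final class PropertiesParameterSketch {
    static Map<String, String> readParameters(final Path propertiesFile, final boolean base64Encoded)
            throws IOException {
        byte[] raw = Files.readAllBytes(propertiesFile);
        if (base64Encoded) {
            raw = Base64.getDecoder().decode(new String(raw).trim());
        }
        final Properties properties = new Properties();
        try (InputStream in = new ByteArrayInputStream(raw)) {
            properties.load(in);
        }
        final Map<String, String> parameters = new LinkedHashMap<>();
        for (final String name : properties.stringPropertyNames()) {
            parameters.put(name, properties.getProperty(name));
        }
        return parameters;
    }
}
{code}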

> Properties Parameter Provider
> -
>
> Key: NIFI-13437
> URL: https://issues.apache.org/jira/browse/NIFI-13437
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> Create a Parameter Provider that can take as input a .properties file and 
> create parameters for the property lines in the file. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13437) Properties Parameter Provider

2024-06-24 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13437:
-

 Summary: Properties Parameter Provider
 Key: NIFI-13437
 URL: https://issues.apache.org/jira/browse/NIFI-13437
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


Create a Parameter Provider that can take as input a .properties file and 
create parameters for the property lines in the file. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-10666) PrometheusReportingTask does not use UTF-8 encoding on /metrics/ endpoint

2024-06-20 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-10666:
-

Assignee: Jim Steinebrey

> PrometheusReportingTask does not use UTF-8 encoding on /metrics/ endpoint
> -
>
> Key: NIFI-10666
> URL: https://issues.apache.org/jira/browse/NIFI-10666
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.17.0, 1.16.3, 1.18.0, 1.23.2, 1.25.0, 2.0.0-M2
> Environment: JVM with non-UTF-8 default encoding (e.g. default 
> Windows installation)
>Reporter: René Zeidler
>Assignee: Jim Steinebrey
>Priority: Minor
>  Labels: encoding, prometheus, utf-8
> Attachments: missing-header.png
>
>
> We have created a default PrometheusReportingTask for our NiFi instance and 
> tried to consume the metrics with Prometheus. However, Prometheus threw the 
> following error:
> {code:java}
> ts=2022-10-19T12:25:18.110Z caller=scrape.go:1332 level=debug 
> component="scrape manager" scrape_pool=nifi-cluster 
> target=http://***nifi***:9092/metrics msg="Append failed" err="invalid UTF-8 
> label value" {code}
> Upon further inspection, we noticed that the /metrics/ endpoint exposed by 
> the reporting task does not use UTF-8 encoding, which is required by 
> Prometheus (as documented here: [Exposition formats | 
> Prometheus|https://prometheus.io/docs/instrumenting/exposition_formats/]).
> Our flow uses non-ASCII characters (in our case German umlauts like "ü"). As 
> a workaround, removing those characters fixes the Prometheus error, but this 
> is not practical for a large flow in German language.
> Opening the /metrics/ endpoint in a browser confirms that the encoding used 
> is not UTF-8:
> {code:java}
> > document.characterSet
> 'windows-1252' {code}
> 
> The responsible code might be here:
> [https://github.com/apache/nifi/blob/2be5c26f287469f4f19f0fa759d6c1b56dc0e348/nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusServer.java#L67]
> The PrometheusServer used by the reporting task uses an OutputStreamWriter 
> with the default encoding, instead of explicitly using UTF-8. The 
> Content-Type header set in that function also does not get passed along (see 
> screenshot).
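
A sketch of the straightforward fix described above (not the actual patch): construct the writer with an explicit UTF-8 charset and declare the charset in the Content-Type header so non-ASCII label values such as German umlauts survive.

{code:java}
// Sketch only: the key change is passing StandardCharsets.UTF_8 instead of relying
// on the JVM default charset, and advertising the charset in the Content-Type header.
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

final class MetricsEncodingSketch {
    static Writer utf8Writer(final OutputStream responseStream) {
        // before: new OutputStreamWriter(responseStream) -> platform default (e.g. windows-1252)
        return new OutputStreamWriter(responseStream, StandardCharsets.UTF_8);
    }

    static String contentType() {
        // Prometheus text exposition format with an explicit charset
        return "text/plain; version=0.0.4; charset=utf-8";
    }
}
{code}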



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855991#comment-17855991
 ] 

Jim Steinebrey commented on NIFI-13413:
---

[~exceptionfactory] Thanks for pointing that out. Since the upgrade versions I 
was looking at will not fix the CVEs, I am closing this ticket.

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reopened NIFI-13413:
---

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13413.
---
Resolution: Won't Fix

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13413.
---
Resolution: Fixed

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855985#comment-17855985
 ] 

Jim Steinebrey commented on NIFI-13413:
---

[~exceptionfactory] Thanks so much for your comments.
I was exploring some different dependencies to see if there is a latest version 
that fixes them. I was going to put them all together, but given your comment I 
am fine with splitting them up.

Here is the first one I found that I will put in the ticket.
protobuf-java-2.5.0.jar (pkg:maven/com.google.protobuf/protobuf-java@2.5.0, 
cpe:2.3:a:google:protobuf-java:2.5.0:*:*:*:*:*:*:*, 
cpe:2.3:a:protobuf:protobuf:2.5.0:*:*:*:*:*:*:*) : CVE-2022-3171, 
CVE-2022-3509, CVE-2021-22569
3.25.3 -> 4.27.1
nifi-protobuf-services pom.xml

 

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13413) Dependency upgrades for proto-buf-java

2024-06-18 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13413:
--
Summary: Dependency upgrades for proto-buf-java   (was: Dependency upgrades 
to resolve cve's)

> Dependency upgrades for proto-buf-java 
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13413) Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1

2024-06-18 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13413:
--
Summary: Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1  (was: 
Dependency upgrades for proto-buf-java )

> Dependency upgrades for proto-buf-java 3.25.3 -> 4.27.1
> ---
>
> Key: NIFI-13413
> URL: https://issues.apache.org/jira/browse/NIFI-13413
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13413) Dependency upgrades to resolve cve's

2024-06-18 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13413:
-

 Summary: Dependency upgrades to resolve cve's
 Key: NIFI-13413
 URL: https://issues.apache.org/jira/browse/NIFI-13413
 Project: Apache NiFi
  Issue Type: Task
  Components: Extensions
Affects Versions: 2.0.0-M3, 1.26.0
Reporter: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13393) Avoid making extra transient flow file during FF creation

2024-06-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13393.
---
Resolution: Not A Problem

> Avoid making extra transient flow file during FF creation
> -
>
> Key: NIFI-13393
> URL: https://issues.apache.org/jira/browse/NIFI-13393
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This PR is a proposal to see if the PMC agrees with this API change.
> I have not written any new unit tests for this PR yet until I get buy-in on 
> it.
> I added two new create methods to the API of ProcessSession to allow the 
> option of
> passing in ff attributes to be added to the flow file being created. This is 
> not
> a breaking change because the original create methods are still there.
> ProcessSession
> FlowFile create(FlowFile parent, final Map<String, String> attributes);
> FlowFile create(final Map<String, String> attributes);
> Changing a core API is a very significant change, but I hope people will see it 
> as worthwhile to allow us to avoid creating an extra FlowFile object in many 
> places where FlowFiles are created. Not all places set attributes right after 
> FF creation, but very many of them do and could benefit.
> There are 150+ places where these new methods can be used, and I only changed 
> the GenerateTableFetch processor to call them so you can see how the new 
> methods are used.
> I expect using these new create methods has the potential to 
> avoid noticeable transient memory allocation like the earlier API addition of
> SessionContext.isAutoTerminated did.
> Also, if this PR is approved, I and others can change processors to call these 
> new methods in future PRs (not as part of this PR).
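
For illustration, a compilable sketch of the existing pattern the proposal wanted to shorten (session.create(parent) and session.putAllAttributes(...) are the current ProcessSession API; the create(parent, attributes) overload shown in the comment was never added, since the issue was resolved as Not A Problem):

{code:java}
// Illustration only. The proposed overload would fold both calls into one:
//     flowFile = session.create(parent, attributes);   // proposed, not in the API
import java.util.Map;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;

final class CreateWithAttributesSketch {
    static FlowFile createWithAttributes(final ProcessSession session, final FlowFile parent,
                                         final Map<String, String> attributes) {
        // Today: create first, then immediately replace the FlowFile to attach attributes.
        final FlowFile created = session.create(parent);
        return session.putAllAttributes(created, attributes);
    }
}
{code}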



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file if it is being sent to autoTerminated relationship

2024-06-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13390.
---
Resolution: Won't Fix

> MergeRecord: Skip setting an attribute on flow file if it is being sent to 
> autoTerminated relationship
> --
>
> Key: NIFI-13390
> URL: https://issues.apache.org/jira/browse/NIFI-13390
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> MergeRecord: Enhance to not write any attributes to original flow file if 
> original relationship is autoTerminated.
> This code is actually in RecordBin.java which gets used by MergeRecord.
> MergeRecord can merge large numbers of flow files and often has the original 
> relationship autoTerminated, so it is worthwhile to change the code to not 
> update all those original flow files if they are going to an autoTerminated 
> relationship where they will be immediately discarded. 
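
A sketch of the proposed optimization with hypothetical names and attribute key (the issue was resolved as Won't Fix, so this is illustration only): update the originals with merge bookkeeping attributes only when the original relationship will actually deliver them somewhere.

{code:java}
// Hypothetical sketch: skip per-FlowFile attribute updates when the originals are
// routed to an auto-terminated relationship and will be discarded immediately.
import java.util.ArrayList;
import java.util.List;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

final class OriginalTransferSketch {
    void transferOriginals(final ProcessSession session, final List<FlowFile> originals,
                           final Relationship relOriginal, final boolean originalAutoTerminated,
                           final String mergeUuid) {
        List<FlowFile> toTransfer = originals;
        if (!originalAutoTerminated) {
            toTransfer = new ArrayList<>(originals.size());
            for (final FlowFile original : originals) {
                // putAttribute returns the updated FlowFile; the old reference is stale
                toTransfer.add(session.putAttribute(original, "merge.uuid", mergeUuid));
            }
        }
        session.transfer(toTransfer, relOriginal);
    }
}
{code}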



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file if it is being sent to autoTerminated relationship

2024-06-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reopened NIFI-13390:
---

> MergeRecord: Skip setting an attribute on flow file if it is being sent to 
> autoTerminated relationship
> --
>
> Key: NIFI-13390
> URL: https://issues.apache.org/jira/browse/NIFI-13390
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> MergeRecord: Enhance to not write any attributes to original flow file if 
> original relationship is autoTerminated.
> This code is actually in RecordBin.java which gets used by MergeRecord.
> MergeRecord can merge large numbers of flow files and often has the original 
> relationship autoTerminated, so it is worthwhile to change the code to not 
> update all those original flow files if they are going to an autoTerminated 
> relationship where they will be immediately discarded. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file if it is being sent to autoTerminated relationship

2024-06-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13390.
---
Resolution: Won't Fix

See reason in the discussion of the closed PR.

> MergeRecord: Skip setting an attribute on flow file if it is being sent to 
> autoTerminated relationship
> --
>
> Key: NIFI-13390
> URL: https://issues.apache.org/jira/browse/NIFI-13390
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> MergeRecord: Enhance to not write any attributes to original flow file if 
> original relationship is autoTerminated.
> This code is actually in RecordBin.java which gets used by MergeRecord.
> MergeRecord can merge large numbers of flow files and often has the original 
> relationship autoTerminated, so it is worthwhile to change the code to not 
> update all those original flow files if they are going to an autoTerminated 
> relationship where they will be immediately discarded. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file if it is being sent to autoTerminated relationship

2024-06-17 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13390:
--
Status: Reopened  (was: Reopened)

> MergeRecord: Skip setting an attribute on flow file if it is being sent to 
> autoTerminated relationship
> --
>
> Key: NIFI-13390
> URL: https://issues.apache.org/jira/browse/NIFI-13390
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> MergeRecord: Enhance to not write any attributes to original flow file if 
> original relationship is autoTerminated.
> This code is actually in RecordBin.java which gets used by MergeRecord.
> MergeRecord can merge large numbers of flow files and often has the original 
> relationship autoTerminated, so it is worthwhile to change the code to not 
> update all those original flow files if they are going to an autoTerminated 
> relationship where they will be immediately discarded. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13393) Avoid making extra transient flow file during FF creation

2024-06-15 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13393:
--
Description: 
This PR is a proposal to see if the PMC agrees with this API change.
I have not written any new unit tests for this PR yet until I get buy-in on it.

I added two new create methods to the API of ProcessSession to allow the option 
of
passing in ff attributes to be added to the flow file being created. This is not
a breaking change because the original create methods are still there.

ProcessSession
FlowFile create(FlowFile parent, final Map<String, String> attributes);
FlowFile create(final Map<String, String> attributes);

Changing a core API is a very significant change, but I hope people will see it
as worthwhile because it lets us avoid creating an extra FlowFile object in many
places where FlowFiles are created. Not all places set attributes right after
FF creation, but very many of them do and could benefit.

There are 150+ places where these new methods can be used, and I only changed the
GenerateTableFetch processor to call them so you can see how the new methods are used.
I expect using these new create methods has the potential to avoid noticeable
transient memory allocation, much as the earlier API addition of
ProcessContext.isAutoTerminated did.

Also, if this PR is approved, myself and others can change processors to call these
new methods in future PRs (not as part of this PR).
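
A minimal sketch of the intended usage, assuming the proposed signatures above are
adopted as written (they are the proposal in this ticket, not a released NiFi API);
the two-step variant shown in the comments is the pattern processors use today:

    // Sketch only: create a FlowFile with its initial attributes in one call,
    // instead of create() followed by putAllAttributes(), which builds an
    // extra transient FlowFile object.
    import java.util.Map;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessSession;

    final class CreateWithAttributes {
        static FlowFile createChild(final ProcessSession session, final FlowFile parent,
                                    final Map<String, String> initialAttributes) {
            // Today: two FlowFile objects are created.
            // FlowFile child = session.create(parent);
            // child = session.putAllAttributes(child, initialAttributes);

            // Proposed: a single FlowFile carrying the attributes from the start.
            return session.create(parent, initialAttributes);
        }
    }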

> Avoid making extra transient flow file during FF creation
> -
>
> Key: NIFI-13393
> URL: https://issues.apache.org/jira/browse/NIFI-13393
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This PR is a proposal to see if the PMC agrees with this API change.
> I have not written any new unit tests for this PR yet until I get buy-in on 
> it.
> I added two new create methods to the API of ProcessSession to allow the 
> option of
> passing in ff attributes to be added to the flow file being created. This is 
> not
> a breaking change because the original create methods are still there.
> ProcessSession
> FlowFile create(FlowFile parent, final Map<String, String> attributes);
> FlowFile create(final Map<String, String> attributes);
> Changing a core API is a very significant change, but I hope people will see it
> as worthwhile because it lets us avoid creating an extra FlowFile object in many
> places where FlowFiles are created. Not all places set attributes right after
> FF creation, but very many of them do and could benefit.
> There are 150+ places where these new methods can be used, and I only changed the
> GenerateTableFetch processor to call them so you can see how the new methods are used.
> I expect using these new create methods has the potential to avoid noticeable
> transient memory allocation, much as the earlier API addition of
> ProcessContext.isAutoTerminated did.
> Also, if this PR is approved, myself and others can change processors to call these
> new methods in future PRs (not as part of this PR).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13393) Avoid making extra transient flow file during FF creation

2024-06-15 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13393:
--
Component/s: Core Framework
 (was: Extensions)

> Avoid making extra transient flow file during FF creation
> -
>
> Key: NIFI-13393
> URL: https://issues.apache.org/jira/browse/NIFI-13393
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13397) PutDatabaseRecord check for cause of ProcessException being a SQLtransientException

2024-06-13 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854767#comment-17854767
 ] 

Jim Steinebrey commented on NIFI-13397:
---

The [d...@nifi.apache.org|mailto:d...@nifi.apache.org] mailing list got this 
email from a NiFi user:
{quote}I have tested the PutDatabaseRecord processor when the database 
connection fails on NiFi version 1.26.0, and I propose handling this error with 
the retry relationship.
{quote}
After I disconnected the database connection, I got the error message below:

 
*ERROR* [Timer-Driven Process Thread-8] o.a.n.p.standard.PutDatabaseRecord 
PutDatabaseRecord[id=bcec93b5-306b-3fec-6eac-dfd3916a5dab] 
Failed to put Records to database for 
StandardFlowFileRecord[uuid=ba18e9a8-2cc8-4f7e-adcd-1da757b483b1,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1716964021086-21, container=default, 
section=21], offset=0, 
length=8450],offset=0,name=3bbaf37a-f692-4389-8cfa-59ce265ceaee,size=8450]. 
*Routing to failure.*
*org.apache.nifi.processor.exception.ProcessException: Connection retrieval 
failed*
at 
org.apache.nifi.dbcp.HikariCPConnectionPool.getConnection(HikariCPConnectionPool.java:363)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:55)
at 
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:105)
at jdk.proxy15/jdk.proxy15.$Proxy93.getConnection(Unknown Source)
at 
org.apache.nifi.processors.standard.PutDatabaseRecord.onTrigger(PutDatabaseRecord.java:486)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1361)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:247)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)
*Caused by: java.sql.SQLTransientConnectionException:* 
HikariCPConnectionPool[id=c25182df-725f-3c25-649b-9481538a3ec2] - Connection is 
not available, request timed out after 5004ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:696)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128)
at 
org.apache.nifi.dbcp.HikariCPConnectionPool.getConnection(HikariCPConnectionPool.java:354)
... 18 common frames omitted
*Caused by: java.sql.SQLRecoverableException:* IO Error: Invalid Operation, NOT 
Connected
at oracle.jdbc.driver.T4CConnection.doSetNetworkTimeout(T4CConnection.java:9395)
at 
oracle.jdbc.driver.PhysicalConnection.setNetworkTimeout(PhysicalConnection.java:1)
at com.zaxxer.hikari.pool.PoolBase.setNetworkTimeout(PoolBase.java:566)
at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:173)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
... 21 common frames omitted
*Caused by: oracle.net.ns.NetException:* Invalid Operation, NOT Connected
at oracle.net.nt.TcpNTAdapter.setOption(TcpNTAdapter.java:757)
at oracle.net.ns.NSProtocol.setOption(NSProtocol.java:730)
at oracle.net.ns.NSProtocol.setSocketReadTimeout(NSProtocol.java:1045)
at oracle.jdbc.driver.T4CConnection.doSetNetworkTimeout(T4CConnection.java:9392)

... 25 common frames omitted  
{quote}Then I checked the source code of the PutDatabaseRecord processor and 
found the code that inspects the Throwable with an if clause.
{quote}
I see that it only checks "if (toAnalyze instanceof SQLTransientException)" and does 
not unwrap the ProcessException, so it fails to catch this exception and handle it 
via the retry relationship.

So I would like to clarify why this error routes to failure, or whether this is a 
bug.

> PutDatabaseRecord check for cause of ProcessException being a 
> SQLtransientException
> ---
>
>  

[jira] [Assigned] (NIFI-13397) PutDatabaseRecord check for cause of ProcessException being a SQLtransientException

2024-06-13 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13397:
-

Assignee: Jim Steinebrey

> PutDatabaseRecord check for cause of ProcessException being a 
> SQLtransientException
> ---
>
> Key: NIFI-13397
> URL: https://issues.apache.org/jira/browse/NIFI-13397
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13397) PutDatabaseRecord check for cause of ProcessException being a SQLTransientException

2024-06-13 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13397:
--
Summary: PutDatabaseRecord check for cause of ProcessException being a 
SQLTransientException  (was: PutDatabaseRecord check for cause of 
ProcessException being a SQLtransientException)

> PutDatabaseRecord check for cause of ProcessException being a 
> SQLTransientException
> ---
>
> Key: NIFI-13397
> URL: https://issues.apache.org/jira/browse/NIFI-13397
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13397) PutDatabaseRecord check for cause of ProcessException being a SQLtransientException

2024-06-13 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854768#comment-17854768
 ] 

Jim Steinebrey commented on NIFI-13397:
---

I looked at the existing code:

        final Throwable toAnalyze = (e instanceof BatchUpdateException) ? e.getCause() : e;
        if (toAnalyze instanceof SQLTransientException) {

and I feel it would be safer to change the first line this way (and still 
satisfy your use case):

        final Throwable toAnalyze = (e instanceof BatchUpdateException || e instanceof ProcessException) ? e.getCause() : e;
        if (toAnalyze instanceof SQLTransientException) {
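
For context, a self-contained sketch of that unwrapping logic (illustrative only,
not the actual patch; routing the result to retry vs. failure follows the usual
NiFi relationship convention):

    // Sketch only: decide whether a failure should be retried by unwrapping a
    // ProcessException (or BatchUpdateException) to look for a transient SQL error.
    import java.sql.BatchUpdateException;
    import java.sql.SQLTransientException;
    import org.apache.nifi.processor.exception.ProcessException;

    final class RetryDecision {
        // Returns true when the underlying cause is transient, meaning the
        // FlowFile should be routed to retry instead of failure.
        static boolean shouldRetry(final Exception e) {
            final Throwable toAnalyze =
                    (e instanceof BatchUpdateException || e instanceof ProcessException)
                            ? e.getCause() : e;
            return toAnalyze instanceof SQLTransientException;
        }
    }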

> PutDatabaseRecord check for cause of ProcessException being a 
> SQLtransientException
> ---
>
> Key: NIFI-13397
> URL: https://issues.apache.org/jira/browse/NIFI-13397
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13397) PutDatabaseRecord check for cause of ProcessException being a SQLtransientException

2024-06-13 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13397:
-

 Summary: PutDatabaseRecord check for cause of ProcessException 
being a SQLtransientException
 Key: NIFI-13397
 URL: https://issues.apache.org/jira/browse/NIFI-13397
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13393) Avoid making extra transient flow file during FF creation

2024-06-12 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13393:
-

 Summary: Avoid making extra transient flow file during FF creation
 Key: NIFI-13393
 URL: https://issues.apache.org/jira/browse/NIFI-13393
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 2.0.0-M3
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file if it is being sent to autoTerminated relationship

2024-06-12 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13390:
--
Summary: MergeRecord: Skip setting an attribute on flow file if it is being 
sent to autoTerminated relationship  (was: MergeRecord: Skip setting an 
attribute on flow file which is being sent to autoTerminated relationship)

> MergeRecord: Skip setting an attribute on flow file if it is being sent to 
> autoTerminated relationship
> --
>
> Key: NIFI-13390
> URL: https://issues.apache.org/jira/browse/NIFI-13390
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MergeRecord: Enhance to not write any attributes to original flow file if 
> original relationship is autoTerminated.
> This code is actually in RecordBin.java which gets used by MergeRecord.
> MergeRecord can merge large numbers of flow files and often has the original 
> relationship autoTerminated, so it is worthwhile to change the code to not 
> update all those original flow files if they are going to an autoTerminated 
> relationship where they will be immediately discarded. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13199) Update ValidateRecord to avoid writing to FlowFiles that will be auto-terminated

2024-06-12 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854438#comment-17854438
 ] 

Jim Steinebrey commented on NIFI-13199:
---

[~markap14] I have submitted a PR for this ticket. Would you like to review it?

> Update ValidateRecord to avoid writing to FlowFiles that will be 
> auto-terminated
> 
>
> Key: NIFI-13199
> URL: https://issues.apache.org/jira/browse/NIFI-13199
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of ValidateRecord, the processor is commonly 
> used to filter out invalid records. Before writing records to an 'invalid' 
> FlowFile we should first check if the relationship is auto-terminated and not 
> spend the resources to create the data if it will be auto-terminated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13198) Update RouteText not to write to FlowFiles for auto-terminated relationships

2024-06-12 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854437#comment-17854437
 ] 

Jim Steinebrey commented on NIFI-13198:
---

[~markap14] I have submitted a PR. Would you like to review it?

> Update RouteText not to write to FlowFiles for auto-terminated relationships
> 
>
> Key: NIFI-13198
> URL: https://issues.apache.org/jira/browse/NIFI-13198
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of RouteText, the processor is commonly used to 
> filter out unwanted lines of text. For anything that is auto-terminated, 
> though, we still write out the data. We should instead check if the 
> Relationship that we're writing to is auto-terminated and if so, don't bother 
> creating the flowfile or writing to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13390) MergeRecord: Skip setting an attribute on flow file which is being sent to autoTerminated relationship

2024-06-11 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13390:
-

 Summary: MergeRecord: Skip setting an attribute on flow file which 
is being sent to autoTerminated relationship
 Key: NIFI-13390
 URL: https://issues.apache.org/jira/browse/NIFI-13390
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 2.0.0-M3
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


MergeRecord: Enhance to not write any attributes to original flow file if 
original relationship is autoTerminated.

This code is actually in RecordBin.java which gets used by MergeRecord.
MergeRecord can merge large numbers of flow files and often has the original 
relationship autoTerminated, so it is worthwhile to change the code to not 
update all those original flow files if they are going to an autoTerminated 
relationship where they will be immediately discarded. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13389) ConsumeJMS - use putAllAttributes instead of multiple putAttribute to avoid creating temp flow files

2024-06-11 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13389:
-

 Summary: ConsumeJMS - use putAllAttributes instead of multiple 
putAttribute to avoid creating temp flow files
 Key: NIFI-13389
 URL: https://issues.apache.org/jira/browse/NIFI-13389
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13199) Update ValidateRecord to avoid writing to FlowFiles that will be auto-terminated

2024-06-07 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13199:
-

Assignee: Jim Steinebrey

> Update ValidateRecord to avoid writing to FlowFiles that will be 
> auto-terminated
> 
>
> Key: NIFI-13199
> URL: https://issues.apache.org/jira/browse/NIFI-13199
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jim Steinebrey
>Priority: Major
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of ValidateRecord, the processor is commonly 
> used to filter out invalid records. Before writing records to an 'invalid' 
> FlowFile we should first check if the relationship is auto-terminated and not 
> spend the resources to create the data if it will be auto-terminated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13198) Update RouteText not to write to FlowFiles for auto-terminated relationships

2024-06-07 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13198:
-

Assignee: Jim Steinebrey

> Update RouteText not to write to FlowFiles for auto-terminated relationships
> 
>
> Key: NIFI-13198
> URL: https://issues.apache.org/jira/browse/NIFI-13198
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jim Steinebrey
>Priority: Major
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of RouteText, the processor is commonly used to 
> filter out unwanted lines of text. For anything that is auto-terminated, 
> though, we still write out the data. We should instead check if the 
> Relationship that we're writing to is auto-terminated and if so, don't bother 
> creating the flowfile or writing to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-06 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852974#comment-17852974
 ] 

Jim Steinebrey edited comment on NIFI-13359 at 6/7/24 1:02 AM:
---

[https://github.com/apache/nifi/pull/8928]


was (Author: JIRAUSER303705):
[https://github.com/apache/nifi/pull/8928]

 

> Tune ExecuteSQL/Record to create fewer transient flow files
> ---
>
> Key: NIFI-13359
> URL: https://issues.apache.org/jira/browse/NIFI-13359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
> creates several unneeded temp flow files by using putAttribute instead of 
> putAllAttributes. Tune the code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-06 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852974#comment-17852974
 ] 

Jim Steinebrey commented on NIFI-13359:
---

[https://github.com/apache/nifi/pull/8928]

> Tune ExecuteSQL/Record to create fewer transient flow files
> ---
>
> Key: NIFI-13359
> URL: https://issues.apache.org/jira/browse/NIFI-13359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
> creates several unneeded temp flow files by using putAttribute instead of 
> putAllAttributes. Tune the code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-06 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852974#comment-17852974
 ] 

Jim Steinebrey edited comment on NIFI-13359 at 6/7/24 1:01 AM:
---

[https://github.com/apache/nifi/pull/8928]

 


was (Author: JIRAUSER303705):
[https://github.com/apache/nifi/pull/8928]

> Tune ExecuteSQL/Record to create fewer transient flow files
> ---
>
> Key: NIFI-13359
> URL: https://issues.apache.org/jira/browse/NIFI-13359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
> creates several unneeded temp flow files by using putAttribute instead of 
> putAllAttributes. Tune the code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13196) Add a new isAutoTerminated(Relationship) method to ProcessContext

2024-06-04 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852218#comment-17852218
 ] 

Jim Steinebrey commented on NIFI-13196:
---

Thanks for the explanation, [~joewitt]. 

> Add a new isAutoTerminated(Relationship) method to ProcessContext
> -
>
> Key: NIFI-13196
> URL: https://issues.apache.org/jira/browse/NIFI-13196
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently a Processor has no way of determining whether or not a Relationship 
> is auto-terminated. There are cases where a Processor forks an incoming 
> FlowFile and updates it (in a potentially expensive manner) and then 
> transfers it to a Relationship that is auto-terminated.
> We should add the ability to determine whether or not a given relationship is 
> auto-terminated so that we can be more efficient



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13196) Add a new isAutoTerminated(Relationship) method to ProcessContext

2024-06-04 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852150#comment-17852150
 ] 

Jim Steinebrey commented on NIFI-13196:
---

[~markap14] I am going to backport this to apache/nifi:support/nifi-1.x unless 
someone has a reason it should not be there. Having this will allow other PRs 
which call this method to be backported.

> Add a new isAutoTerminated(Relationship) method to ProcessContext
> -
>
> Key: NIFI-13196
> URL: https://issues.apache.org/jira/browse/NIFI-13196
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently a Processor has no way of determining whether or not a Relationship 
> is auto-terminated. There are cases where a Processor forks an incoming 
> FlowFile and updates it (in a potentially expensive manner) and then 
> transfers it to a Relationship that is auto-terminated.
> We should add the ability to determine whether or not a given relationship is 
> auto-terminated so that we can be more efficient



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-04 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13359:
--
Description: AbstractExecuteSQL (which is extended by ExecuteSQL and 
ExecuteSQLRecord) creates several unneeded temp flow files by using 
putAttribute instead of putAllAttributes. Tune the code to create fewer 
intermediate flow files.  (was: AbstractExecuteSQL (which is extended by 
ExecuteSQL and ExecuteSQLRecord) creates a flow file which in a certain case 
gets immediately removed in onTrigger. Also they use putAttribute instead of 
putAllAttributes. Tune the code to create fewer intermediate flow files.)

> Tune ExecuteSQL/Record to create fewer transient flow files
> ---
>
> Key: NIFI-13359
> URL: https://issues.apache.org/jira/browse/NIFI-13359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
> creates several unneeded temp flow files by using putAttribute instead of 
> putAllAttributes. Tune the code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-04 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13359:
--
Description: AbstractExecuteSQL (which is extended by ExecuteSQL and 
ExecuteSQLRecord) creates a flow file which in a certain case gets immediately 
removed in onTrigger. Also they use putAttribute instead of putAllAttributes. 
Tune the code to create fewer intermediate flow files.  (was: 
AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
creates a flow file which in a certain case gets immediately removed in 
onTrigger. Also they use putAttribute instead of putAllAttributes. Tune the 
code to create fewer intermediate flow files. ++ )

> Tune ExecuteSQL/Record to create fewer transient flow files
> ---
>
> Key: NIFI-13359
> URL: https://issues.apache.org/jira/browse/NIFI-13359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
> creates a flow file which in a certain case gets immediately removed in 
> onTrigger. Also they use putAttribute instead of putAllAttributes. Tune the 
> code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13359) Tune ExecuteSQL/Record to create fewer transient flow files

2024-06-04 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13359:
-

 Summary: Tune ExecuteSQL/Record to create fewer transient flow 
files
 Key: NIFI-13359
 URL: https://issues.apache.org/jira/browse/NIFI-13359
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 2.0.0-M3, 1.26.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


AbstractExecuteSQL (which is extended by ExecuteSQL and ExecuteSQLRecord) 
creates a flow file which in a certain case gets immediately removed in 
onTrigger. Also they use putAttribute instead of putAllAttributes. Tune the 
code to create fewer intermediate flow files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13351) Enhance QueryDatabaseTable and QueryDatabaseTableRecord processors not to call session.putAttribute multiple times

2024-06-03 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13351:
-

 Summary: Enhance QueryDatabaseTable and QueryDatabaseTableRecord 
processors not to call session.putAttribute multiple times
 Key: NIFI-13351
 URL: https://issues.apache.org/jira/browse/NIFI-13351
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 2.0.0-M3, 1.26.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


Per [~markap14] in the following 
[post|https://lists.apache.org/thread/7zo2px31r3377c7vhby4h6nrngdf3llf], one 
should avoid calling session.putAttribute many times: to maintain object 
immutability, each call has to create a new FlowFile object (and a new HashMap 
of all attributes!), which can lead to a large amount of unneeded garbage 
being created.

Enhance the QueryDatabaseTable and QueryDatabaseTableRecord processors not to call 
session.putAttribute multiple times in a for loop for column max values. The 
repeated putAttribute calls are in AbstractQueryDatabaseTable, which is 
extended by those two processors.
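
A minimal sketch of the batched pattern, assuming a hypothetical helper and
attribute key format (not the actual AbstractQueryDatabaseTable code):

    // Sketch only: collect attributes in a Map and apply them with a single
    // putAllAttributes call; each putAttribute call clones the FlowFile and
    // its attribute map to preserve immutability.
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessSession;

    final class AttributeBatching {
        static FlowFile addMaxValueAttributes(final ProcessSession session, final FlowFile flowFile,
                                              final Map<String, String> maxValuesByColumn) {
            final Map<String, String> attributes = new HashMap<>();
            // Hypothetical key format; the real processor uses its own naming.
            maxValuesByColumn.forEach((column, maxValue) -> attributes.put("maxvalue." + column, maxValue));
            // One new immutable FlowFile instead of one per attribute.
            return session.putAllAttributes(flowFile, attributes);
        }
    }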



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13341) Update MergeContent and JoinEnrichment to not write attributes to FlowFiles for auto-terminated Original relationship

2024-06-03 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13341:
--
Summary: Update MergeContent and JoinEnrichment to not write attributes to 
FlowFiles for auto-terminated Original relationship  (was: Update MergeContent 
and JoinEnrichment to not write attributes to FlowFiles for auto-terminated 
relationship)

> Update MergeContent and JoinEnrichment to not write attributes to FlowFiles 
> for auto-terminated Original relationship
> -
>
> Key: NIFI-13341
> URL: https://issues.apache.org/jira/browse/NIFI-13341
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of BinFiles (which is extended by MergeContent 
> and JoinEnrichment processors), the processor can have the  REL_ORIGINAL 
> auto-terminated. When it is auto-terminated, skip copying attributes into the 
> flow files which are being auto-terminated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13341) Update MergeContent and JoinEnrichment to not write attributes to FlowFiles for auto-terminated relationship

2024-06-03 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13341:
--
Summary: Update MergeContent and JoinEnrichment to not write attributes to 
FlowFiles for auto-terminated relationship  (was: Update BinFiles not to write 
attributes to FlowFiles for auto-terminated relationship)

> Update MergeContent and JoinEnrichment to not write attributes to FlowFiles 
> for auto-terminated relationship
> 
>
> Key: NIFI-13341
> URL: https://issues.apache.org/jira/browse/NIFI-13341
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of BinFiles (which is extended by MergeContent 
> and JoinEnrichment processors), the processor can have the  REL_ORIGINAL 
> auto-terminated. When it is auto-terminated, skip copying attributes into the 
> flow files which are being auto-terminated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13341) Update BinFiles not to write attributes to FlowFiles for auto-terminated relationship

2024-06-03 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13341:
-

Assignee: Jim Steinebrey

> Update BinFiles not to write attributes to FlowFiles for auto-terminated 
> relationship
> -
>
> Key: NIFI-13341
> URL: https://issues.apache.org/jira/browse/NIFI-13341
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.26.0, 2.0.0-M3
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of BinFiles (which is extended by MergeContent 
> and JoinEnrichment processors), the processor can have the  REL_ORIGINAL 
> auto-terminated. When it is auto-terminated, skip copying attributes into the 
> flow files which are being auto-terminated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13341) Update BinFiles not to write attributes to FlowFiles for auto-terminated relationship

2024-06-03 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13341:
-

 Summary: Update BinFiles not to write attributes to FlowFiles for 
auto-terminated relationship
 Key: NIFI-13341
 URL: https://issues.apache.org/jira/browse/NIFI-13341
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 2.0.0-M3, 1.26.0
Reporter: Jim Steinebrey


NIFI-13196 introduces the ability to check if a relationship is 
auto-terminated. In the case of BinFiles (which is extended by MergeContent and 
JoinEnrichment processors), the processor can have the  REL_ORIGINAL 
auto-terminated. When it is auto-terminated, skip copying attributes into the 
flow files which are being auto-terminated.
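
A minimal sketch of the intended check, assuming ProcessContext.isAutoTerminated(Relationship)
from NIFI-13196 (illustrative helper, not the actual BinFiles change):

    // Sketch only: skip per-FlowFile attribute updates when the "original"
    // relationship is auto-terminated, since those FlowFiles are discarded
    // immediately after transfer.
    import java.util.Map;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;

    final class OriginalTransfer {
        static final Relationship REL_ORIGINAL = new Relationship.Builder().name("original").build();

        static void transferOriginal(final ProcessContext context, final ProcessSession session,
                                     final FlowFile original, final Map<String, String> mergeAttributes) {
            FlowFile toTransfer = original;
            if (!context.isAutoTerminated(REL_ORIGINAL)) {
                // Only pay for cloning the attribute map when a downstream
                // connection can actually observe the new attributes.
                toTransfer = session.putAllAttributes(original, mergeAttributes);
            }
            session.transfer(toTransfer, REL_ORIGINAL);
        }
    }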



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13198) Update RouteText not to write to FlowFiles for auto-terminated relationships

2024-06-02 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851466#comment-17851466
 ] 

Jim Steinebrey commented on NIFI-13198:
---

[~markap14]  I would like to implement this ticket (unless you want to do it 
yourself), OK?

> Update RouteText not to write to FlowFiles for auto-terminated relationships
> 
>
> Key: NIFI-13198
> URL: https://issues.apache.org/jira/browse/NIFI-13198
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Priority: Major
>
> NIFI-13196 introduces the ability to check if a relationship is 
> auto-terminated. In the case of RouteText, the processor is commonly used to 
> filter out unwanted lines of text. For anything that is auto-terminated, 
> though, we still write out the data. We should instead check if the 
> Relationship that we're writing to is auto-terminated and if so, don't bother 
> creating the flowfile or writing to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13154) Show Parameter set for Sensitive Property

2024-05-20 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847907#comment-17847907
 ] 

Jim Steinebrey commented on NIFI-13154:
---

While implementing this, I learned that the current and new UIs do NOT allow any 
leading or trailing spaces around a parameter reference. Therefore I do not 
allow for leading or trailing spaces in the regular expression. The UI displays a 
parameter reference for a sensitive value only if the value begins with #\{ 
and ends with }.
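
A minimal sketch of that check, using a hypothetical helper and pattern (not the
actual UI code):

    // Sketch only: report the parameter name only when the sensitive property
    // value is exactly one parameter reference such as #{MyParameter},
    // with nothing before or after it.
    import java.util.Optional;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    final class SensitiveParameterDisplay {
        private static final Pattern SINGLE_PARAMETER_REFERENCE = Pattern.compile("^#\\{([^#{}]+)}$");

        static Optional<String> referencedParameterName(final String propertyValue) {
            if (propertyValue == null) {
                return Optional.empty();
            }
            final Matcher matcher = SINGLE_PARAMETER_REFERENCE.matcher(propertyValue);
            return matcher.matches() ? Optional.of(matcher.group(1)) : Optional.empty();
        }
    }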

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Assignee: Jim Steinebrey
>Priority: Major
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12670) JoltTransform processors incorrectly encode/decode text in the Jolt Specification

2024-05-15 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-12670:
--
Affects Version/s: 1.26.0
   2.0.0-M3

> JoltTransform processors incorrectly encode/decode text in the Jolt 
> Specification
> -
>
> Key: NIFI-12670
> URL: https://issues.apache.org/jira/browse/NIFI-12670
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Extensions
>Affects Versions: 2.0.0-M1, 1.24.0, 1.25.0, 2.0.0-M2, 1.26.0, 2.0.0-M3
> Environment: JVM with non-UTF-8 default encoding (e.g. default 
> Windows installation)
>Reporter: René Zeidler
>Assignee: Jim Steinebrey
>Priority: Minor
>  Labels: encoding, jolt, json, utf8, windows
> Attachments: Jolt_Transform_Encoding_Bug.json, 
> Jolt_Transform_Encoding_Bug_M2.json, image-2024-01-25-11-01-15-405.png, 
> image-2024-01-25-11-59-56-662.png, image-2024-01-25-12-00-09-544.png
>
>
> h2. Environment
> This issue affects environments where the JVM default encoding is not 
> {{{}UTF-8{}}}. Standard Java installations on Windows are affected, as they 
> usually use the default encoding {{{}windows-1252{}}}. To reproduce the issue 
> on Linux, change the default encoding to {{windows-1252}} by adding the 
> following line to your {{{}bootstrap.conf{}}}:
> {quote}{{java.arg.21=-Dfile.encoding=windows-1252}}
> {quote}
> h2. Summary
> The Jolt Specification of both the JoltTransformJSON and JoltTransformRecord 
> processors is read internally using the system default encoding, even though 
> it is always stored in UTF-8. This causes non-ASCII characters to be garbled 
> in the Jolt Specification, resulting in incorrect transformations (missing 
> data or garbled keys).
> h2. Steps to reproduce
>  # Make sure NiFi runs with a non-UTF-8 default encoding, see "Environment"
>  # Create a GenerateFlowFile processor with the following content:
> {quote}{
>   "regularString": "string with only ASCII characters",
>   "umlautString": "string with non-ASCII characters: ÄÖÜäöüßéèóò",
>   "keyWithÜmlaut": "any string"
> }
> {quote}
>  # Connect the processor to a JoltTransformJSON and/or JoltTransformRecord 
> processor.
> (If using the record based processor, use a default JsonTreeReader and 
> JsonRecordSetWriter. The record reader/writer don't affect this bug.)
> Set the Jolt Specification to:
> {quote}[
>   {
>     "operation": "shift",
>     "spec": {
>       "regularString": "Remapped to Umlaut ÄÖÜ",
>       "umlautString": "Umlaut String",
>       "keyWithÜmlaut": "Key with Umlaut"
>     }
>   }
> ]
> {quote}
>  # Connect the outputs of the Jolt processor(s) to funnels to be able to 
> observe the result in the queue.
>  # Start the Jolt processor(s) and run the GenerateFlowFile processor once.
> The flow should look similar to this:
> !image-2024-01-25-11-01-15-405.png!
> I also attached a JSON export of the example flow.
>  # Observe the content of the resulting FlowFile(s) in the queue.
> h3. Expected Result
> !image-2024-01-25-12-00-09-544.png!
> h3. Actual Result
> !image-2024-01-25-11-59-56-662.png!
>  * Remapped key containing non-ASCII characters is garbled, since the key 
> value originated from the Jolt Specification.
>  * The key "{{{}keyWithÜmlaut{}}}" could not be matched at all, since it 
> contains non-ASCII characters, resulting in missing data in the output.
> h2. Root Cause Analysis
> Both processors use the 
> {{[readTransform|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5a1da5bce9a74a558f08ea4/nifi-nar-bundles/nifi-jolt-bundle/nifi-jolt-processors/src/main/java/org/apache/nifi/processors/jolt/AbstractJoltTransform.java#L242-L249]}}
>  method of {{AbstractJoltTransform}} to read the Jolt Specification property. 
> This method uses an 
> [{{InputStreamReader}}|https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/io/InputStreamReader.html]
>  without specifying an encoding, which then defaults to the default charset 
> of the environment. Text properties are [always encoded in 
> UTF-8|https://github.com/apache/nifi/blob/89836f32d017d77972a4de09c4e864b0e11899a8/nifi-api/src/main/java/org/apache/nifi/components/resource/StandardResourceReferenceFactory.java#L111].
>  When the default charset is not UTF-8, this results in UTF-8 bytes to be 
> interpreted in a different encoding when converting to a string, resulting in 
> a garbled Jolt Specification being used.
> h2. Workaround
> This issue is not present when any attribute expression language is present 
> in the Jolt Specification. Simply adding {{${literal('')}}} anywhere in the 
> Jolt Specification works around this issue.
> This happens because [a different code path is 
> used|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5
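
A minimal sketch of the fix direction, assuming the specification is read from an
InputStream (hypothetical helper, not the actual AbstractJoltTransform code): pass
an explicit charset so the JVM default encoding never applies.

    // Sketch only: decode the Jolt Specification as UTF-8 regardless of the
    // platform default charset (e.g. windows-1252 on Windows).
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.stream.Collectors;

    final class JoltSpecReader {
        static String readSpec(final InputStream in) throws IOException {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
                return reader.lines().collect(Collectors.joining("\n"));
            }
        }
    }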

[jira] [Assigned] (NIFI-12670) JoltTransform processors incorrectly encode/decode text in the Jolt Specification

2024-05-15 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-12670:
-

Assignee: Jim Steinebrey

> JoltTransform processors incorrectly encode/decode text in the Jolt 
> Specification
> -
>
> Key: NIFI-12670
> URL: https://issues.apache.org/jira/browse/NIFI-12670
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Extensions
>Affects Versions: 2.0.0-M1, 1.24.0, 1.25.0, 2.0.0-M2
> Environment: JVM with non-UTF-8 default encoding (e.g. default 
> Windows installation)
>Reporter: René Zeidler
>Assignee: Jim Steinebrey
>Priority: Minor
>  Labels: encoding, jolt, json, utf8, windows
> Attachments: Jolt_Transform_Encoding_Bug.json, 
> Jolt_Transform_Encoding_Bug_M2.json, image-2024-01-25-11-01-15-405.png, 
> image-2024-01-25-11-59-56-662.png, image-2024-01-25-12-00-09-544.png
>
>
> h2. Environment
> This issue affects environments where the JVM default encoding is not 
> {{{}UTF-8{}}}. Standard Java installations on Windows are affected, as they 
> usually use the default encoding {{{}windows-1252{}}}. To reproduce the issue 
> on Linux, change the default encoding to {{windows-1252}} by adding the 
> following line to your {{{}bootstrap.conf{}}}:
> {quote}{{java.arg.21=-Dfile.encoding=windows-1252}}
> {quote}
> h2. Summary
> The Jolt Specification of both the JoltTransformJSON and JoltTransformRecord 
> processors is read internally using the system default encoding, even though 
> it is always stored in UTF-8. This causes non-ASCII characters to be garbled 
> in the Jolt Specification, resulting in incorrect transformations (missing 
> data or garbled keys).
> h2. Steps to reproduce
>  # Make sure NiFi runs with a non-UTF-8 default encoding, see "Environment"
>  # Create a GenerateFlowFile processor with the following content:
> {quote}{
>   "regularString": "string with only ASCII characters",
>   "umlautString": "string with non-ASCII characters: ÄÖÜäöüßéèóò",
>   "keyWithÜmlaut": "any string"
> }
> {quote}
>  # Connect the processor to a JoltTransformJSON and/or JoltTransformRecord 
> processor.
> (If using the record based processor, use a default JsonTreeReader and 
> JsonRecordSetWriter. The record reader/writer don't affect this bug.)
> Set the Jolt Specification to:
> {quote}[
>   {
>     "operation": "shift",
>     "spec": {
>       "regularString": "Remapped to Umlaut ÄÖÜ",
>       "umlautString": "Umlaut String",
>       "keyWithÜmlaut": "Key with Umlaut"
>     }
>   }
> ]
> {quote}
>  # Connect the outputs of the Jolt processor(s) to funnels to be able to 
> observe the result in the queue.
>  # Start the Jolt processor(s) and run the GenerateFlowFile processor once.
> The flow should look similar to this:
> !image-2024-01-25-11-01-15-405.png!
> I also attached a JSON export of the example flow.
>  # Observe the content of the resulting FlowFile(s) in the queue.
> h3. Expected Result
> !image-2024-01-25-12-00-09-544.png!
> h3. Actual Result
> !image-2024-01-25-11-59-56-662.png!
>  * Remapped key containing non-ASCII characters is garbled, since the key 
> value originated from the Jolt Specification.
>  * The key "{{{}keyWithÜmlaut{}}}" could not be matched at all, since it 
> contains non-ASCII characters, resulting in missing data in the output.
> h2. Root Cause Analysis
> Both processors use the 
> {{[readTransform|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5a1da5bce9a74a558f08ea4/nifi-nar-bundles/nifi-jolt-bundle/nifi-jolt-processors/src/main/java/org/apache/nifi/processors/jolt/AbstractJoltTransform.java#L242-L249]}}
>  method of {{AbstractJoltTransform}} to read the Jolt Specification property. 
> This method uses an 
> [{{InputStreamReader}}|https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/io/InputStreamReader.html]
>  without specifying an encoding, which then defaults to the default charset 
> of the environment. Text properties are [always encoded in 
> UTF-8|https://github.com/apache/nifi/blob/89836f32d017d77972a4de09c4e864b0e11899a8/nifi-api/src/main/java/org/apache/nifi/components/resource/StandardResourceReferenceFactory.java#L111].
>  When the default charset is not UTF-8, this results in UTF-8 bytes to be 
> interpreted in a different encoding when converting to a string, resulting in 
> a garbled Jolt Specification being used.
> h2. Workaround
> This issue is not present when any attribute expression language is present 
> in the Jolt Specification. Simply adding {{${literal('')}}} anywhere in the 
> Jolt Specification works around this issue.
> This happens because [a different code path is 
> used|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5a1da5bce9a74a558f08ea4/nifi-nar-bundles/nifi

[jira] [Created] (NIFI-13240) Create Marker Interface for AbstractSingleAttributeControllerService

2024-05-14 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13240:
-

 Summary: Create Marker Interface for 
AbstractSingleAttributeControllerService
 Key: NIFI-13240
 URL: https://issues.apache.org/jira/browse/NIFI-13240
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jim Steinebrey






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13154) Show Parameter set for Sensitive Property

2024-05-10 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845485#comment-17845485
 ] 

Jim Steinebrey commented on NIFI-13154:
---

To be extra safe, I am going to make it display the parameter name only if 
the property value, after trimming leading and trailing spaces, contains a 
single parameter reference and nothing else.
That means that if a sensitive property value contains expression language (EL) 
which merely includes a parameter, I will not change how it works and it 
will continue to show "Sensitive property set". That is safest because the EL 
could include some sensitive info (like a hard-coded string) which should not 
be exposed. 

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Assignee: Jim Steinebrey
>Priority: Major
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13154) Show Parameter set for Sensitive Property

2024-05-10 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845484#comment-17845484
 ] 

Jim Steinebrey commented on NIFI-13154:
---

I am starting on this change unless someone else really wants to do it.

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Assignee: Jim Steinebrey
>Priority: Major
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13154) Show Parameter set for Sensitive Property

2024-05-10 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13154:
--
Priority: Major  (was: Minor)

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Assignee: Jim Steinebrey
>Priority: Major
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13154) Show Parameter set for Sensitive Property

2024-05-10 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13154:
-

Assignee: Jim Steinebrey

> Show Parameter set for Sensitive Property
> -
>
> Key: NIFI-13154
> URL: https://issues.apache.org/jira/browse/NIFI-13154
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M2
>Reporter: Brian Ghigiarelli
>Assignee: Jim Steinebrey
>Priority: Minor
> Attachments: Screenshot 2024-05-06 at 2.33.03 PM.png, Screenshot 
> 2024-05-06 at 2.33.09 PM.png, Screenshot 2024-05-06 at 2.33.54 PM.png
>
>
> When a parameter with a sensitive value is used for a sensitive property in a 
> processor, controller service, etc., we are able to see through the Parameter 
> Context that it is used on the processors, controller service, etc.
> However, the "Sensitive property set" for that property makes it difficult to 
> understand which parameter is set.
> Since the link between the parameter and the property is not sensitive, it 
> would be really nice to show the parameter instead of masking it with the 
> usual "Sensitive property set" banner.
> For example, we can set #\{SensitiveParameter} as the value of the 
> EncryptPGP's Passphrase property. Then, we can't see it again, unless we know 
> which parameter in the context we are using.
> !Screenshot 2024-05-06 at 2.33.03 PM.png!
> !Screenshot 2024-05-06 at 2.33.09 PM.png!
> !Screenshot 2024-05-06 at 2.33.54 PM.png!
>  
> !image-2024-05-06-14-36-35-759.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets caught, the exception message gets lost

2024-05-09 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-12217.
---
Fix Version/s: 2.0.0-M3
   1.26.0
   Resolution: Fixed

This was noticed independently and fixed as part of NIFI-13103

> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets caught, the exception message gets lost
> --
>
> Key: NIFI-12217
> URL: https://issues.apache.org/jira/browse/NIFI-12217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.23.2
>Reporter: Alessandro Polselli
>Priority: Trivial
>  Labels: putdatabaserecord
> Fix For: 2.0.0-M3, 1.26.0
>
>
> In the PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets caught
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]
> the original Exception message (the most valuable part) gets completely lost, 
> because only its .getCause() is wrapped in a generic IOException that states 
> "Unable to parse data as CLOB/String", making it extremely difficult to 
> identify the real problem.
> In my case, the problem was something like "ORA-25153: Tablespace temporanea 
> vuota" (temporary tablespace is empty), but this valuable message wasn't logged at all.
>  
> I suggest replacing
> {code:java}
> } catch (SQLException e) {
>     throw new IOException("Unable to parse data as CLOB/String " + value, e.getCause());
> } {code}
> with
> {code:java}
> } catch (SQLException e) {
>     throw new IOException("Unable to parse data as CLOB/String " + value, e);
> } {code}
>  
> Thank you



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-8004) PutDatabaseRecord doesn't route error flow file to failure and leaves it in the input queue

2024-05-09 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845088#comment-17845088
 ] 

Jim Steinebrey edited comment on NIFI-8004 at 5/9/24 6:15 PM:
--

I could not reproduce this error when I set up a NiFi flow with the same 
conditions.
So I looked in the code and found that NIFI-8146 changed one of the catch 
statements in onTrigger from SQLException to Exception, which fixed this ticket.


was (Author: JIRAUSER303705):
I could not reproduce this error when I set up a NiFi flow with the same 
conditions.
So I looked in the code and found that NIFI-8146 changed one of the catch 
statements in onTrigger from SQLException to Exception which would fixed this 
ticket.

> PutDatabaseRecord doesn't route error flow file to failure and leaves it in 
> the input queue
> ---
>
> Key: NIFI-8004
> URL: https://issues.apache.org/jira/browse/NIFI-8004
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.1
>Reporter: Svyatoslav
>Priority: Critical
> Fix For: 1.13.0
>
> Attachments: image-2020-11-14-21-30-56-114.png, 
> image-2020-11-14-21-32-12-673.png, image-2020-11-14-21-33-46-046.png, 
> image-2020-11-14-21-42-25-537.png
>
>
> The input flow file is in Avro format and contains an array of records, one of 
> them with the following fields
> !image-2020-11-14-21-32-12-673.png!
> The field in line 124 (let's name it f124) in one of the records contains a 
> string value. The corresponding field in the PostgreSQL database, where 
> PutDatabaseRecord upserts the data, is of type numeric.
> PutDatabaseRecord writes errors to the bulletin board, but the flow file 
> causing the error doesn't route to failure and stays in the input queue:
> !image-2020-11-14-21-33-46-046.png!
> PutDatabaseRecord processor configuration:
> !image-2020-11-14-21-42-25-537.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-8004) PutDatabaseRecord doesn't route error flow file to failure and leaves it in the input queue

2024-05-09 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-8004.
--
Fix Version/s: 1.13.0
   Resolution: Fixed

I could not reproduce this error when I set up a NiFi flow with the same 
conditions.
So I looked in the code and found that NIFI-8146 changed one of the catch 
statements in onTrigger from SQLException to Exception, which fixed this 
ticket.

> PutDatabaseRecord doesn't route error flow file to failure and leaves it in 
> the input queue
> ---
>
> Key: NIFI-8004
> URL: https://issues.apache.org/jira/browse/NIFI-8004
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.1
>Reporter: Svyatoslav
>Priority: Critical
> Fix For: 1.13.0
>
> Attachments: image-2020-11-14-21-30-56-114.png, 
> image-2020-11-14-21-32-12-673.png, image-2020-11-14-21-33-46-046.png, 
> image-2020-11-14-21-42-25-537.png
>
>
> The input flow file is in Avro format and contains an array of records, one of 
> them with the following fields
> !image-2020-11-14-21-32-12-673.png!
> The field in line 124 (let's name it f124) in one of the records contains a 
> string value. The corresponding field in the PostgreSQL database, where 
> PutDatabaseRecord upserts the data, is of type numeric.
> PutDatabaseRecord writes errors to the bulletin board, but the flow file 
> causing the error doesn't route to failure and stays in the input queue:
> !image-2020-11-14-21-33-46-046.png!
> PutDatabaseRecord processor configuration:
> !image-2020-11-14-21-42-25-537.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-12669) EvaluateXQuery processor incorrectly encodes result attributes

2024-05-08 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-12669:
-

Assignee: Jim Steinebrey

> EvaluateXQuery processor incorrectly encodes result attributes
> --
>
> Key: NIFI-12669
> URL: https://issues.apache.org/jira/browse/NIFI-12669
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Extensions
>Affects Versions: 2.0.0-M1, 1.24.0, 1.25.0, 2.0.0-M2
> Environment: JVM with non-UTF-8 default encoding (e.g. default 
> Windows installation)
>Reporter: René Zeidler
>Assignee: Jim Steinebrey
>Priority: Major
>  Labels: encoding, utf8, windows, xml
> Attachments: EvaluateXQuery_Encoding_Bug.json, 
> image-2024-01-25-10-24-17-005.png, image-2024-01-25-10-31-35-200.png
>
>
> h2. Environment
> This issue affects environments where the JVM default encoding is not 
> {{{}UTF-8{}}}. Standard Java installations on Windows are affected, as they 
> usually use the default encoding {{{}windows-1252{}}}. To reproduce the issue 
> on Linux, change the default encoding to {{windows-1252}} by adding the 
> following line to your {{{}bootstrap.conf{}}}:
> {quote}{{java.arg.21=-Dfile.encoding=windows-1252}}
> {quote}
> h2. Summary
> The EvaluateXQuery processor incorrectly encodes result values when storing them in 
> attributes. This causes non-ASCII characters to be garbled.
> Example:
> !image-2024-01-25-10-24-17-005.png!
> h2. Steps to reproduce
>  # Make sure NiFi runs with a non-UTF-8 default encoding, see "Environment"
>  # Create a GenerateFlowFile processor with the following content:
> {quote}{{<myRoot>}}
> {{  <myData>This text contains non-ASCII characters: ÄÖÜäöüßéèóò</myData>}}
> {{</myRoot>}}
> {quote}
>  # Connect the processor to an EvaluateXQuery processor.
> Set the {{Destination}} to {{{}flowfile-attribute{}}}.
> Create a custom property {{myData}} with value {{{}string(/myRoot/myData){}}}.
>  # Connect the outputs of the EvaluateXQuery processor to funnels to be able 
> to observe the result in the queue.
>  # Start the EvaluateXQuery processor and run the GenerateFlowFile processor 
> once.
> The flow should look similar to this:
> !image-2024-01-25-10-31-35-200.png!
> I also attached a JSON export of the example flow.
>  # Observe the attributes of the resulting FlowFile in the queue.
> h3. Expected Result
> The FlowFile should contain an attribute {{myData}} with the value {{{}"This 
> text contains non-ASCII characters: ÄÖÜäöüßéèóò"{}}}.
> h3. Actual Result
> The attribute has the value "This text contains non-ASCII characters: 
> Ã„Ã–ÃœÃ¤Ã¶Ã¼ÃŸÃ©Ã¨Ã³Ã²", i.e. the UTF-8 bytes rendered in the platform 
> default charset instead of the original text.
> h2. Root Cause Analysis
> EvaluateXQuery uses the method 
> [{{formatItem}}|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5a1da5bce9a74a558f08ea4/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/EvaluateXQuery.java#L368-L372]
>  to write the query result to an attribute. This method calls 
> {{{}ByteArrayOutputStream{}}}'s 
> [toString|https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/io/ByteArrayOutputStream.html#toString()]
>  method without an encoding argument, which then defaults to the default 
> charset of the environment. Bytes are always written to this output stream 
> using UTF-8 
> ([.getBytes(StandardCharsets.UTF8)|https://github.com/apache/nifi/blob/2e3f83eb54cbc040b5a1da5bce9a74a558f08ea4/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/EvaluateXQuery.java#L397]).
>  When the default charset is not UTF-8, this results in the UTF-8 bytes being 
> interpreted in a different encoding when converting to a string, resulting in 
> garbled text (see above).
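A minimal sketch of the kind of fix this analysis points to, assuming the buffered bytes are always UTF-8 (illustrative only, not the actual NiFi patch):
{code:java}
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetDecoding {
    public static void main(String[] args) {
        String queryResult = "ÄÖÜäöüßéèóò";
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] utf8 = queryResult.getBytes(StandardCharsets.UTF_8);
        baos.write(utf8, 0, utf8.length);
        // Decode with the same charset that was used to encode, not the platform default.
        String attributeValue = baos.toString(StandardCharsets.UTF_8); // Java 10+
        System.out.println(attributeValue.equals(queryResult)); // true regardless of -Dfile.encoding
    }
}
{code}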



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-13124) PutDatabaseRecord: when AUTO_COMMIT property equals "No value set", an NPE occurs

2024-05-02 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey resolved NIFI-13124.
---
Resolution: Duplicate

> PutDatabaseRecord: when AUTO_COMMIT property equals "No value set", an NPE 
> occurs
> -
>
> Key: NIFI-13124
> URL: https://issues.apache.org/jira/browse/NIFI-13124
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Jim Steinebrey
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13124) PutDatabaseRecord: when AUTO_COMMIT property equals "No value set", an NPE occurs

2024-05-02 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13124:
--
Summary: PutDatabaseRecord: when AUTO_COMMIT property equals "No value 
set", an NPE occurs  (was: PutSQL: when AUTO_COMMIT property equals "No value 
set", an NPE occurs)

> PutDatabaseRecord: when AUTO_COMMIT property equals "No value set", an NPE 
> occurs
> -
>
> Key: NIFI-13124
> URL: https://issues.apache.org/jira/browse/NIFI-13124
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Jim Steinebrey
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13124) PutSQL: when AUTO_COMMIT property equals "No value set", an NPE occurs

2024-05-02 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13124:
--
Description: (was: If user manually sets PutSQL property called 
"Database Session AutoCommit" to "No value set", then when a flow file attempts 
to be processed, a NullPointerException is thrown)

> PutSQL: when AUTO_COMMIT property equals "No value set", an NPE occurs
> --
>
> Key: NIFI-13124
> URL: https://issues.apache.org/jira/browse/NIFI-13124
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Jim Steinebrey
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13124) PutSQL: when AUTO_COMMIT property equals "No value set", an NPE occurs

2024-05-02 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-13124:
-

Assignee: (was: Jim Steinebrey)

> PutSQL: when AUTO_COMMIT property equals "No value set", an NPE occurs
> --
>
> Key: NIFI-13124
> URL: https://issues.apache.org/jira/browse/NIFI-13124
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Jim Steinebrey
>Priority: Minor
>
> If a user manually sets the PutSQL property "Database Session AutoCommit" to 
> "No value set", then when a flow file is processed, a 
> NullPointerException is thrown.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13124) PutSQL: when AUTO_COMMIT property equals "No value set", an NPE occurs

2024-05-02 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13124:
-

 Summary: PutSQL: when AUTO_COMMIT property equals "No value set", 
an NPE occurs
 Key: NIFI-13124
 URL: https://issues.apache.org/jira/browse/NIFI-13124
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.25.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


If a user manually sets the PutSQL property "Database Session AutoCommit" to 
"No value set", then when a flow file is processed, a 
NullPointerException is thrown.
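A hedged sketch of the null-safe handling this suggests, with illustrative names (autoCommitDescriptor stands for the processor's property descriptor; this is not the actual fix):
{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.ProcessContext;

class AutoCommitHandling {
    // PropertyValue.asBoolean() returns null when "No value set" is chosen,
    // so guard against unboxing a null before calling setAutoCommit.
    void applyAutoCommitIfSet(final ProcessContext context, final PropertyDescriptor autoCommitDescriptor,
                              final Connection connection) throws SQLException {
        final Boolean autoCommit = context.getProperty(autoCommitDescriptor).asBoolean();
        if (autoCommit != null) {
            connection.setAutoCommit(autoCommit); // otherwise leave the connection's mode untouched
        }
    }
}
{code}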



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13103) Enhance AutoCommit property to allow no value set in PutDatabaseRecord

2024-05-01 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13103:
--
Description: Enhance the AutoCommit property to allow a value of "No value set" 
in PutDatabaseRecord, to be consistent with the PutSQL processor. No value set 
(null) leaves the database connection's autoCommit mode unmodified.   (was: 
Make AutoCommit default to no values set in PutDatabaseRecord so the database 
connection's autoCommit mode is left unchanged by default. )

> Enhance AutoCommit property to allow no value set in PutDatabaseRecord
> --
>
> Key: NIFI-13103
> URL: https://issues.apache.org/jira/browse/NIFI-13103
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Enhance the AutoCommit property to allow a value of "No value set" in 
> PutDatabaseRecord, to be consistent with the PutSQL processor. No value set (null) 
> leaves the database connection's autoCommit mode unmodified. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13103) Enhance AutoCommit property to allow no value set in PutDatabaseRecord

2024-05-01 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13103:
--
Summary: Enhance AutoCommit property to allow no value set in 
PutDatabaseRecord  (was: Make AutoCommit default to no value set in 
PutDatabaseRecord)

> Enhance AutoCommit property to allow no value set in PutDatabaseRecord
> --
>
> Key: NIFI-13103
> URL: https://issues.apache.org/jira/browse/NIFI-13103
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Make AutoCommit default to no values set in PutDatabaseRecord so the database 
> connection's autoCommit mode is left unchanged by default. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13103) Make AutoCommit default to no value set in PutDatabaseRecord

2024-04-26 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13103:
--
Description: Make AutoCommit default to no values set in PutDatabaseRecord 
so the database connection's autoCommit mode is left unchanged by default.   
(was: Make AutoCommit default to no values set in PutDatabaseTable so the 
database connection's autoCommit mode is left unchanged by default. )

> Make AutoCommit default to no value set in PutDatabaseRecord
> 
>
> Key: NIFI-13103
> URL: https://issues.apache.org/jira/browse/NIFI-13103
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> Make AutoCommit default to no values set in PutDatabaseRecord so the database 
> connection's autoCommit mode is left unchanged by default. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13103) Make AutoCommit default to no value set in PutDatabaseRecord

2024-04-26 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13103:
--
Summary: Make AutoCommit default to no value set in PutDatabaseRecord  
(was: Make AutoCommit default to no value set in PutDatabaseTable)

> Make AutoCommit default to no value set in PutDatabaseRecord
> 
>
> Key: NIFI-13103
> URL: https://issues.apache.org/jira/browse/NIFI-13103
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> Make AutoCommit default to no values set in PutDatabaseTable so the database 
> connection's autoCommit mode is left unchanged by default. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13103) Make AutoCommit default to no value set in PutDatabaseTable

2024-04-26 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey updated NIFI-13103:
--
Summary: Make AutoCommit default to no value set in PutDatabaseTable  (was: 
Make AutoCommit default to no values set in PutDatabaseTable)

> Make AutoCommit default to no value set in PutDatabaseTable
> ---
>
> Key: NIFI-13103
> URL: https://issues.apache.org/jira/browse/NIFI-13103
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Minor
>
> Make AutoCommit default to no values set in PutDatabaseTable so the database 
> connection's autoCommit mode is left unchanged by default. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13103) Make AutoCommit default to no values set in PutDatabaseTable

2024-04-26 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13103:
-

 Summary: Make AutoCommit default to no values set in 
PutDatabaseTable
 Key: NIFI-13103
 URL: https://issues.apache.org/jira/browse/NIFI-13103
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


Make AutoCommit default to no values set in PutDatabaseTable so the database 
connection's autoCommit mode is left unchanged by default. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11449) Investigate Iceberg insert on Object Storage

2024-04-15 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837460#comment-17837460
 ] 

Jim Steinebrey commented on NIFI-11449:
---

On AWS, Iceberg uses AWS Glue Catalog and the Athena engine. This is a good 
overview:
https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html

> Investigate Iceberg insert on Object Storage
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver or Dremio-JDBC-Driver to write 
> to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> The autocommit feature needs to be exposed in the processor so it can be 
> enabled/disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the processor to be widely used with a bigger range of databases.
> _Improving this processor will allow NiFi to be the main tool to ingest data 
> into these new technologies, so we don't have to deal with another tool to do 
> so._
> +*_{color:#de350b}BUT:{color}_*+
> I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts 
> records one by one into the database using a prepared statement, and commits 
> the transaction at the end of the loop that processes each record. This 
> approach can be inefficient and slow when inserting large volumes of data 
> into tables that are optimized for bulk ingestion, such as Delta Lake, 
> Iceberg, and Hudi tables.
> These tables use various techniques to optimize the performance of bulk 
> ingestion, such as partitioning, clustering, and indexing. Inserting records 
> one by one using a prepared statement can bypass these optimizations, leading 
> to poor performance and potentially causing issues such as excessive disk 
> usage, increased memory consumption, and decreased query performance.
> To avoid these issues, it is recommended to add a new processor, or a feature 
> to the current one, that uses a bulk insert method with an AutoCommit option 
> when inserting large volumes of data into Delta Lake, Iceberg, and Hudi 
> tables. 
>  
> P.S.: PutSQL does have an autoCommit property, but it has the same performance 
> problem described above.
> Thanks and best regards :)
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-10657) When multiple CLOB columns in a table are configured for processing in Oracle, NIFI is throwing Out of Memory exception.

2024-04-15 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-10657:
-

Assignee: Jim Steinebrey

> When multiple CLOB columns in a table are configured for processing in 
> Oracle, NIFI is throwing Out of Memory exception.
> 
>
> Key: NIFI-10657
> URL: https://issues.apache.org/jira/browse/NIFI-10657
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Docker
>Affects Versions: 1.17.0
> Environment: Docker container
>Reporter: Shiv
>Assignee: Jim Steinebrey
>Priority: Major
>  Labels: performance
>
> This issue is happening because the NiFi processor PutDatabaseRecord is not 
> clearing CLOBs once the data is written to the database. The processor 
> code needs to be modified to free each CLOB before closing ResultSets.
> The table being processed has four CLOB data type columns. The average row 
> size is 10K, and each CLOB column's average length is 2K.
> I don't have permission to create a branch. The code change below is 
> needed to fix the issue *(tested against the rel/nifi-1.17.0 branch)*.
> {code:java}
> diff --git 
> a/nifi-nar-bundles/nifi-extension-utils/nifi-database-utils/src/main/java/org/apache/nifi/util/db/JdbcCommon.java
>  
> b/nifi-nar-bundles/nifi-extension-utils/nifi-database-utils/src/main/java/org/apache/nifi/util/db/JdbcCommon.java
> index b78408c912..bfc5328603 100644
> --- 
> a/nifi-nar-bundles/nifi-extension-utils/nifi-database-utils/src/main/java/org/apache/nifi/util/db/JdbcCommon.java
> +++ 
> b/nifi-nar-bundles/nifi-extension-utils/nifi-database-utils/src/main/java/org/apache/nifi/util/db/JdbcCommon.java
> @@ -280,6 +280,11 @@ public class JdbcCommon {
>                                  }
>                              }
>                              rec.put(i - 1, sb.toString());
> +                            try {
> +                                clob.free();
> +                            } catch (SQLFeatureNotSupportedException sfnse) {
> +                                // The driver doesn't support free, but allow processing to continue
> +                            }
>                          } else {
>                              rec.put(i - 1, null);
>                          } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-5519) Allow ListDatabaseTables to accept incoming connections

2024-04-12 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836733#comment-17836733
 ] 

Jim Steinebrey commented on NIFI-5519:
--

https://github.com/apache/nifi/pull/8639

> Allow ListDatabaseTables to accept incoming connections
> ---
>
> Key: NIFI-5519
> URL: https://issues.apache.org/jira/browse/NIFI-5519
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Matt Burgess
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of [NIFI-5229|https://issues.apache.org/jira/browse/NIFI-5229], 
> DBCPConnectionPoolLookup allows the dynamic selection of a DBCPConnectionPool 
> by name. This allows processors that perform the same work on multiple 
> databases to do so by providing individual flow files upstream 
> with the database.name attribute set.
> However ListDatabaseTables does not accept incoming connections, so you 
> currently need 1 DBCPConnectionPool per database, plus 1 ListDatabaseTables 
> per database, each using a corresponding DBCPConnectionPool. It would be nice 
> if ListDatabaseTables could accept incoming connection(s), if only to provide 
> attributes for selecting the DBCPConnectionPool.
> I propose the behavior be like other processors that can generate data with 
> or without an incoming connection (such as GenerateTableFetch, see 
> [NIFI-2881|https://issues.apache.org/jira/browse/NIFI-2881] for details). In 
> general that means if there is an incoming non-loop connection, it becomes 
> more "event-driven" in the sense that it will not execute if there is no 
> FlowFile on which to work. If there is no incoming connection, then it would 
> run as it always has, on its Run Schedule and with State Management, so as 
> not to re-list the same tables every time it executes. 
> However with an incoming connection and an available FlowFile, the behavior 
> could be that all tables for that database are listed, meaning that processor 
> state would not be updated nor queried, making it fully "event-driven". If 
> the tables for a database are not to be re-listed, the onus would be on the 
> upstream flow to not send a flow file for that database. This is not a 
> requirement, just a suggestion; it could be more flexible by honoring 
> processor state if the Refresh Interval is non-zero, but I think that adds 
> too much complexity for the user, for little payoff.
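A hedged sketch of the proposed entry logic, modeled on the GenerateTableFetch pattern (illustrative only, not the actual ListDatabaseTables change):
{code:java}
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;

class IncomingConnectionHandling {
    void onTriggerSketch(final ProcessContext context, final ProcessSession session) {
        FlowFile fileToProcess = null;
        if (context.hasIncomingConnection()) {
            fileToProcess = session.get();
            // Event-driven: a non-loop incoming connection but no FlowFile means there is nothing to do.
            if (fileToProcess == null && context.hasNonLoopConnection()) {
                return;
            }
        }
        // With no incoming connection, fall through and list tables on the Run Schedule,
        // using processor state so the same tables are not re-listed every execution.
        // With a FlowFile, its attributes could select the DBCPConnectionPool to use.
    }
}
{code}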



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-5519) Allow ListDatabaseTables to accept incoming connections

2024-04-12 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-5519:


Assignee: Jim Steinebrey

> Allow ListDatabaseTables to accept incoming connections
> ---
>
> Key: NIFI-5519
> URL: https://issues.apache.org/jira/browse/NIFI-5519
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Matt Burgess
>Assignee: Jim Steinebrey
>Priority: Major
>
> As of [NIFI-5229|https://issues.apache.org/jira/browse/NIFI-5229], 
> DBCPConnectionPoolLookup allows the dynamic selection of a DBCPConnectionPool 
> by name. This allows processors that perform the same work on multiple 
> databases to do so by providing individual flow files upstream 
> with the database.name attribute set.
> However ListDatabaseTables does not accept incoming connections, so you 
> currently need 1 DBCPConnectionPool per database, plus 1 ListDatabaseTables 
> per database, each using a corresponding DBCPConnectionPool. It would be nice 
> if ListDatabaseTables could accept incoming connection(s), if only to provide 
> attributes for selecting the DBCPConnectionPool.
> I propose the behavior be like other processors that can generate data with 
> or without an incoming connection (such as GenerateTableFetch, see 
> [NIFI-2881|https://issues.apache.org/jira/browse/NIFI-2881] for details). In 
> general that means if there is an incoming non-loop connection, it becomes 
> more "event-driven" in the sense that it will not execute if there is no 
> FlowFile on which to work. If there is no incoming connection, then it would 
> run as it always has, on its Run Schedule and with State Management, so as 
> not to re-list the same tables every time it executes. 
> However with an incoming connection and an available FlowFile, the behavior 
> could be that all tables for that database are listed, meaning that processor 
> state would not be updated nor queried, making it fully "event-driven". If 
> the tables for a database are not to be re-listed, the onus would be on the 
> upstream flow to not send a flow file for that database. This is not a 
> requirement, just a suggestion; it could be more flexible by honoring 
> processor state if the Refresh Interval is non-zero, but I think that adds 
> too much complexity for the user, for little payoff.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13010) UpdateDatabaseTable Processor cannot use DBCPConnectionPoolLookup

2024-04-08 Thread Jim Steinebrey (Jira)
Jim Steinebrey created NIFI-13010:
-

 Summary: UpdateDatabaseTable Processor cannot use 
DBCPConnectionPoolLookup
 Key: NIFI-13010
 URL: https://issues.apache.org/jira/browse/NIFI-13010
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 2.0.0-M2, 1.25.0
Reporter: Jim Steinebrey
Assignee: Jim Steinebrey


The UpdateDatabaseTable processor fails to execute when it is configured to use 
a DBCP Connection Pool Lookup. The flow file has the database.name attribute set. 
However, when the processor runs, it gets the following error:

org.apache.nifi.processor.exception.ProcessException: 
java.lang.UnsupportedOperationException: Cannot lookup DBCPConnectionPool 
without attributes
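For context, DBCPConnectionPoolLookup resolves the target pool from FlowFile attributes such as database.name, so the connection has to be obtained with the attribute map. A hedged sketch of that call pattern (illustrative, not the actual fix):
{code:java}
import java.sql.Connection;
import java.util.Map;
import org.apache.nifi.dbcp.DBCPService;
import org.apache.nifi.flowfile.FlowFile;

class LookupAwareConnection {
    Connection getConnectionForFlowFile(final DBCPService dbcpService, final FlowFile flowFile) {
        // Passing the attributes lets DBCPConnectionPoolLookup pick the pool named by database.name.
        final Map<String, String> attributes = flowFile.getAttributes();
        return dbcpService.getConnection(attributes);
    }
}
{code}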



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-11449) Investigate Iceberg insert on Object Storage

2024-04-04 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-11449:
-

Assignee: (was: Jim Steinebrey)

> Investigate Iceberg insert on Object Storage
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino-JDBC-Driver or Dremio-JDBC-Driver to write 
> to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> The autocommit feature needs to be exposed in the processor so it can be 
> enabled/disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi as it ensures data consistency and integrity by 
> allowing atomic writes to be performed in the underlying database. This will 
> allow the processor to be widely used with a bigger range of databases.
> _Improving this processor will allow NiFi to be the main tool to ingest data 
> into these new technologies, so we don't have to deal with another tool to do 
> so._
> +*_{color:#de350b}BUT:{color}_*+
> I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts 
> records one by one into the database using a prepared statement, and commits 
> the transaction at the end of the loop that processes each record. This 
> approach can be inefficient and slow when inserting large volumes of data 
> into tables that are optimized for bulk ingestion, such as Delta Lake, 
> Iceberg, and Hudi tables.
> These tables use various techniques to optimize the performance of bulk 
> ingestion, such as partitioning, clustering, and indexing. Inserting records 
> one by one using a prepared statement can bypass these optimizations, leading 
> to poor performance and potentially causing issues such as excessive disk 
> usage, increased memory consumption, and decreased query performance.
> To avoid these issues, it is recommended to add a new processor, or a feature 
> to the current one, that uses a bulk insert method with an AutoCommit option 
> when inserting large volumes of data into Delta Lake, Iceberg, and Hudi 
> tables. 
>  
> P.S.: PutSQL does have an autoCommit property, but it has the same performance 
> problem described above.
> Thanks and best regards :)
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12993) PutDatabaseRecord: add auto commit property and fully implement Batch Size for sql statement type

2024-04-02 Thread Jim Steinebrey (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833290#comment-17833290
 ] 

Jim Steinebrey commented on NIFI-12993:
---

https://github.com/apache/nifi/pull/8597

> PutDatabaseRecord: add auto commit property and fully implement Batch Size 
> for sql statement type
> -
>
> Key: NIFI-12993
> URL: https://issues.apache.org/jira/browse/NIFI-12993
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add an auto_commit property to PutDatabaseRecord.
> A Batch Size property exists in PutDatabaseRecord and is implemented for some 
> statement types, but batch size is ignored for SQL statement type 
> processing. Implement batch size processing for SQL statement types so all 
> statement types in PutDatabaseRecord support it equally.
> PutSQL and other SQL processors have auto commit and batch size properties, so 
> it will be beneficial for PutDatabaseRecord to also implement them fully for 
> consistency.
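A minimal sketch of batch-size handling for pre-built SQL statements using plain JDBC batching (illustrative only; names and structure are assumptions, not the PutDatabaseRecord code):
{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

class SqlBatchExecutor {
    void executeInBatches(final Connection connection, final List<String> sqlStatements, final int batchSize)
            throws SQLException {
        try (Statement statement = connection.createStatement()) {
            int pending = 0;
            for (final String sql : sqlStatements) {
                statement.addBatch(sql);
                if (++pending >= batchSize) {
                    statement.executeBatch(); // flush a full batch
                    pending = 0;
                }
            }
            if (pending > 0) {
                statement.executeBatch(); // flush the remainder
            }
        }
    }
}
{code}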



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-12993) PutDatabaseRecord: add auto commit property and fully implement Batch Size for sql statement type

2024-04-02 Thread Jim Steinebrey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Steinebrey reassigned NIFI-12993:
-

Assignee: Jim Steinebrey

> PutDatabaseRecord: add auto commit property and fully implement Batch Size 
> for sql statement type
> -
>
> Key: NIFI-12993
> URL: https://issues.apache.org/jira/browse/NIFI-12993
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: Jim Steinebrey
>Assignee: Jim Steinebrey
>Priority: Major
>
> Add an auto_commit property to PutDatabaseRecord.
> A Batch Size property exists in PutDatabaseRecord and is implemented for some 
> statement types, but batch size is ignored for SQL statement type 
> processing. Implement batch size processing for SQL statement types so all 
> statement types in PutDatabaseRecord support it equally.
> PutSQL and other SQL processors have auto commit and batch size properties, so 
> it will be beneficial for PutDatabaseRecord to also implement them fully for 
> consistency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

