[jira] [Updated] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-8130:

Attachment: Screenshot 2021-01-11 at 11.38.45.png

> PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the 
> whole queue
> -
>
> Key: NIFI-8130
> URL: https://issues.apache.org/jira/browse/NIFI-8130
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: Screenshot 2021-01-11 at 11.34.06.png, Screenshot 
> 2021-01-11 at 11.38.45.png
>
>
> This bug is hard to replicate as it happens randomly.
> In the following (common) configuration, in which multiple records are merged
> and then sent to a PutDatabaseRecord, it sometimes happens that a specific
> FlowFile causes the PutDatabaseRecord to fail with `FlowFileHandlingException:
> FlowFile already marked for transfer`
> !Screenshot 2021-01-11 at 11.38.45.png!
> !Screenshot 2021-01-11 at 11.34.06.png!
> When this happens, the processor remains stuck trying to process that
> specific FlowFile (i.e. it is not routed to the failure relationship). This
> forces the user to empty the whole queue in order to continue, which causes
> data loss.
> I noticed the following:
>  * The issue is bound to the FlowFile: the same FlowFile will make multiple
> processors fail with the same error.
>  * Creating a new FlowFile with the same content (e.g. publishing the
> FlowFile to a Kafka topic and consuming it right after) doesn't solve the
> issue; the FlowFile will raise the error again once sent to
> PutDatabaseRecord.
>  * This error happened to me only when using PutDatabaseRecord after a
> MergeRecord (in order to batch multiple records in a single DB transaction).
>  * This issue was already raised in the [Cloudera Community
> Forum|https://community.cloudera.com/t5/Support-Questions/quot-is-already-marked-for-transfer-quot-in/td-p/236588],
> alas, without any answer.
>  
>  
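For context on the exception itself: NiFi refuses to transfer a FlowFile that the session has already marked for transfer. The sketch below is a simplified, self-contained illustration of that guard; `MiniSession` is hypothetical and not the real NiFi `ProcessSession` API.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in (not the real NiFi API): a session that, like NiFi's
// process session, rejects a second transfer of the same FlowFile.
class MiniSession {
    private final Set<String> transferred = new HashSet<>();

    void transfer(String flowFileId, String relationship) {
        // add() returns false when the id was already present,
        // i.e. the FlowFile was already marked for transfer.
        if (!transferred.add(flowFileId)) {
            throw new IllegalStateException(
                "FlowFile " + flowFileId + " is already marked for transfer");
        }
        System.out.println(flowFileId + " -> " + relationship);
    }
}

public class DoubleTransferDemo {
    public static void main(String[] args) {
        MiniSession session = new MiniSession();
        session.transfer("ff-1", "success");
        try {
            session.transfer("ff-1", "failure"); // second transfer of the same FlowFile
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The real exception presumably indicates the same FlowFile reaching `transfer` twice within one session, which would explain why the file can never leave the queue.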



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-8130:

Description: 
This bug is hard to replicate as it happens randomly.

In the following (common) configuration, in which multiple records are merged 
and then sent to a PutDatabaseRecord, it sometimes happens that a specific 
FlowFile causes the PutDatabaseRecord to fail with `FlowFileHandlingException: 
FlowFile already marked for transfer`

!Screenshot 2021-01-11 at 11.38.45.png!

!Screenshot 2021-01-11 at 11.34.06.png!

When this happens, the processor remains stuck trying to process that 
specific FlowFile (i.e. it is not routed to the failure relationship). This 
forces the user to empty the whole queue in order to continue, which causes 
data loss.

I noticed the following: 
 * The issue is bound to the FlowFile: the same FlowFile will make multiple 
processors fail with the same error.
 * Creating a new FlowFile with the same content (e.g. publishing the FlowFile 
to a Kafka topic and consuming it right after) doesn't solve the issue; the 
FlowFile will raise the error again once sent to PutDatabaseRecord.
 * This error happened to me only when using PutDatabaseRecord after a 
MergeRecord (in order to batch multiple records in a single DB transaction).
 * This issue was already raised in the [Cloudera Community 
Forum|https://community.cloudera.com/t5/Support-Questions/quot-is-already-marked-for-transfer-quot-in/td-p/236588],
 alas, without any answer.

 

 




[jira] [Updated] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-8130:

Description: 
This bug is hard to replicate as it happens randomly.

In the following (common) configuration, in which multiple records are merged 
and then sent to a PutDatabaseRecord, it sometimes happens that a specific 
FlowFile causes the PutDatabaseRecord to fail with `FlowFileHandlingException: 
FlowFile already marked for transfer`

!Screenshot 2021-01-11 at 11.38.45.png!

!Screenshot 2021-01-11 at 11.34.06.png!

When this happens, the processor remains stuck trying to process that 
specific FlowFile (i.e. it is not routed to the failure relationship). This 
forces the user to empty the whole queue in order to continue, which causes 
data loss.

I noticed the following: 
 * The issue is bound to the FlowFile: the same FlowFile will make multiple 
processors fail with the same error.
 * Creating a new FlowFile with the same content (e.g. publishing the FlowFile 
to a Kafka topic and consuming it right after) doesn't solve the issue; the 
FlowFile will raise the error again once sent to PutDatabaseRecord.
 * This error happened to me only when using PutDatabaseRecord after a 
MergeRecord (in order to batch multiple records in a single DB transaction).
 * This issue was already raised in the [Cloudera Community 
Forum|https://community.cloudera.com/t5/Support-Questions/quot-is-already-marked-for-transfer-quot-in/td-p/236588],
 alas, without any answer.

 

 




[jira] [Updated] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-8130:

Attachment: Screenshot 2021-01-11 at 11.34.06.png






[jira] [Created] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-8130:
---

 Summary: PutDatabaseRecord after MergeRecord randomly hangs 
forcing to discard the whole queue
 Key: NIFI-8130
 URL: https://issues.apache.org/jira/browse/NIFI-8130
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Alessandro D'Armiento


This bug is hard to replicate as it happens randomly.

In the following (common) configuration, in which multiple records are merged 
and then sent to a PutDatabaseRecord, it sometimes happens that a specific 
FlowFile causes the PutDatabaseRecord to fail with `FlowFileHandlingException: 
FlowFile already marked for transfer`

!Screenshot 2021-01-11 at 11.38.45.png!

!Screenshot 2021-01-11 at 11.34.06.png!

When this happens, the processor remains stuck trying to process that 
specific FlowFile (i.e. it is not routed to the failure relationship). This 
forces the user to empty the whole queue in order to continue, which causes 
data loss.

I noticed the following: 
 * The issue is bound to the FlowFile: the same FlowFile will make multiple 
processors fail with the same error.
 * Creating a new FlowFile with the same content (e.g. publishing the FlowFile 
to a Kafka topic and consuming it right after) doesn't solve the issue; the 
FlowFile will raise the error again once sent to PutDatabaseRecord.
 * This error happened to me only when using PutDatabaseRecord after a 
MergeRecord (in order to batch multiple records in a single DB transaction).
 * This issue was already raised in the Cloudera Community Forum, alas, without 
any answer.

 

 





[jira] [Updated] (NIFI-8130) PutDatabaseRecord after MergeRecord randomly hangs forcing to discard the whole queue

2021-01-11 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-8130:

Description: 
This bug is hard to replicate as it happens randomly.

In the following (common) configuration, in which multiple records are merged 
and then sent to a PutDatabaseRecord, it sometimes happens that a specific 
FlowFile causes the PutDatabaseRecord to fail with `FlowFileHandlingException: 
FlowFile already marked for transfer`

!Screenshot 2021-01-11 at 11.38.45.png!

!Screenshot 2021-01-11 at 11.34.06.png!

When this happens, the processor remains stuck trying to process that 
specific FlowFile (i.e. it is not routed to the failure relationship). This 
forces the user to empty the whole queue in order to continue, which causes 
data loss.

I noticed the following: 
 * The issue is bound to the FlowFile: the same FlowFile will make multiple 
processors fail with the same error.
 * Creating a new FlowFile with the same content (e.g. publishing the FlowFile 
to a Kafka topic and consuming it right after) doesn't solve the issue; the 
FlowFile will raise the error again once sent to PutDatabaseRecord.
 * This error happened to me only when using PutDatabaseRecord after a 
MergeRecord (in order to batch multiple records in a single DB transaction).
 * This issue was already raised in the [Cloudera Community 
Forum|https://community.cloudera.com/t5/Support-Questions/quot-is-already-marked-for-transfer-quot-in/td-p/236588],
 alas, without any answer.
 

 

 








[jira] [Commented] (NIFI-7858) ExecuteSQLRecord processor Fails

2020-09-30 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204719#comment-17204719
 ] 

Alessandro D'Armiento commented on NIFI-7858:
-

Hello [~cdebe...@borderstates.com], could you show how to reproduce the bug? 
Please include the processor configuration and the FlowFile content/attributes.

> ExecuteSQLRecord processor Fails
> 
>
> Key: NIFI-7858
> URL: https://issues.apache.org/jira/browse/NIFI-7858
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
> Environment: java 8
>Reporter: Chris
>Priority: Major
> Attachments: image-2020-09-28-23-00-33-671.png
>
>
> When attempting to run a sql record with ExecuteSQLRecord processor I'm 
> getting error 
>  java.lang.NullPointerException; routing to failure: 
> org.apache.nifi.processor.exception.ProcessException: 
> java.lang.NullPointerException
> I do not see anything else in NiFi that shows up with this error when 
> processor is in debug level logging and this processor did work on version 
> 1.11.x. I can see that the sql statement also works with ExecuteSQL 
> processor. I've tried with different configurations on the processor with 
> same result.
> !image-2020-09-28-23-00-33-671.png!





[jira] [Commented] (NIFI-7505) Add InvokeHTTPRecord processor

2020-06-08 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128261#comment-17128261
 ] 

Alessandro D'Armiento commented on NIFI-7505:
-

Do you think such a processor should also emit a separate FlowFile for each 
record (since each invocation can succeed or fail independently), acting in a 
similar way to SplitRecord?

As for the main behaviour, I guess this processor should be able to add the 
record content to the request body (or to the query parameters in the case of 
GET).

What do you think?
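To make the per-record idea concrete, here is a minimal sketch of the shape being discussed, not the real processor: one HTTP request is built per record, so each invocation can be routed independently. The endpoint URL and the `requestFor` helper are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.List;

public class PerRecordRequests {

    // Build one request per record; a real processor would derive the URL,
    // method, and headers from processor properties instead of constants.
    static HttpRequest requestFor(String recordJson) {
        return HttpRequest.newBuilder(URI.create("https://example.org/ingest")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(recordJson))
                .build();
    }

    public static void main(String[] args) {
        List<String> records = List.of("{\"id\":1}", "{\"id\":2}");
        for (String record : records) {
            // Each record gets its own request, so a failure on one record
            // need not fail the whole record set.
            HttpRequest req = requestFor(record);
            System.out.println(req.method() + " " + req.uri());
        }
    }
}
```

Sending and routing per-record success/failure would sit on top of this, e.g. via `HttpClient.send` with each response mapped to a success or failure relationship.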

> Add InvokeHTTPRecord processor
> --
>
> Key: NIFI-7505
> URL: https://issues.apache.org/jira/browse/NIFI-7505
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: Record, http, processor, rest
>
> Some users have recently requested being able to invoke a specific URL via 
> {{InvokeHTTP}} on every record in a flowfile. Currently, the {{InvokeHTTP}} 
> processor can only handle one piece of data per flowfile. There are some 
> workarounds available for specific patterns with {{LookupRecord}} + 
> {{RestLookupService}}, but this is not a complete solution. I propose 
> introducing an {{InvokeHTTPRecord}} processor, providing the {{InvokeHTTP}} 
> functionality in conjunction with the record processing behavior. 





[jira] [Assigned] (NIFI-6672) Expression language plus operation doesn't check for overflow

2020-05-28 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6672:
---

Assignee: Alessandro D'Armiento

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Major
> Fix For: 1.12.0
>
> Attachments: image-2019-09-14-17-32-58-740.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> To reproduce the bug, create a FlowFile with an attribute equal to 
> Long.MAX_VALUE, then add 100 to that attribute in a subsequent UpdateAttribute 
> processor. The property will overflow to a negative number without throwing 
> any exception.
> !image-2019-09-14-17-32-58-740.png!
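The wraparound described above is plain Java `long` arithmetic; `Math.addExact` is one way an overflow check could be implemented. This is a sketch of the behaviour, not the actual NiFi fix.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE;

        // Plain addition wraps around silently, which matches the
        // behaviour reported for the expression language's plus().
        long wrapped = max + 100;
        System.out.println(wrapped < 0); // true: wrapped past Long.MAX_VALUE to a negative value

        // Math.addExact detects the overflow and throws instead.
        try {
            Math.addExact(max, 100L);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```

`Math.subtractExact` and `Math.multiplyExact` cover the analogous minus and multiply cases (NIFI-6674, NIFI-6673) the same way.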





[jira] [Assigned] (NIFI-6674) Expression language minus operation doesn't check for overflow

2020-05-28 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6674:
---

Assignee: Alessandro D'Armiento

> Expression language minus operation doesn't check for overflow
> --
>
> Key: NIFI-6674
> URL: https://issues.apache.org/jira/browse/NIFI-6674
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Major
> Fix For: 1.12.0
>
> Attachments: image-2019-09-14-17-51-41-809.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To reproduce the bug, create a FlowFile with an attribute equal to 
> Long.MIN_VALUE, then subtract 100 from that attribute in a subsequent 
> UpdateAttribute processor. The property will overflow to a positive number 
> without throwing any exception.
> !image-2019-09-14-17-51-41-809.png!





[jira] [Assigned] (NIFI-6673) Expression language multiply operation doesn't check for overflow

2020-05-28 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6673:
---

Assignee: Alessandro D'Armiento

> Expression language multiply operation doesn't check for overflow
> -
>
> Key: NIFI-6673
> URL: https://issues.apache.org/jira/browse/NIFI-6673
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Major
> Fix For: 1.12.0
>
> Attachments: image-2019-09-14-17-38-19-397.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To reproduce the bug, create a FlowFile with an attribute equal to 
> Long.MAX_VALUE, then multiply that attribute by 2 in a subsequent 
> UpdateAttribute processor. The property will overflow to a negative number 
> without throwing any exception.
> !image-2019-09-14-17-38-19-397.png!





[jira] [Commented] (NIFI-7464) CSVRecordSetWriter does not output header for record sets with zero records

2020-05-26 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116547#comment-17116547
 ] 

Alessandro D'Armiento commented on NIFI-7464:
-

This could be made optional via an additional property on CSVRecordSetWriter.

> CSVRecordSetWriter does not output header for record sets with zero records
> ---
>
> Key: NIFI-7464
> URL: https://issues.apache.org/jira/browse/NIFI-7464
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.3
>Reporter: Karl Fredrickson
>Priority: Major
>
> If you configure CSVRecordSetWriter to output a header row, and a processor 
> such as QueryRecord or ConvertRecord writes out a flowfile with zero records 
> using the CSVRecordSetWriter, the header row will not be included.
> This affects QueryRecord and ConvertRecord processors and presumably all 
> other processors that can be configured to use CSVRecordWriter.
> I suppose this could be intentional behavior but older versions of NiFi like 
> 1.3 do output a header even when writing a zero record flowfile, and this 
> caused some non-trivial issues for us in the process of upgrading from 1.3 to 
> 1.11.  We fixed this on our NiFi installation by making a small change to the 
> WriteCSVResult.java file and then rebuilding the NiFi record serialization 
> services NAR.





[jira] [Updated] (NIFI-7488) Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-25 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-7488:

Description: 
The value of the listening port is not validated against negative or too high 
values when the variable registry is used.

Related to NIFI-7479

  was:The value of the listening port is not validated against negative or too 
high values when the variable registry is used.


> Listening Port property on HandleHttpRequest is not validated when Variable 
> registry is used
> 
>
> Key: NIFI-7488
> URL: https://issues.apache.org/jira/browse/NIFI-7488
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> The value of the listening port is not validated against negative or too high 
> values when the variable registry is used.
> Related to NIFI-7479





[jira] [Created] (NIFI-7488) Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-25 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-7488:
---

 Summary: Listening Port property on HandleHttpRequest is not 
validated when Variable registry is used
 Key: NIFI-7488
 URL: https://issues.apache.org/jira/browse/NIFI-7488
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Alessandro D'Armiento


The value of the listening port is not validated against negative or too high 
values when the variable registry is used.





[jira] [Commented] (NIFI-7479) Listening Port property on HandleHttpRequest doesn't work with parameters

2020-05-25 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116173#comment-17116173
 ] 

Alessandro D'Armiento commented on NIFI-7479:
-

I am trying to reproduce the error, but setting the listening port via the 
variable registry seems to work for me.

However, I found that the port value is not validated against negative or 
out-of-range values when the variable registry is used, which can lead to 
misleading errors.

> Listening Port property on HandleHttpRequest doesn't work with parameters
> -
>
> Key: NIFI-7479
> URL: https://issues.apache.org/jira/browse/NIFI-7479
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: David Malament
>Priority: Major
> Attachments: image-2020-05-22-10-29-01-827.png
>
>
> The Listening Port property on the HandleHttpRequest processor clearly 
> indicates that parameters are supported (see screenshot) and the processor 
> starts up successfully, but any requests to the configured port give a 
> "connection refused" error. Switching the property to a hard-coded value or a 
> variable instead of a parameter restores functionality.
> !image-2020-05-22-10-29-01-827.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6882) PutFile throws NullPointerException if destination directory doesn't exist

2019-12-09 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991236#comment-16991236
 ] 

Alessandro D'Armiento commented on NIFI-6882:
-

Taking a look at TestPutFile, the `testCreateDirectory()` method already seems 
to cover creating the folder when it doesn't exist.

 
{code:java}
@Test
public void testCreateDirectory() throws IOException {
    final TestRunner runner = TestRunners.newTestRunner(new PutFile());
    String newDir = targetDir.getAbsolutePath() + "/new-folder";
    // I added this line to make sure the folder didn't already exist
    Files.deleteIfExists(new File(newDir).toPath());
    runner.setProperty(PutFile.DIRECTORY, newDir);
    runner.setProperty(PutFile.CONFLICT_RESOLUTION, PutFile.REPLACE_RESOLUTION);

    Map<String, String> attributes = new HashMap<>();
    attributes.put(CoreAttributes.FILENAME.key(), "targetFile.txt");
    runner.enqueue("Hello world!!".getBytes(), attributes);
    runner.run();
    runner.assertAllFlowFilesTransferred(FetchFile.REL_SUCCESS, 1);
    Path targetPath = Paths.get(TARGET_DIRECTORY + "/new-folder/targetFile.txt");
    byte[] content = Files.readAllBytes(targetPath);
    assertEquals("Hello world!!", new String(content));
}{code}
I think the problem here may be more complex, but it is swallowed somewhere 
unchecked, eventually producing the NPE. I'll investigate further, for example 
what happens if the user does not have write permission on the folder, or 
execute permission on one of the parent folders that must be traversed (I'm 
not sure whether this is already tested or not)
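For what it's worth, the top of the reported stack trace (`Files.provider` called from `Files.exists`) is reproducible by passing a null `Path`, which is what a parent-less relative directory name like "heap-dumps" yields if the code checks the parent's existence. This is a guess at the trigger, not a confirmed reading of `PutFile.onTrigger`:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

public class NullParentNpe {
    public static void main(String[] args) {
        // A bare relative directory name such as "heap-dumps" has no parent
        // component, so getParentFile() returns null.
        File missingDir = new File("heap-dumps");
        File parent = missingDir.getParentFile();
        System.out.println("parent = " + parent); // parent = null

        // Passing a null Path into java.nio.file.Files reproduces the top
        // of the reported stack trace (Files.provider -> Files.exists).
        try {
            Path parentPath = (parent == null) ? null : parent.toPath();
            Files.exists(parentPath);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the report");
        }
    }
}
```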

> PutFile throws NullPointerException if destination directory doesn't exist
> --
>
> Key: NIFI-6882
> URL: https://issues.apache.org/jira/browse/NIFI-6882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Mark Payne
>Priority: Critical
>
> I have a PutFile processor that is configured to create directories if they 
> don't exist. I have the directory set to "heap-dumps", which does not exist. 
> When data came into the processor, it started throwing the following NPE:
> {code}
> 2019-11-18 16:46:14,879 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.PutFile 
> PutFile[id=0b173f77-d60a-1ce8-d59e-4a9aeff49dbf] Penalizing 
> StandardFlowFileRecord[uuid=38a6849b-965b-46b2-96d3-fe37a1c22cf8,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1574113470589-2, container=default, 
> section=2], offset=0, 
> length=1139822320],offset=0,name=heap.bin.gz,size=1139822320] and 
> transferring to failure due to java.lang.NullPointerException: 
> java.lang.NullPointerException
> java.lang.NullPointerException: null
> at java.nio.file.Files.provider(Files.java:97)
> at java.nio.file.Files.exists(Files.java:2385)
> at 
> org.apache.nifi.processors.standard.PutFile.onTrigger(PutFile.java:243)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
> at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6815) NiFi status variables are not consistent with use of case

2019-10-28 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961461#comment-16961461
 ] 

Alessandro D'Armiento commented on NIFI-6815:
-

Hello! 
I think you are talking about {code:java}org.apache.nifi.controller.ScheduledState{code} (which is all uppercase) and 
{code:java}org.apache.nifi.controller.status.RunStatus{code} (which is instead 
PascalCase). Am I correct? 
If so, both of these enums contain a "running" value, but they are used in 
different contexts. Should we maybe make all the enum values uppercase? I'm 
not sure whether this could break anything in the UI.
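To illustrate the mismatch, here is a sketch with hypothetical stand-ins for the two enums (not the NiFi classes themselves), showing how `name()` surfaces whatever casing the enum constants were declared with:

```java
// Hypothetical stand-ins for ScheduledState and RunStatus, declared
// with the two casing conventions mentioned above.
enum ScheduledStateLike { RUNNING, STOPPED }
enum RunStatusLike { Running, Stopped }

public class CasingDemo {
    public static void main(String[] args) {
        // name() returns the constant exactly as declared, so API
        // responses built from different enums disagree on casing.
        System.out.println(ScheduledStateLike.RUNNING.name()); // RUNNING
        System.out.println(RunStatusLike.Running.name());      // Running
    }
}
```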


> NiFi status variables are not consistent with use of case
> -
>
> Key: NIFI-6815
> URL: https://issues.apache.org/jira/browse/NIFI-6815
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Jay Barra
>Priority: Major
>
> Responses for statuses can vary between "RUNNING" and "Running" for different 
> calls. They should share casing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool

2019-09-23 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935984#comment-16935984
 ] 

Alessandro D'Armiento commented on NIFI-6684:
-

I agree with you on updating the dependency; however, I am not sure whether 
this should be done in a separate issue, for better tracking. 

> Add more property to Hive3ConnectionPool
> 
>
> Key: NIFI-6684
> URL: https://issues.apache.org/jira/browse/NIFI-6684
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: jamescheng
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: PutHive3 enhance.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them 
> are using DBCP BasicDataSource. However, Hive3ConnectionPool  doesn't provide 
> some properties of what DBCPConnectionPool  has. Such as "Minimum Idle 
> Connections", "Max Idle Connections", "Max Connection Lifetime", "Time 
> Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum 
> Evictable Idle Time".
> This improvement is try to provide more properties for developer to set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-6684) Add more property to Hive3ConnectionPool

2019-09-23 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935863#comment-16935863
 ] 

Alessandro D'Armiento edited comment on NIFI-6684 at 9/23/19 1:39 PM:
--

Hello Peter, feel free to take inspiration from the PR I've attached to this 
Jira. It's still a work in progress (I found some potential issues with the 
Unit testing). I didn't know this was already taken


was (Author: axelsync):
Hello Peter, feel free to take some from the PR I've attached to this Jira. 
It's still a work in progress (I found some potential issues with the Unit 
testing). I didn't know this was already taken

> Add more property to Hive3ConnectionPool
> 
>
> Key: NIFI-6684
> URL: https://issues.apache.org/jira/browse/NIFI-6684
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: jamescheng
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: PutHive3 enhance.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them 
> are using DBCP BasicDataSource. However, Hive3ConnectionPool  doesn't provide 
> some properties of what DBCPConnectionPool  has. Such as "Minimum Idle 
> Connections", "Max Idle Connections", "Max Connection Lifetime", "Time 
> Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum 
> Evictable Idle Time".
> This improvement is try to provide more properties for developer to set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool

2019-09-23 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935863#comment-16935863
 ] 

Alessandro D'Armiento commented on NIFI-6684:
-

Hello Peter, feel free to take some from the PR I've attached to this Jira. 
It's still a work in progress (I found some potential issues with the Unit 
testing). I didn't know this was already taken

> Add more property to Hive3ConnectionPool
> 
>
> Key: NIFI-6684
> URL: https://issues.apache.org/jira/browse/NIFI-6684
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: jamescheng
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: PutHive3 enhance.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them 
> are using DBCP BasicDataSource. However, Hive3ConnectionPool  doesn't provide 
> some properties of what DBCPConnectionPool  has. Such as "Minimum Idle 
> Connections", "Max Idle Connections", "Max Connection Lifetime", "Time 
> Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum 
> Evictable Idle Time".
> This improvement is try to provide more properties for developer to set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool

2019-09-22 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935295#comment-16935295
 ] 

Alessandro D'Armiento commented on NIFI-6684:
-

Hello [~jamescheng], I took a look at the two processors. 
Some of the features you ask for are definitely possible; unfortunately, 
however, you are comparing two processors that rely on different versions of 
commons-dbcp (that is, Hive3ConnectionPool uses an older implementation). 

For this reason, MAX_CONN_LIFETIME and SOFT_MIN_EVICTABLE_IDLE_TIME cannot be 
added in a minor update. I am, however, working on introducing MIN_IDLE, 
MAX_IDLE, EVICTION_RUN_PERIOD and MIN_EVICTABLE_IDLE_TIME

> Add more property to Hive3ConnectionPool
> 
>
> Key: NIFI-6684
> URL: https://issues.apache.org/jira/browse/NIFI-6684
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: jamescheng
>Priority: Minor
> Attachments: PutHive3 enhance.png
>
>
> The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them 
> are using DBCP BasicDataSource. However, Hive3ConnectionPool  doesn't provide 
> some properties of what DBCPConnectionPool  has. Such as "Minimum Idle 
> Connections", "Max Idle Connections", "Max Connection Lifetime", "Time 
> Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum 
> Evictable Idle Time".
> This improvement is try to provide more properties for developer to set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6698) HandleHttpRequest does not handle multiple URL parameters

2019-09-22 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935288#comment-16935288
 ] 

Alessandro D'Armiento commented on NIFI-6698:
-

Hello Patrick, 

I am trying to reproduce the issue; however, when I reach 
[http://localhost:8099/resource?name=ferret&color=purple|http://myserver.com/resource?name=ferret&color=purple:],
 the generated FlowFile seems to be what you were looking for:

!image-2019-09-22-15-19-18-523.png!

I can instead confirm that when using URL encoding (i.e. %26 for the '&' 
character), the parameters get scrambled: 

!image-2019-09-22-15-22-44-053.png!

 

I don't have a CentOS machine to perform the tests; I'm using an Ubuntu 18.04 
machine with NiFi 1.9.2
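To illustrate why the %26 work-around scrambles the parameters, a small standalone Java sketch (not NiFi code) of how the encoding interacts with '&'-splitting:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class QueryEncodingDemo {
    public static void main(String[] args) throws Exception {
        // With a literal '&', the query string splits into two parameters.
        String plain = "name=ferret&color=purple";
        System.out.println(plain.split("&").length); // 2

        // With %26 there is only ONE '&'-separated parameter before
        // decoding; decoding restores the literal '&' inside that single
        // value, which matches http.query.param.name being set to
        // "ferret&color=purple".
        String encoded = "name=ferret%26color=purple";
        System.out.println(encoded.split("&").length); // 1
        String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8.name());
        System.out.println(decoded); // name=ferret&color=purple
    }
}
```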

> HandleHttpRequest does not handle multiple URL parameters
> -
>
> Key: NIFI-6698
> URL: https://issues.apache.org/jira/browse/NIFI-6698
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
> Environment: CentOS
>Reporter: Patrick Laneville
>Priority: Minor
> Attachments: image-2019-09-22-15-19-18-523.png, 
> image-2019-09-22-15-22-44-053.png
>
>
> The http.query.string parameter is updated.  However, it is only updated 
> properly when the URL contains a single parameter.  Additional parameters are 
> truncated when separated by an "&" in the URL.
>  
> Example
> Attempt a GET on the following resource: 
> [http://myserver.com/resource?name=ferret&color=purple:]
> Result is the http.query.string attribute in the outgoing flow file is set to 
> "name=ferret".  The problem is the http.query.string should be set to 
> "name=ferret&color=purple"
> However, if you use URL encoding (encode & as %26) and specify  
> [http://myserver.com/resource?name=ferret%26color=purple|http://myserver.com/resource?name=ferret&color=purple:]
> then then http.query.string attribute in the outgoing flow file is set 
> properly to "name=ferret&color=purple"
> However, with the URL encoding work-around the attribute 
> http.query.param.name is incorrectly set to "ferret&color=purple"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6698) HandleHttpRequest does not handle multiple URL parameters

2019-09-22 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6698:

Attachment: image-2019-09-22-15-22-44-053.png

> HandleHttpRequest does not handle multiple URL parameters
> -
>
> Key: NIFI-6698
> URL: https://issues.apache.org/jira/browse/NIFI-6698
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
> Environment: CentOS
>Reporter: Patrick Laneville
>Priority: Minor
> Attachments: image-2019-09-22-15-19-18-523.png, 
> image-2019-09-22-15-22-44-053.png
>
>
> The http.query.string parameter is updated.  However, it is only updated 
> properly when the URL contains a single parameter.  Additional parameters are 
> truncated when separated by an "&" in the URL.
>  
> Example
> Attempt a GET on the following resource: 
> [http://myserver.com/resource?name=ferret&color=purple:]
> Result is the http.query.string attribute in the outgoing flow file is set to 
> "name=ferret".  The problem is the http.query.string should be set to 
> "name=ferret&color=purple"
> However, if you use URL encoding (encode & as %26) and specify  
> [http://myserver.com/resource?name=ferret%26color=purple|http://myserver.com/resource?name=ferret&color=purple:]
> then then http.query.string attribute in the outgoing flow file is set 
> properly to "name=ferret&color=purple"
> However, with the URL encoding work-around the attribute 
> http.query.param.name is incorrectly set to "ferret&color=purple"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6698) HandleHttpRequest does not handle multiple URL parameters

2019-09-22 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6698:

Attachment: image-2019-09-22-15-19-18-523.png

> HandleHttpRequest does not handle multiple URL parameters
> -
>
> Key: NIFI-6698
> URL: https://issues.apache.org/jira/browse/NIFI-6698
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
> Environment: CentOS
>Reporter: Patrick Laneville
>Priority: Minor
> Attachments: image-2019-09-22-15-19-18-523.png
>
>
> The http.query.string parameter is updated.  However, it is only updated 
> properly when the URL contains a single parameter.  Additional parameters are 
> truncated when separated by an "&" in the URL.
>  
> Example
> Attempt a GET on the following resource: 
> [http://myserver.com/resource?name=ferret&color=purple:]
> Result is the http.query.string attribute in the outgoing flow file is set to 
> "name=ferret".  The problem is the http.query.string should be set to 
> "name=ferret&color=purple"
> However, if you use URL encoding (encode & as %26) and specify  
> [http://myserver.com/resource?name=ferret%26color=purple|http://myserver.com/resource?name=ferret&color=purple:]
> then then http.query.string attribute in the outgoing flow file is set 
> properly to "name=ferret&color=purple"
> However, with the URL encoding work-around the attribute 
> http.query.param.name is incorrectly set to "ferret&color=purple"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-16 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929937#comment-16929937
 ] 

Alessandro D'Armiento edited comment on NIFI-6672 at 9/16/19 8:15 AM:
--

I think this situation could be handled in a number of ways (this, of course, 
also applies to NIFI-6673 and NIFI-6674).

At the moment, the behavior is "let's not say anything", which IMO is the 
least reliable behavior possible.

Some possible upgrades could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to tackle, IMO, is that the EL `plus` function (as 
well as minus, divide, multiply...) accepts both integral and decimal numbers, 
treating them one way or the other at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 
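A standalone sketch contrasting the three behaviors discussed here (silent wrap-around, `Math.addExact`, and promotion to Double); this is illustrative Java, not the EL implementation:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE; // 9223372036854775807L

        // Current behavior: plain + wraps around silently.
        long wrapped = max + 100;
        System.out.println(wrapped < 0); // true: overflowed to a negative value

        // Proposed alternative: Math.addExact fails loudly on overflow.
        try {
            Math.addExact(max, 100L);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }

        // Mixed long/double: promotion to double avoids the overflow but
        // silently loses precision (adding 100.0 changes nothing, since the
        // gap between adjacent doubles near 2^63 is larger than 100).
        double promoted = max + 100.0;
        System.out.println(promoted == (double) max); // true
    }
}
```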

 


was (Author: axelsync):
I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled IMO, is that EL `plus` (as well as 
minus, divide, multiply...) function allows both integral and decimal numbers, 
treating them either in one way or another at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

 

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To reproduce the bug, create a FF with an attribute equals to Long.MAX, then 
> add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception
> !image-2019-09-14-17-32-58-740.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6674) Expression language minus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6674:

Summary: Expression language minus operation doesn't check for overflow  
(was: Expression language minus operation doesn't check for underflow)

> Expression language minus operation doesn't check for overflow
> --
>
> Key: NIFI-6674
> URL: https://issues.apache.org/jira/browse/NIFI-6674
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Alessandro D'Armiento
>Priority: Major
> Fix For: 1.9.2
>
> Attachments: image-2019-09-14-17-51-41-809.png
>
>
> To reproduce the bug, create a FF with an attribute equals to Long.MIN, then 
> subtract 100 to that attribute in a following UpdateAttribute processor. The 
> property will underflow to a positive number without throwing any exception
> !image-2019-09-14-17-51-41-809.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6674) Expression language minus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6674:

Description: 
To reproduce the bug, create a FF with an attribute equals to Long.MIN, then 
subtract 100 to that attribute in a following UpdateAttribute processor. The 
property will overflow to a positive number without throwing any exception

!image-2019-09-14-17-51-41-809.png!

  was:
To reproduce the bug, create a FF with an attribute equals to Long.MIN, then 
subtract 100 to that attribute in a following UpdateAttribute processor. The 
property will underflow to a positive number without throwing any exception

!image-2019-09-14-17-51-41-809.png!


> Expression language minus operation doesn't check for overflow
> --
>
> Key: NIFI-6674
> URL: https://issues.apache.org/jira/browse/NIFI-6674
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Alessandro D'Armiento
>Priority: Major
> Fix For: 1.9.2
>
> Attachments: image-2019-09-14-17-51-41-809.png
>
>
> To reproduce the bug, create a FF with an attribute equals to Long.MIN, then 
> subtract 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a positive number without throwing any exception
> !image-2019-09-14-17-51-41-809.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929937#comment-16929937
 ] 

Alessandro D'Armiento edited comment on NIFI-6672 at 9/15/19 1:18 PM:
--

I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled IMO, is that EL `plus` (as well as 
minus, divide, multiply...) function allows both integral and decimal numbers, 
treating them either in one way or another at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

 


was (Author: axelsync):
I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled IMO, is that EL `plus` (as well as 
minus, divide, multiply...) function allows both integral and decimal numbers, 
treating them either in one way or another at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that causing overflow with two `Double` values won't render the 
output as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug IMO. 

 

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>
> To reproduce the bug, create a FF with an attribute equals to Long.MAX, then 
> add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception
> !image-2019-09-14-17-32-58-740.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929937#comment-16929937
 ] 

Alessandro D'Armiento edited comment on NIFI-6672 at 9/15/19 8:50 AM:
--

I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled IMO, is that EL `plus` (as well as 
minus, divide, multiply...) function allows both integral and decimal numbers, 
treating them either in one way or another at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that causing overflow with two `Double` values won't render the 
output as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug IMO. 

 


was (Author: axelsync):
I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled IMO, is that EL `plus` (as well as 
minus, divide, multiply...) function allow both integral and decimal numbers, 
threating them either way at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
    result = subjectValue.doubleValue() + plus.doubleValue();
} else {
    result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that causing overflow with two `Double` values won't render the 
output as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug IMO. 

 

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>
> To reproduce the bug, create a FF with an attribute equals to Long.MAX, then 
> add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception
> !image-2019-09-14-17-32-58-740.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929937#comment-16929937
 ] 

Alessandro D'Armiento edited comment on NIFI-6672 at 9/15/19 8:49 AM:
--

I think this situation could be handled in several ways (This, of course, 
counts also for #NIFI-6673 and #NIFI-6674)

At the moment, the behavior is "let's not say anything" that IMO is the less 
reliable behavior possible.

Some possible upgrade could be: 
 * Evaluating `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

 

Another source of confusion to be tackled, IMO, is that the EL `plus` function 
(as well as minus, divide, multiply...) allows both integral and decimal 
numbers, treating them differently at runtime:

```
if (subjectValue instanceof Double || plus instanceof Double) {
  result = subjectValue.doubleValue() + plus.doubleValue();
} else {
  result = subjectValue.longValue() + plus.longValue();
}
```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that overflowing with two `Double` values won't render the output 
as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug, IMO. 
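For illustration, a minimal standalone sketch (not NiFi code) of the three behaviors discussed above: silent wrap-around on unchecked `long` addition, `Math.addExact`, and promotion to `double`:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE; // 9223372036854775807L

        // Unchecked long addition silently wraps around to a negative number.
        long wrapped = max + 100;
        System.out.println(wrapped); // -9223372036854775709

        // Math.addExact throws ArithmeticException on overflow instead.
        try {
            Math.addExact(max, 100L);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }

        // Promoting to double avoids the wrap-around but loses precision:
        // the result is a rounded approximation, not an exact sum.
        double promoted = max + 100.0;
        System.out.println(promoted); // 9.223372036854776E18
    }
}
```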

 


was (Author: axelsync):
I think this situation could be handled in several ways (this, of course, 
also applies to NIFI-6673 and NIFI-6674)

At the moment, the behavior is "let's not say anything", which IMO is the least 
reliable behavior possible.

Some possible improvements could be: 
 * Evaluating the use of `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

A source of confusion, IMO, is that the EL `plus` function (as well as minus, 
divide, multiply...) allows both integral and decimal numbers, treating them 
differently at runtime:

```

if (subjectValue instanceof Double || plus instanceof Double){
  result = subjectValue.doubleValue() + plus.doubleValue();
} else {
  result = subjectValue.longValue() + plus.longValue();
}

```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that overflowing with two `Double` values won't render the output 
as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug, IMO. 

 

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>
> To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
> then add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception.
> !image-2019-09-14-17-32-58-740.png!





[jira] [Commented] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-15 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929937#comment-16929937
 ] 

Alessandro D'Armiento commented on NIFI-6672:
-

I think this situation could be handled in several ways (this, of course, 
also applies to NIFI-6673 and NIFI-6674)

At the moment, the behavior is "let's not say anything", which IMO is the least 
reliable behavior possible.

Some possible improvements could be: 
 * Evaluating the use of `java.math.BigInteger`
 * Using `Math.addExact` to throw an `ArithmeticException` in case of overflow
 * Leaving the overflow issues as they are, but adding a `logger.warn` where a 
possible overflow is detected

A source of confusion, IMO, is that the EL `plus` function (as well as minus, 
divide, multiply...) allows both integral and decimal numbers, treating them 
differently at runtime:

```

if (subjectValue instanceof Double || plus instanceof Double){
  result = subjectValue.doubleValue() + plus.doubleValue();
} else {
  result = subjectValue.longValue() + plus.longValue();
}

```

This means that using `plus` to sum [Long.MAX_VALUE] `9223372036854775807L` and 
`100` will produce unmanaged overflow, while using it to sum 
`9223372036854775807L` and `100.0` will instead promote both values to Double, 
avoiding overflow but losing precision. 

Note also that overflowing with two `Double` values won't render the output 
as `Double.POSITIVE_INFINITY`, but will instead leave an empty 
`String` in the property. This could also be considered part of the bug, IMO. 

 

> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>
> To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
> then add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception.
> !image-2019-09-14-17-32-58-740.png!





[jira] [Updated] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-14 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6672:

Description: 
To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
then add 100 to that attribute in a following UpdateAttribute processor. The 
property will overflow to a negative number without throwing any exception.

!image-2019-09-14-17-32-58-740.png!

  was:
To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
then add 1 to that attribute in a following UpdateAttribute processor. The 
property will overflow to a negative number without throwing any exception.

!image-2019-09-14-17-32-58-740.png!


> Expression language plus operation doesn't check for overflow
> -
>
> Key: NIFI-6672
> URL: https://issues.apache.org/jira/browse/NIFI-6672
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-09-14-17-32-58-740.png
>
>
> To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
> then add 100 to that attribute in a following UpdateAttribute processor. The 
> property will overflow to a negative number without throwing any exception.
> !image-2019-09-14-17-32-58-740.png!





[jira] [Created] (NIFI-6674) Expression language minus operation doesn't check for underflow

2019-09-14 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-6674:
---

 Summary: Expression language minus operation doesn't check for 
underflow
 Key: NIFI-6674
 URL: https://issues.apache.org/jira/browse/NIFI-6674
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Alessandro D'Armiento
 Fix For: 1.9.2
 Attachments: image-2019-09-14-17-51-41-809.png

To reproduce the bug, create a FF with an attribute equal to Long.MIN_VALUE, 
then subtract 100 from that attribute in a following UpdateAttribute processor. 
The property will underflow to a positive number without throwing any exception.

!image-2019-09-14-17-51-41-809.png!





[jira] [Created] (NIFI-6673) Expression language multiply operation doesn't check for overflow

2019-09-14 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-6673:
---

 Summary: Expression language multiply operation doesn't check for 
overflow
 Key: NIFI-6673
 URL: https://issues.apache.org/jira/browse/NIFI-6673
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento
 Attachments: image-2019-09-14-17-38-19-397.png

To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
then multiply that attribute by 2 in a following UpdateAttribute processor. 
The property will overflow to a negative number without throwing any exception.

!image-2019-09-14-17-38-19-397.png!





[jira] [Created] (NIFI-6672) Expression language plus operation doesn't check for overflow

2019-09-14 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-6672:
---

 Summary: Expression language plus operation doesn't check for 
overflow
 Key: NIFI-6672
 URL: https://issues.apache.org/jira/browse/NIFI-6672
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento
 Attachments: image-2019-09-14-17-32-58-740.png

To reproduce the bug, create a FF with an attribute equal to Long.MAX_VALUE, 
then add 1 to that attribute in a following UpdateAttribute processor. The 
property will overflow to a negative number without throwing any exception.

!image-2019-09-14-17-32-58-740.png!





[jira] [Commented] (NIFI-6628) Separate out logging of extensions vs. nifi framework

2019-09-06 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924099#comment-16924099
 ] 

Alessandro D'Armiento commented on NIFI-6628:
-

Hello, what do you think about using the new nifi-framework log as the root for 
all logging activity, while leaving the old nifi-app log to the 
nifi-processors, LogMessage, and LogAttribute loggers?  

> Separate out logging of extensions vs. nifi framework
> -
>
> Key: NIFI-6628
> URL: https://issues.apache.org/jira/browse/NIFI-6628
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Mark Payne
>Priority: Major
>
> Currently, nifi has 3 logs that are generated: `nifi-app.log`, 
> `nifi-user.log`, and `nifi-bootstrap.log`. The vast majority of logs are 
> written to `nifi-app.log`. This can result in the app log being extremely 
> verbose and difficult to follow if there are Processors that are configured 
> with an invalid username/password, etc. that result in spewing a lot of 
> errors. As a result, it can be very difficult to find error messages about 
> framework/app itself.
> We should update the `logback.xml` file to add a new `nifi-framework.log` 
> file that contains framework-related loggers and let everything else go to 
> `nifi-app.log`
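
As a sketch only (the appender name, file paths, and logger package below are illustrative assumptions, not the actual committed configuration), the proposed split could look like this in `logback.xml`:

```xml
<!-- New dedicated file for framework-related log messages -->
<appender name="FRAMEWORK_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/nifi-framework.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/nifi-framework_%d.log</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- Route framework packages to the new log; additivity="false" keeps them
     out of nifi-app.log, which everything else still uses -->
<logger name="org.apache.nifi.controller" level="INFO" additivity="false">
    <appender-ref ref="FRAMEWORK_FILE"/>
</logger>
```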





[jira] [Updated] (NIFI-6594) TestListenSyslog.testParsingError fails with "expected:<1> but was:<0>" with FR environment

2019-08-29 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6594:

Summary: TestListenSyslog.testParsingError fails with "expected:<1> but 
was:<0>" with FR environment  (was: TestListenSyslog.testParsingError randomly 
fails with "expected:<1> but was:<0>")

> TestListenSyslog.testParsingError fails with "expected:<1> but was:<0>" with 
> FR environment
> ---
>
> Key: NIFI-6594
> URL: https://issues.apache.org/jira/browse/NIFI-6594
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-08-27-16-56-02-796.png
>
>
>  `TestListenSyslog.testParsingError()` randomly fails when it shouldn't
> `TestListenSyslog.testParsingError:141 expected:<1> but was:<0>`
> !image-2019-08-27-16-56-02-796.png!





[jira] [Updated] (NIFI-6594) TestListenSyslog.testParsingError fails with "expected:<1> but was:<0>" with FR environment

2019-08-29 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6594:

Description: 
 `TestListenSyslog.testParsingError()` fails when it shouldn't

`TestListenSyslog.testParsingError:141 expected:<1> but was:<0>`

!image-2019-08-27-16-56-02-796.png!

  was:
 `TestListenSyslog.testParsingError()` randomly fails when it shouldn't

`TestListenSyslog.testParsingError:141 expected:<1> but was:<0>`

!image-2019-08-27-16-56-02-796.png!


> TestListenSyslog.testParsingError fails with "expected:<1> but was:<0>" with 
> FR environment
> ---
>
> Key: NIFI-6594
> URL: https://issues.apache.org/jira/browse/NIFI-6594
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Alessandro D'Armiento
>Priority: Major
> Attachments: image-2019-08-27-16-56-02-796.png
>
>
>  `TestListenSyslog.testParsingError()` fails when it shouldn't
> `TestListenSyslog.testParsingError:141 expected:<1> but was:<0>`
> !image-2019-08-27-16-56-02-796.png!





[jira] [Created] (NIFI-6594) TestListenSyslog.testParsingError randomly fails with "expected:<1> but was:<0>"

2019-08-27 Thread Alessandro D'Armiento (Jira)
Alessandro D'Armiento created NIFI-6594:
---

 Summary: TestListenSyslog.testParsingError randomly fails with 
"expected:<1> but was:<0>"
 Key: NIFI-6594
 URL: https://issues.apache.org/jira/browse/NIFI-6594
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Alessandro D'Armiento
 Attachments: image-2019-08-27-16-56-02-796.png

 `TestListenSyslog.testParsingError()` randomly fails when it shouldn't

`TestListenSyslog.testParsingError:141 expected:<1> but was:<0>`

!image-2019-08-27-16-56-02-796.png!





[jira] [Commented] (NIFI-6591) DbcpConnectionPool service UnsatisfiedLinkError

2019-08-27 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916768#comment-16916768
 ] 

Alessandro D'Armiento commented on NIFI-6591:
-

Hello [~ilay.e], 

I'm sorry I am not able to help you directly; however, this is not the right 
place for this kind of problem, and it will be difficult for supporters to keep 
track of it here. Still, the report can be very helpful.


I suggest that you: 

*  Get in touch with the devs on the mailing list / Slack workspace: 
[https://nifi.apache.org/mailing_lists.html] 

*  If your problem turns out not to be a bug, close this issue 
with the "Not a problem" resolution

*  If your problem turns out to actually be a bug, elaborate this 
issue by adding more details on how to reproduce it

> DbcpConnectionPool service UnsatisfiedLinkError
> ---
>
> Key: NIFI-6591
> URL: https://issues.apache.org/jira/browse/NIFI-6591
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.2
> Environment: Software platform 
>Reporter: Remoleav
>Priority: Major
>
> I’ve been using nifi since 0.6.1 version. All versions I’ve been using didn’t 
> require several instances of dbcpconnectionpool service. Unfortunately in 
> most stable version 1.9.2 I’m facing with above error 
> “unsatisfiedlinkerror... sqljdbc_auth.dll already loaded in another class 
> loader. Nifi deployed on windows. This error began to happen after some 
> successful working. Restart didn’t bring any success. And I’m stuck with 
> absolutely blocking processors using such service. I stumble on such a 
> problem while searching for known issue in bugs list but didn’t find any 
> number got for it. How could it be eliminated or could you hint me on some 
> known number in Jira bugs to track for it?





[jira] [Commented] (NIFI-6592) Test fails for nifi-media-processors when decimals-separator != .

2019-08-27 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916755#comment-16916755
 ] 

Alessandro D'Armiento commented on NIFI-6592:
-

Hello, I took a look at the actual code implementation and I found out that 
what you are pointing out has already been resolved with NIFI-6529.
Quoting [~a.stock]'s commit message:
bq. NIFI-6529 Updated TestFormatUtils.testFormatDataSize to use 
DecimalFormatSymbols when verifying results of FormatUtils.formatDataSize  
NIFI-6529 Updated tests that fail with non en_US locales  Signed-off-by: Pierre 
Villard   This closes #3639. 

> Test fails for nifi-media-processors when decimals-separator != .
> -
>
> Key: NIFI-6592
> URL: https://issues.apache.org/jira/browse/NIFI-6592
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Extensions
>Affects Versions: 1.9.2
> Environment: macOS 10.14.6
>Reporter: H.Verweij
>Priority: Trivial
>  Labels: test
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When building NiFi it fails a test in the nifi-media-processors component:
> [*ERROR*] 
> testExtractPNG(org.apache.nifi.processors.image.ExtractImageMetadataTest)  
> Time elapsed: 0.014 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<0[.]455> but was:<0[,]455>
> at 
> org.apache.nifi.processors.image.ExtractImageMetadataTest.testExtractPNG(ExtractImageMetadataTest.java:96)
>  
> It's trivial to fix, just make English the primary language, but perhaps the 
> test can be more robust.
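
A small self-contained illustration (not NiFi code) of why the hard-coded expectation breaks: the decimal separator is locale-dependent, so `0.455` renders as `0,455` under comma-separator locales:

```java
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class LocaleSeparatorDemo {
    public static void main(String[] args) {
        // A value formatted as "0.455" under en_US renders as "0,455" under
        // locales whose decimal separator is a comma, so a hard-coded
        // expected string fails there.
        char us = DecimalFormatSymbols.getInstance(Locale.US).getDecimalSeparator();
        char fr = DecimalFormatSymbols.getInstance(Locale.FRANCE).getDecimalSeparator();
        System.out.println(us); // .
        System.out.println(fr); // ,
    }
}
```

Comparing against `DecimalFormatSymbols` for the current locale, as the NIFI-6529 fix does, makes such tests locale-independent.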





[jira] [Comment Edited] (NIFI-6592) Test fails for nifi-media-processors when decimals-separator != .

2019-08-27 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916755#comment-16916755
 ] 

Alessandro D'Armiento edited comment on NIFI-6592 at 8/27/19 2:11 PM:
--

Hello, I took a look at the actual code implementation and I found out that 
what you are pointing out has already been resolved with NIFI-6529.
 Quoting [~a.stock]'s commit message:
{quote}NIFI-6529 Updated TestFormatUtils.testFormatDataSize to use 
DecimalFormatSymbols when verifying results of FormatUtils.formatDataSize 
NIFI-6529 Updated tests that fail with non en_US locales Signed-off-by: Pierre 
Villard  This closes #3639.
{quote}


was (Author: axelsync):
Hello, I took a look at the actual code implementation and I found out what you 
are pointing out has already been resolved with #6529
Quoting [~a.stock] commit message:
bq. NIFI-6529 Updated TestFormatUtils.testFormatDataSize to use 
DecimalFormatSymbols when verifying results of FormatUtils.formatDataSize  
NIFI-6529 Updated tests that fail with non en_US locales  Signed-off-by: Pierre 
Villard   This closes #3639. 

> Test fails for nifi-media-processors when decimals-separator != .
> -
>
> Key: NIFI-6592
> URL: https://issues.apache.org/jira/browse/NIFI-6592
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Extensions
>Affects Versions: 1.9.2
> Environment: macOS 10.14.6
>Reporter: H.Verweij
>Priority: Trivial
>  Labels: test
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When building NiFi it fails a test in the nifi-media-processors component:
> [*ERROR*] 
> testExtractPNG(org.apache.nifi.processors.image.ExtractImageMetadataTest)  
> Time elapsed: 0.014 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<0[.]455> but was:<0[,]455>
> at 
> org.apache.nifi.processors.image.ExtractImageMetadataTest.testExtractPNG(ExtractImageMetadataTest.java:96)
>  
> It's trivial to fix, just make English the primary language, but perhaps the 
> test can be more robust.





[jira] [Commented] (NIFI-6555) Select all checkbox for fds data table is not aligned

2019-08-26 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915639#comment-16915639
 ] 

Alessandro D'Armiento commented on NIFI-6555:
-

Hello Scott, I don't think this issue should be considered Major. 
Does the bug only affect the front-end alignment, or does it actually affect 
any functionality? 

> Select all checkbox for fds data table is not aligned
> -
>
> Key: NIFI-6555
> URL: https://issues.apache.org/jira/browse/NIFI-6555
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: FDS
>Reporter: Scott Aslan
>Priority: Major
> Fix For: fds-0.3
>
> Attachments: image-2019-08-14-15-02-39-727.png
>
>
> The select all checkbox for fds data table should align with the other 
> checkboxes in the table:
>  
> !image-2019-08-14-15-02-39-727.png!





[jira] [Assigned] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-08-24 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6500:
---

Assignee: Alessandro D'Armiento

> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft:
> --- padLeft(int n) will prepend a default character to the input string 
> until it reaches the length n
> --- padLeft(int n, char c) will prepend the character c to the input string 
> until it reaches the length n
> -- padRight:
> --- padRight(int n) will append a default character to the input string until 
> it reaches the length n
> --- padRight(int n, char c) will append the character c to the input string 
> until it reaches the length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> input = "myString"
> 
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"
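
The proposed semantics can be sketched in plain Java as follows (a hypothetical helper; `PadUtil` and its signatures are illustrative, not the actual NiFi implementation):

```java
public final class PadUtil {
    private static final char DEFAULT_PAD = '_'; // renderable default character

    // Prepend c until the string reaches length n; no-op if already long enough.
    public static String padLeft(String s, int n, char c) {
        if (s == null || s.length() >= n) return s;
        StringBuilder sb = new StringBuilder();
        for (int i = s.length(); i < n; i++) sb.append(c);
        return sb.append(s).toString();
    }

    public static String padLeft(String s, int n) {
        return padLeft(s, n, DEFAULT_PAD);
    }

    // Append c until the string reaches length n; no-op if already long enough.
    public static String padRight(String s, int n, char c) {
        if (s == null || s.length() >= n) return s;
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < n) sb.append(c);
        return sb.toString();
    }

    public static String padRight(String s, int n) {
        return padRight(s, n, DEFAULT_PAD);
    }
}
```

For example, `padLeft("myString", 10, '#')` yields `"##myString"`, and `padLeft("myString", 5)` returns the input unchanged.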





[jira] [Assigned] (NIFI-6502) Add padLeft() and padRight() functions to RecordPath

2019-08-20 Thread Alessandro D'Armiento (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6502:
---

Assignee: Alessandro D'Armiento

> Add padLeft() and padRight() functions to RecordPath 
> -
>
> Key: NIFI-6502
> URL: https://issues.apache.org/jira/browse/NIFI-6502
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Record path 
> functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
>  don't provide anything to perform string padding
> h1. Improvement Proposal
> - Support two new recordPath functions
> -- padLeft:
> --- padLeft(string field, int desiredLength) will prepend a default character 
> to the input string until it reaches the length desiredLength
> --- padLeft(string field, int desiredLength, char c) will prepend the 
> character c to the input string until it reaches the length desiredLength
> -- padRight:
> --- padRight(string field, int desiredLength) will append a default character 
> to the input string until it reaches the length desiredLength
> --- padRight(string field, int desiredLength, char c) will append the 
> character c to the input string until it reaches the length desiredLength
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> {
>   "name" : "john smith"
> }
> 
> `padLeft(/name, 15, '@')` => @john smith
> `padLeft(/name, 15)` =>  _john smith
> `padRight(/name, 15, '@')` => john smith@
> `padRight(/name, 15)`=> john smith_





[jira] [Updated] (NIFI-6524) MergeContent properties should accept expression language variables

2019-08-04 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6524:

Description: 
Many of the properties of the MergeContent Processor do not currently support 
any form of expression language. I think it would be a nice improvement to 
update them in the following way: 
* Minimum Number of Entries: Accepts from Variable Registry
* Maximum Number of Entries: Accepts from Variable Registry
* Minimum Group Size: Accepts from Variable Registry
* Maximum Group Size: Accepts from Variable Registry
* Max Bin Age: Accepts from Variable Registry
* Maximum number of Bins: Accepts from Variable Registry

  was:
Many of the properties of the MergeContent Processor do not currently support 
any form of expression language. I think it would be a nice improvement to 
update them in the following way: 
* Minimum Number of Entries: Accepts from Variable Registry
* Maximum Number of Entries: Accepts from Variable Registry
* Minimum Group Size: Accepts from Variable Registry
* Maximum Group Size: Accepts from Variable Registry
* Max Bin Age: Accepts from Variable Registry
* Maximum number of Bins: Accepts from Variable Registry
* Compression Level: Accepts from Variable Registry


> MergeContent properties should accept expression language variables
> ---
>
> Key: NIFI-6524
> URL: https://issues.apache.org/jira/browse/NIFI-6524
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> Many of the properties of the MergeContent Processor do not currently support 
> any form of expression language. I think it would be a nice improvement to 
> update them in the following way: 
> * Minimum Number of Entries: Accepts from Variable Registry
> * Maximum Number of Entries: Accepts from Variable Registry
> * Minimum Group Size: Accepts from Variable Registry
> * Maximum Group Size: Accepts from Variable Registry
> * Max Bin Age: Accepts from Variable Registry
> * Maximum number of Bins: Accepts from Variable Registry



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (NIFI-6524) MergeContent properties should accept expression language variables

2019-08-03 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6524:
---

 Summary: MergeContent properties should accept expression language 
variables
 Key: NIFI-6524
 URL: https://issues.apache.org/jira/browse/NIFI-6524
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Alessandro D'Armiento


Many of the properties of the MergeContent Processor do not currently support 
any form of expression language. I think it would be a nice improvement to 
update them in the following way: 
* Minimum Number of Entries: Accepts from Variable Registry
* Maximum Number of Entries: Accepts from Variable Registry
* Minimum Group Size: Accepts from Variable Registry
* Maximum Group Size: Accepts from Variable Registry
* Max Bin Age: Accepts from Variable Registry
* Maximum number of Bins: Accepts from Variable Registry
* Compression Level: Accepts from Variable Registry





[jira] [Updated] (NIFI-6523) MergeRecords properties should accept expression language variables

2019-08-03 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6523:

Summary: MergeRecords properties should accept expression language 
variables  (was: MergeRecords property should accept expression language 
variables)

> MergeRecords properties should accept expression language variables
> ---
>
> Key: NIFI-6523
> URL: https://issues.apache.org/jira/browse/NIFI-6523
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> After NIFI-6490, I think it would be a nice improvement to the MergeRecord 
> Processor to have these properties updated:
> * Correlation Attribute Name: Accepts both from Variable Registry and 
> Flowfile Attributes (just like MergeContent) 
> * Minimum Bin Size: Accepts from Variable Registry
> * Maximum Bin Size: Accepts from Variable Registry
> * Max Bin Age: Accepts from Variable Registry
> * Maximum Number of Bins: Accepts from Variable Registry





[jira] [Created] (NIFI-6523) MergeRecords property should accept expression language variables

2019-08-03 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6523:
---

 Summary: MergeRecords property should accept expression language 
variables
 Key: NIFI-6523
 URL: https://issues.apache.org/jira/browse/NIFI-6523
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


After NIFI-6490, I think it would be a nice improvement to the MergeRecord 
Processor to have these properties updated:
* Correlation Attribute Name: Accepts both from Variable Registry and Flowfile 
Attributes (just like MergeContent) 
* Minimum Bin Size: Accepts from Variable Registry
* Maximum Bin Size: Accepts from Variable Registry
* Max Bin Age: Accepts from Variable Registry
* Maximum Number of Bins: Accepts from Variable Registry





[jira] [Commented] (NIFI-6509) Date related issue in unit test VolatileComponentStatusRepositoryTest

2019-08-03 Thread Alessandro D'Armiento (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899444#comment-16899444
 ] 

Alessandro D'Armiento commented on NIFI-6509:
-

Hello, I am experiencing this issue too. 
In particular, in my case it fails in {code:java} 
VolatileComponentStatusRepositoryTest.testFilterDatesUsingStartFilter() {code} 
with:

{panel:title=Error log}
[ERROR] Failures: 
[ERROR]   
VolatileComponentStatusRepositoryTest.testFilterDatesUsingStartFilter:132 
expected: but was:
{panel}



> Date related issue in unit test VolatileComponentStatusRepositoryTest
> -
>
> Key: NIFI-6509
> URL: https://issues.apache.org/jira/browse/NIFI-6509
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Tamas Palfy
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Unit test 
> {{VolatileComponentStatusRepositoryTest.testFilterDatesUsingPreferredDataPoints}}
>  may fail with the following:
> {code:java}
> java.lang.AssertionError: 
> Expected :Thu Jan 01 00:00:00 CET 1970
> Actual   :Thu Jan 01 01:00:00 CET 1970
> {code}
> The test creates a {{VolatileComponentStatusRepository}} instance and adds 
> {{java.util.Date}} objects to it starting from epoch (via {{new Date(0)}}).
>  This first date at epoch is the _Actual_ in the _AssertionError_.
> It then filters this list by looking for those that are earlier than or match 
> a _start_ parameter. This _start_ is created from a {{LocalDateTime}} at the 
> default system time zone.
>  This is the _Expected_ in the _AssertionError_.
> In general the issue is the difference in how the list is created (dates that 
> are 00:00:00 GMT) and how the filter parameter date is created (00:00:00 at 
> system time zone).
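The mismatch described above can be reproduced outside the test with a minimal sketch (plain Java, not the actual test code; class name is mine): a Date built from epoch millis is fixed to UTC, while a Date derived from a local midnight shifts with the default zone.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.Date;

public class EpochVsLocalMidnight {
    public static void main(String[] args) {
        // The repository is populated with dates built from epoch millis,
        // which always denote 1970-01-01T00:00:00 UTC.
        Date fromEpoch = new Date(0);

        // The filter's start date is built from local midnight, interpreted in
        // the system default zone (on CET this is 1969-12-31T23:00:00 UTC).
        Date fromLocalMidnight = Date.from(
                LocalDate.of(1970, 1, 1).atStartOfDay(ZoneId.systemDefault()).toInstant());

        // The two differ by exactly the zone's offset from UTC at epoch.
        long zoneOffsetMillis = ZoneId.systemDefault().getRules()
                .getOffset(Instant.EPOCH).getTotalSeconds() * 1000L;
        System.out.println(
                fromEpoch.getTime() - fromLocalMidnight.getTime() == zoneOffsetMillis); // prints true
    }
}
```

On any machine whose default zone is not UTC the two dates are distinct, which is exactly why the assertion compares unequal values.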



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (NIFI-6513) The property of PutHive3Streaming Processor named "Auto-Create Partitions" is useless

2019-08-03 Thread Alessandro D'Armiento (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899443#comment-16899443
 ] 

Alessandro D'Armiento edited comment on NIFI-6513 at 8/3/19 11:41 AM:
--

Are we sure we actually need that boolean? 
If you enable dynamic partitioning, you should be able to set the maximum 
number of partitions in the Hive configuration files you can pass to the 
`Hive Configuration Resources` property. Furthermore, the HiveConnection[1] 
isDynamic() method only checks for the existence of a list of static partitions 
(as well as whether the table itself is partitioned). 
If you instead fill `Partition Values`, you are telling Hive you are using 
static partitioning and no partitions should be created anyway. 

I suggest renaming `Partition Values` to `Static Partition Values` for 
correctness and removing that boolean.

cc [~mattyb149], who worked on the Hive Processor in NIFI-4963

[1] 
https://github.com/apache/hive/blob/d7475aa98f6a2fc813e2e1c0ad99f902cb28cc00/streaming/src/java/org/apache/hive/streaming/HiveStreamingConnection.java#L867


was (Author: axelsync):
Are we sure we actually need that boolean? 
If you enable dynamic partitioning, you should be able to set the maximum 
number of partitions in the Hive configuration files you can pass to the 
`Hive Configuration Resources` property. If you instead fill 
`Partition Values`, you are telling Hive you are using static partitioning and 
no partitions should be created anyway. 

I suggest renaming `Partition Values` to `Static Partition Values` for 
correctness and removing that boolean.

cc [~mattyb149], who worked on the Hive Processor in NIFI-4963

> The property of PutHive3Streaming Processor named "Auto-Create Partitions" is 
> useless
> -
>
> Key: NIFI-6513
> URL: https://issues.apache.org/jira/browse/NIFI-6513
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.0
> Environment: Hive3.1.0
>Reporter: Lu Wang
>Priority: Major
>
> _emphasized text_The PutHive3Streaming processor always creates the 
> partition, regardless of whether the value of the property named "Auto-Create 
> Partitions" is set to true or false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NIFI-6513) The property of PutHive3Streaming Processor named "Auto-Create Partitions" is useless

2019-08-03 Thread Alessandro D'Armiento (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899443#comment-16899443
 ] 

Alessandro D'Armiento commented on NIFI-6513:
-

Are we sure we actually need that boolean? 
If you enable dynamic partitioning, you should be able to set the maximum 
number of partitions in the Hive configuration files you can pass to the 
`Hive Configuration Resources` property. If you instead fill 
`Partition Values`, you are telling Hive you are using static partitioning and 
no partitions should be created anyway. 

I suggest renaming `Partition Values` to `Static Partition Values` for 
correctness and removing that boolean.

cc [~mattyb149], who worked on the Hive Processor in NIFI-4963

> The property of PutHive3Streaming Processor named "Auto-Create Partitions" is 
> useless
> -
>
> Key: NIFI-6513
> URL: https://issues.apache.org/jira/browse/NIFI-6513
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.0
> Environment: Hive3.1.0
>Reporter: Lu Wang
>Priority: Major
>
> _emphasized text_The PutHive3Streaming processor always creates the 
> partition, regardless of whether the value of the property named "Auto-Create 
> Partitions" is set to true or false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6513) The property of PutHive3Streaming Processor named "Auto-Create Partitions" is useless

2019-08-03 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6513:

Description: _emphasized text_The PutHive3Streaming processor always 
creates the partition, regardless of whether the value of the property named 
"Auto-Create Partitions" is set to true or false.  (was: The PutHive3Streaming 
processor always creates the partition, regardless of whether the value of the 
property named "Auto-Create Partitions" is set to true or false.)

> The property of PutHive3Streaming Processor named "Auto-Create Partitions" is 
> useless
> -
>
> Key: NIFI-6513
> URL: https://issues.apache.org/jira/browse/NIFI-6513
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.0
> Environment: Hive3.1.0
>Reporter: Lu Wang
>Priority: Major
>
> _emphasized text_The PutHive3Streaming processor always creates the 
> partition, regardless of whether the value of the property named "Auto-Create 
> Partitions" is set to true or false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NIFI-6513) The property of PutHive3Streaming Processor named "Auto-Create Partitions" is useless

2019-08-03 Thread Alessandro D'Armiento (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899437#comment-16899437
 ] 

Alessandro D'Armiento commented on NIFI-6513:
-

It seems the issue is in the `HiveOptions` object, which receives a boolean 
`autoCreatePartitions` in its `Builder`, but the value is then never used (nor 
does a getter for it exist). I'm working on fixing this.
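The pattern described can be sketched as follows. This is a simplified, hypothetical illustration (class and method names are mine, not the actual HiveOptions source): the builder accepts the flag, but the built object never stores or exposes it, so the processor cannot act on it.

```java
public class HiveOptionsSketch {
    private final String tableName;
    // Note: no autoCreatePartitions field and no getter, mirroring the bug.

    private HiveOptionsSketch(String tableName) {
        this.tableName = tableName;
    }

    String getTable() {
        return tableName;
    }

    static class Builder {
        private String tableName;
        private boolean autoCreatePartitions; // accepted...

        Builder withTable(String tableName) {
            this.tableName = tableName;
            return this;
        }

        Builder withAutoCreatePartitions(boolean b) {
            this.autoCreatePartitions = b; // ...but never propagated
            return this;
        }

        HiveOptionsSketch build() {
            // The flag is silently dropped here: the built object only
            // receives the table name, so downstream code cannot read it.
            return new HiveOptionsSketch(tableName);
        }
    }
}
```

Whatever the caller passes to `withAutoCreatePartitions` has no effect on the built object, which matches the behavior reported in this issue.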

> The property of PutHive3Streaming Processor named "Auto-Create Partitions" is 
> useless
> -
>
> Key: NIFI-6513
> URL: https://issues.apache.org/jira/browse/NIFI-6513
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.0
> Environment: Hive3.1.0
>Reporter: Lu Wang
>Priority: Major
>
> The PutHive3Streaming processor always creates the partition, regardless of 
> whether the value of the property named "Auto-Create Partitions" is set to 
> true or false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (NIFI-6502) Add padLeft() and padRight() functions to RecordPath

2019-07-31 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6502:
---

Assignee: (was: Alessandro D'Armiento)

> Add padLeft() and padRight() functions to RecordPath 
> -
>
> Key: NIFI-6502
> URL: https://issues.apache.org/jira/browse/NIFI-6502
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Record path 
> functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
>  don't provide anything to perform string padding
> h1. Improvement Proposal
> - Support two new recordPath functions
> -- padLeft:
> --- padLeft(string field, int desiredLength) will prepend a default character 
> to the input string until it reaches the length desiredLength
> --- padLeft(string field, int desiredLength, char c) will prepend the 
> character c to the input string until it reaches the length desiredLength
> -- padRight:
> --- padRight(string field, int desiredLength) will append a default character 
> to the input string until it reaches the length desiredLength
> --- padRight(string field, int desiredLength, char c) will append the 
> character c to the input string until it reaches the length desiredLength
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> {
>   "name" : "john smith"
> }
> 
> `padLeft(/name, 15, '@')` => @john smith
> `padLeft(/name, 15)` =>  _john smith
> `padRight(/name, 15, '@')` => john smith@
> `padRight(/name, 15)`=> john smith_
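The proposed semantics can be sketched in plain Java (an illustrative sketch of the proposal, not the actual RecordPath implementation; the class name is mine):

```java
public class PadFunctions {
    // Prepend c until the string reaches desiredLength; strings already at or
    // beyond that length are returned unchanged.
    static String padLeft(String field, int desiredLength, char c) {
        StringBuilder sb = new StringBuilder();
        for (int i = field.length(); i < desiredLength; i++) {
            sb.append(c);
        }
        return sb.append(field).toString();
    }

    static String padLeft(String field, int desiredLength) {
        return padLeft(field, desiredLength, '_'); // default renderable pad char
    }

    // Append c until the string reaches desiredLength.
    static String padRight(String field, int desiredLength, char c) {
        StringBuilder sb = new StringBuilder(field);
        while (sb.length() < desiredLength) {
            sb.append(c);
        }
        return sb.toString();
    }

    static String padRight(String field, int desiredLength) {
        return padRight(field, desiredLength, '_');
    }

    public static void main(String[] args) {
        System.out.println(padLeft("john smith", 15, '@'));  // @@@@@john smith
        System.out.println(padRight("john smith", 15));      // john smith_____
    }
}
```

Note that "john smith" is 10 characters, so padding to length 15 adds five pad characters, and a string longer than the target length passes through untouched.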



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-31 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6500:
---

Assignee: (was: Alessandro D'Armiento)

> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft:
> --- padLeft(int n) will prepend a default character to the input string 
> until it reaches the length n
> --- padLeft(int n, char c) will prepend the character c to the input string 
> until it reaches the length n
> -- padRight:
> --- padRight(int n) will append a default character to the input string until 
> it reaches the length n
> --- padRight(int n, char c) will append the character c to the input string 
> until it reaches the length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> input = "myString"
> 
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (NIFI-6502) Add padLeft() and padRight() functions to RecordPath

2019-07-31 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6502:
---

Assignee: Alessandro D'Armiento

> Add padLeft() and padRight() functions to RecordPath 
> -
>
> Key: NIFI-6502
> URL: https://issues.apache.org/jira/browse/NIFI-6502
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Record path 
> functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
>  don't provide anything to perform string padding
> h1. Improvement Proposal
> - Support two new recordPath functions
> -- padLeft:
> --- padLeft(string field, int desiredLength) will prepend a default character 
> to the input string until it reaches the length desiredLength
> --- padLeft(string field, int desiredLength, char c) will prepend the 
> character c to the input string until it reaches the length desiredLength
> -- padRight:
> --- padRight(string field, int desiredLength) will append a default character 
> to the input string until it reaches the length desiredLength
> --- padRight(string field, int desiredLength, char c) will append the 
> character c to the input string until it reaches the length desiredLength
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> {
>   "name" : "john smith"
> }
> 
> `padLeft(/name, 15, '@')` => @john smith
> `padLeft(/name, 15)` =>  _john smith
> `padRight(/name, 15, '@')` => john smith@
> `padRight(/name, 15)`=> john smith_



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-31 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento reassigned NIFI-6500:
---

Assignee: Alessandro D'Armiento

> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft:
> --- padLeft(int n) will prepend a default character to the input string 
> until it reaches the length n
> --- padLeft(int n, char c) will prepend the character c to the input string 
> until it reaches the length n
> -- padRight:
> --- padRight(int n) will append a default character to the input string until 
> it reaches the length n
> --- padRight(int n, char c) will append the character c to the input string 
> until it reaches the length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> input = "myString"
> 
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NIFI-6490) MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable registry expression language

2019-07-31 Thread Alessandro D'Armiento (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896898#comment-16896898
 ] 

Alessandro D'Armiento commented on NIFI-6490:
-

[~ijokarumawak] Thank you very much! I'm glad this was helpful!

> MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable 
> registry expression language
> --
>
> Key: NIFI-6490
> URL: https://issues.apache.org/jira/browse/NIFI-6490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Assignee: Alessandro D'Armiento
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> MergeRecord allows two properties, MIN_RECORDS and MAX_RECORDS, to define how 
> many records a merged bin can contain. 
> These properties, however, do not support expression language and cannot be 
> inserted from variables.
> h2. Improvement Proposal
> Accept variable registry in these properties



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6502) Add padLeft() and padRight() functions to RecordPath

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6502:

Summary: Add padLeft() and padRight() functions to RecordPath   (was: add 
padLeft() and padRight() functions to RecordPath )

> Add padLeft() and padRight() functions to RecordPath 
> -
>
> Key: NIFI-6502
> URL: https://issues.apache.org/jira/browse/NIFI-6502
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Record path 
> functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
>  don't provide anything to perform string padding
> h1. Improvement Proposal
> - Support two new recordPath functions
> -- padLeft:
> --- padLeft(string field, int desiredLength) will prepend a default character 
> to the input string until it reaches the length desiredLength
> --- padLeft(string field, int desiredLength, char c) will prepend the 
> character c to the input string until it reaches the length desiredLength
> -- padRight:
> --- padRight(string field, int desiredLength) will append a default character 
> to the input string until it reaches the length desiredLength
> --- padRight(string field, int desiredLength, char c) will append the 
> character c to the input string until it reaches the length desiredLength
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> {
>   "name" : "john smith"
> }
> 
> `padLeft(/name, 15, '@')` => @john smith
> `padLeft(/name, 15)` =>  _john smith
> `padRight(/name, 15, '@')` => john smith@
> `padRight(/name, 15)`=> john smith_



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6500:

Description: 
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- Existing solutions to achieve left or right padding are unintuitive
-- E.g.: 
${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft:
--- padLeft(int n) will prepend a default character to the input string 
until it reaches the length n
--- padLeft(int n, char c) will prepend the character c to the input string 
until it reaches the length n
-- padRight:
--- padRight(int n) will append a default character to the input string until 
it reaches the length n
--- padRight(int n, char c) will append the character c to the input string 
until it reaches the length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples

---
input = "myString"
---

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"


  was:
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- Existing solutions to achieve left or right padding are unintuitive
-- E.g.: 
${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters on the left of the string until a certain size 
is reached
--- padLeft(int n) will add a default character at the left of the input string 
until it reaches the length n
--- padLeft(int n, char c) will add the character c at the left of the input 
string until it reaches the length n
-- padRight()
--- padRight(int n) will add a default character at the right of the input 
string until it reaches the length n
--- padRight(int n, char c) will add the character c at the right of the input 
string until it reaches the length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"



> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft:
> --- padLeft(int n) will prepend a default character to the input string 
> until it reaches the length n
> --- padLeft(int n, char c) will prepend the character c to the input string 
> until it reaches the length n
> -- padRight:
> --- padRight(int n) will append a default character to the input string until 
> it reaches the length n
> --- padRight(int n, char c) will append the character c to the input string 
> until it reaches the length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> ---
> input = "myString"
> ---
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6500:

Description: 
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- Existing solutions to achieve left or right padding are unintuitive
-- E.g.: 
${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft:
--- padLeft(int n) will prepend a default character to the input string 
until it reaches the length n
--- padLeft(int n, char c) will prepend the character c to the input string 
until it reaches the length n
-- padRight:
--- padRight(int n) will append a default character to the input string until 
it reaches the length n
--- padRight(int n, char c) will append the character c to the input string 
until it reaches the length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples


input = "myString"


- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"


  was:
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- Existing solutions to achieve left or right padding are unintuitive
-- E.g.: 
${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft:
--- padLeft(int n) will prepend a default character to the input string 
until it reaches the length n
--- padLeft(int n, char c) will prepend the character c to the input string 
until it reaches the length n
-- padRight:
--- padRight(int n) will append a default character to the input string until 
it reaches the length n
--- padRight(int n, char c) will append the character c to the input string 
until it reaches the length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples

---
input = "myString"
---

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"



> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft:
> --- padLeft(int n) will prepend a default character to the input string 
> until it reaches the length n
> --- padLeft(int n, char c) will prepend the character c to the input string 
> until it reaches the length n
> -- padRight:
> --- padRight(int n) will append a default character to the input string until 
> it reaches the length n
> --- padRight(int n, char c) will append the character c to the input string 
> until it reaches the length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> input = "myString"
> 
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6502) add padLeft() and padRight() functions to RecordPath

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6502:

Description: 
h2. Current Situation

- [Record path 
functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
 don't provide anything to perform string padding

h1. Improvement Proposal
- Support two new recordPath functions
-- padLeft:
--- padLeft(string field, int desiredLength) will prepend a default character 
to the input string until it reaches the length desiredLength
--- padLeft(string field, int desiredLength, char c) will prepend the 
character c to the input string until it reaches the length desiredLength
-- padRight:
--- padRight(string field, int desiredLength) will append a default character 
to the input string until it reaches the length desiredLength
--- padRight(string field, int desiredLength, char c) will append the 
character c to the input string until it reaches the length desiredLength
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples


{
  "name" : "john smith"
}


`padLeft(/name, 15, '@')` => @john smith
`padLeft(/name, 15)` =>  _john smith
`padRight(/name, 15, '@')` => john smith@
`padRight(/name, 15)`=> john smith_

  was:
h2. Current Situation

- [Record path 
functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
 don't provide anything to perform string padding

h1. Improvement Proposal
- Support two new recordPath functions
-- padLeft:
--- padLeft(string field, int desiredLength) will prepend a default character 
to the input string until it reaches the length desiredLength
--- padLeft(string field, int desiredLength, char c) will prepend the 
character c to the input string until it reaches the length desiredLength
-- padRight:
--- padRight(string field, int desiredLength) will append a default character 
to the input string until it reaches the length desiredLength
--- padRight(string field, int desiredLength, char c) will append the 
character c to the input string until it reaches the length desiredLength
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples


{
  "name" : "john smith"
}


`padLeft(/name, 15, '@')` | @john smith
`padLeft(/name, 15)` | _john smith
`padRight(/name, 15, '@')` | john smith@
`padRight(/name, 15)` | john smith_


> add padLeft() and padRight() functions to RecordPath 
> -
>
> Key: NIFI-6502
> URL: https://issues.apache.org/jira/browse/NIFI-6502
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Record path 
> functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
>  don't provide anything to perform string padding
> h1. Improvement Proposal
> - Support two new recordPath functions
> -- padLeft:
> --- padLeft(string field, int desiredLength) will prepend a default character 
> to the input string until it reaches the length desiredLength
> --- padLeft(string field, int desiredLength, char c) will prepend the 
> character c to the input string until it reaches the length desiredLength
> -- padRight:
> --- padRight(string field, int desiredLength) will append a default character 
> to the input string until it reaches the length desiredLength
> --- padRight(string field, int desiredLength, char c) will append the 
> character c to the input string until it reaches the length desiredLength
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> 
> {
>   "name" : "john smith"
> }
> 
> `padLeft(/name, 15, '@')` => @john smith
> `padLeft(/name, 15)` =>  _john smith
> `padRight(/name, 15, '@')` => john smith@
> `padRight(/name, 15)`=> john smith_



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (NIFI-6502) add padLeft() and padRight() functions to RecordPath

2019-07-29 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6502:
---

 Summary: add padLeft() and padRight() functions to RecordPath 
 Key: NIFI-6502
 URL: https://issues.apache.org/jira/browse/NIFI-6502
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

- [Record path 
functions|https://nifi.apache.org/docs/nifi-docs/html/record-path-guide.html#functions]
 don't provide anything to perform string padding

h1. Improvement Proposal
- Support two new recordPath functions
-- padLeft:
--- padLeft(String field, int desiredLength) will prepend a default character 
to the input string until it reaches desiredLength
--- padLeft(String field, int desiredLength, char c) will prepend the 
character c to the input string until it reaches desiredLength
-- padRight:
--- padRight(String field, int desiredLength) will append a default character 
to the input string until it reaches desiredLength
--- padRight(String field, int desiredLength, char c) will append the 
character c to the input string until it reaches desiredLength
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples


{
  "name" : "john smith"
}


`padLeft(/name, 15, '@')` => @@@@@john smith
`padLeft(/name, 15)` => _____john smith
`padRight(/name, 15, '@')` => john smith@@@@@
`padRight(/name, 15)` => john smith_____





[jira] [Updated] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6500:

Description: 
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- Existing solutions to achieve left or right padding are unintuitive
-- E.g.: 
${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters to the left of the string until a given 
length is reached
--- padLeft(int n) will add a default character to the left of the input 
string until it reaches length n
--- padLeft(int n, char c) will add the character c to the left of the input 
string until it reaches length n
-- padRight()
--- padRight(int n) will add a default character to the right of the input 
string until it reaches length n
--- padRight(int n, char c) will add the character c to the right of the 
input string until it reaches length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"
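The proposed methods, and the contrast with the existing workaround, can be sketched in Python (illustrative names; the transcription of the workaround assumes the pad characters inside `prepend('')` were eaten by Jira's markup and that it pads to a width of 4, matching the `plus(4)`):

```python
def pad_left(value: str, n: int, c: str = "_") -> str:
    # Proposed padLeft(n, c): prepend c until length n; no-op when
    # the input is already n characters or longer.
    return value if len(value) >= n else c * (n - len(value)) + value


def workaround_pad_left(prop: str) -> str:
    # Rough transcription of the existing EL workaround, assuming four
    # pad characters were intended inside prepend(''):
    #   ${prop:prepend('____'):substring(${prop:length()},${prop:length():plus(4)})}
    padded = "____" + prop
    return padded[len(prop):len(prop) + 4]
```

`pad_left("myString", 10, "#")` gives `"##myString"`. The workaround pads correctly only for inputs of at most 4 characters (`workaround_pad_left("ab")` gives `"__ab"`); for longer inputs such as `"myString"` it returns an unrelated substring (`"ring"`), which is part of why it is unintuitive.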


  was:
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- [Existing 
solutions|https://community.hortonworks.com/questions/99856/nifi-padding-string.html]
 to achieve left or right padding are unintuitive and could require to 
instantiate additional processors

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters to the left of the string until a given 
length is reached
--- padLeft(int n) will add a default character to the left of the input 
string until it reaches length n
--- padLeft(int n, char c) will add the character c to the left of the input 
string until it reaches length n
-- padRight()
--- padRight(int n) will add a default character to the right of the input 
string until it reaches length n
--- padRight(int n, char c) will add the character c to the right of the 
input string until it reaches length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"



> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - Existing solutions to achieve left or right padding are unintuitive
> -- E.g.: 
> ${prop:prepend(''):substring(${prop:length()},${prop:length():plus(4)})}
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft() will add characters to the left of the string until a given 
> length is reached
> --- padLeft(int n) will add a default character to the left of the input 
> string until it reaches length n
> --- padLeft(int n, char c) will add the character c to the left of the input 
> string until it reaches length n
> -- padRight()
> --- padRight(int n) will add a default character to the right of the input 
> string until it reaches length n
> --- padRight(int n, char c) will add the character c to the right of the 
> input string until it reaches length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> input = "myString"
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"





[jira] [Created] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-29 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6500:
---

 Summary: Add padLeft() and padRight() functions to expression 
language
 Key: NIFI-6500
 URL: https://issues.apache.org/jira/browse/NIFI-6500
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- [Existing 
solutions|https://community.hortonworks.com/questions/99856/nifi-padding-string.html]
 to achieve left or right padding are unintuitive and could require to 
instantiate additional processors

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters to the left of the string until a given 
length is reached
--- padLeft(int n) will add a default character to the left of the input 
string until it reaches length n
--- padLeft(int n, char c) will add the character c to the left of the input 
string until it reaches length n
-- padRight()
--- padRight(int n) will add a default character to the right of the input 
string until it reaches length n
--- padRight(int n, char c) will add the character c to the right of the 
input string until it reaches length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

${input:padLeft(10, '#')} => "##myString"
${input:padRight(10, '#')} => "myString##"
${input:padLeft(10)} => "__myString"
${input:padRight(10)} => "myString__"






[jira] [Updated] (NIFI-6500) Add padLeft() and padRight() functions to expression language

2019-07-29 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6500:

Description: 
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- [Existing 
solutions|https://community.hortonworks.com/questions/99856/nifi-padding-string.html]
 to achieve left or right padding are unintuitive and could require to 
instantiate additional processors

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters to the left of the string until a given 
length is reached
--- padLeft(int n) will add a default character to the left of the input 
string until it reaches length n
--- padLeft(int n, char c) will add the character c to the left of the input 
string until it reaches length n
-- padRight()
--- padRight(int n) will add a default character to the right of the input 
string until it reaches length n
--- padRight(int n, char c) will add the character c to the right of the 
input string until it reaches length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

- ${input:padLeft(10, '#')} => "##myString"
- ${input:padRight(10, '#')} => "myString##"
- ${input:padLeft(10)} => "__myString"
- ${input:padRight(10)} => "myString__"


  was:
h2. Current Situation

- [Expression Language string 
manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
 doesn't have anything to perform string padding

- [Existing 
solutions|https://community.hortonworks.com/questions/99856/nifi-padding-string.html]
 to achieve left or right padding are unintuitive and could require to 
instantiate additional processors

h1. Improvement Proposal
- Support two new expression language methods
-- padLeft() will add characters to the left of the string until a given 
length is reached
--- padLeft(int n) will add a default character to the left of the input 
string until it reaches length n
--- padLeft(int n, char c) will add the character c to the left of the input 
string until it reaches length n
-- padRight()
--- padRight(int n) will add a default character to the right of the input 
string until it reaches length n
--- padRight(int n, char c) will add the character c to the right of the 
input string until it reaches length n
-- Default character should be a renderable character such as underscore
-- If the input string is already longer than the padding length, no operation 
should be performed 

h3. Examples
input = "myString"

${input:padLeft(10, '#')} => "##myString"
${input:padRight(10, '#')} => "myString##"
${input:padLeft(10)} => "__myString"
${input:padRight(10)} => "myString__"



> Add padLeft() and padRight() functions to expression language
> -
>
> Key: NIFI-6500
> URL: https://issues.apache.org/jira/browse/NIFI-6500
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> - [Expression Language string 
> manipulation|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#strings]
>  doesn't have anything to perform string padding
> - [Existing 
> solutions|https://community.hortonworks.com/questions/99856/nifi-padding-string.html]
>  to achieve left or right padding are unintuitive and could require to 
> instantiate additional processors
> h1. Improvement Proposal
> - Support two new expression language methods
> -- padLeft() will add characters to the left of the string until a given 
> length is reached
> --- padLeft(int n) will add a default character to the left of the input 
> string until it reaches length n
> --- padLeft(int n, char c) will add the character c to the left of the input 
> string until it reaches length n
> -- padRight()
> --- padRight(int n) will add a default character to the right of the input 
> string until it reaches length n
> --- padRight(int n, char c) will add the character c to the right of the 
> input string until it reaches length n
> -- Default character should be a renderable character such as underscore
> -- If the input string is already longer than the padding length, no 
> operation should be performed 
> h3. Examples
> input = "myString"
> - ${input:padLeft(10, '#')} => "##myString"
> - ${input:padRight(10, '#')} => "myString##"
> - ${input:padLeft(10)} => "__myString"
> - ${input:padRight(10)} => "myString__"




[jira] [Updated] (NIFI-6490) MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable registry expression language

2019-07-26 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6490:

Component/s: Core Framework

> MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable 
> registry expression language
> --
>
> Key: NIFI-6490
> URL: https://issues.apache.org/jira/browse/NIFI-6490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> MergeRecord exposes two properties, MIN_RECORDS and MAX_RECORDS, to define 
> how many records a merged bin can contain. 
> These properties, however, do not support expression language and cannot be 
> inserted from variables.
> h2. Improvement Proposal
> Accept variable registry in these properties





[jira] [Created] (NIFI-6490) MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable registry expression language

2019-07-26 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6490:
---

 Summary: MergeRecord properties MIN_RECORDS and MAX_RECORDS should 
accept variable registry expression language
 Key: NIFI-6490
 URL: https://issues.apache.org/jira/browse/NIFI-6490
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

MergeRecord exposes two properties, MIN_RECORDS and MAX_RECORDS, to define how 
many records a merged bin can contain. 
These properties, however, do not support expression language and cannot be 
inserted from variables.

h2. Improvement Proposal

Accept variable registry in these properties






[jira] [Updated] (NIFI-6476) TestLuceneEventIndex.testUnauthorizedEventsGetPlaceholdersForExpandChildren() fails when it's not intended to

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6476:

Description: 
Depending on the machine I run the tests on, with the `mvn -Pcontrib-check 
clean install` command, I either pass or fail the test with `expected <5>, 
actual <8>`
It would be useful to investigate whether this test relies on other active 
components which may interfere with its execution

  was:Depending on the machine I run the tests on, with the `mvn 
-Pcontrib-check clean install` command, I either pass or fail the test with 
`expected <5>, actual <8>`


> TestLuceneEventIndex.testUnauthorizedEventsGetPlaceholdersForExpandChildren() 
> fails when it's not intended to
> -
>
> Key: NIFI-6476
> URL: https://issues.apache.org/jira/browse/NIFI-6476
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> Depending on the machine I run the tests on, with the `mvn -Pcontrib-check 
> clean install` command, I either pass or fail the test with `expected <5>, 
> actual <8>`
> It would be useful to investigate whether this test relies on other 
> active components which may interfere with its execution





[jira] [Created] (NIFI-6476) TestLuceneEventIndex.testUnauthorizedEventsGetPlaceholdersForExpandChildren() fails when it's not intended to

2019-07-23 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6476:
---

 Summary: 
TestLuceneEventIndex.testUnauthorizedEventsGetPlaceholdersForExpandChildren() 
fails when it's not intended to
 Key: NIFI-6476
 URL: https://issues.apache.org/jira/browse/NIFI-6476
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


Depending on the machine I run the tests on, with the `mvn -Pcontrib-check 
clean install` command, I either pass or fail the test with `expected <5>, 
actual <8>`





[jira] [Updated] (NIFI-6461) FileTransfer accepts remote_path property as FlowFileAttribute but documentation only says it accepts variable registry

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Description: 
h2. Current Situation

The documentation for FileTransfer based processors (e.g. PutSFTP) states that 
they accept the remote_path property only from the variable registry. 
In practice, PutFTP and PutSFTP accept FlowFile-attribute based expressions, 
while ListFTP and ListSFTP don't.

h2. Improvement Proposal 

Fix documentation

  was:
h2. Current Situation

FileTransfer based processors (ex: PutSFTP) only accept remote path variables 
from the variable registry. Each flowfile sent through those processors will be 
sent in the same directory.
 
h2. Improvement Proposal 

It would be nice to be able to send flowfile to different locations based on 
their attributes, for example partitioning those files. 


> FileTransfer accepts remote_path property as FlowFileAttribute but 
> documentation only says it accepts variable registry
> ---
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> The documentation for FileTransfer based processors (e.g. PutSFTP) states 
> that they accept the remote_path property only from the variable registry. 
> In practice, PutFTP and PutSFTP accept FlowFile-attribute based expressions, 
> while ListFTP and ListSFTP don't.
> h2. Improvement Proposal 
> Fix documentation





[jira] [Updated] (NIFI-6461) FileTransfer accepts remote_path property as FlowFileAttribute but documentation only says it accepts variable registry

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Summary: FileTransfer accepts remote_path property as FlowFileAttribute but 
documentation only says it accepts variable registry  (was: FileTransfer 
accepts remote_path property as FlowFileAttribute but documentation only says 
it accepts )

> FileTransfer accepts remote_path property as FlowFileAttribute but 
> documentation only says it accepts variable registry
> ---
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> FileTransfer based processors (ex: PutSFTP) only accept remote path variables 
> from the variable registry. Each flowfile sent through those processors will 
> be sent in the same directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send flowfile to different locations based on 
> their attributes, for example partitioning those files. 





[jira] [Updated] (NIFI-6461) FileTransfer accepts remote_path property as FlowFileAttribute but documentation only says it accepts

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Summary: FileTransfer accepts remote_path property as FlowFileAttribute but 
documentation only says it accepts   (was: FileTransfer remote path should 
accept FlowFile attributes as expression variable scope)

> FileTransfer accepts remote_path property as FlowFileAttribute but 
> documentation only says it accepts 
> --
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> FileTransfer based processors (ex: PutSFTP) only accept remote path variables 
> from the variable registry. Each flowfile sent through those processors will 
> be sent in the same directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send flowfile to different locations based on 
> their attributes, for example partitioning those files. 





[jira] [Closed] (NIFI-6465) ListHDFS: skip last should be optional

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento closed NIFI-6465.
---

Not needed because of the existence of GetHDFSFileInfo

> ListHDFS: skip last should be optional
> --
>
> Key: NIFI-6465
> URL: https://issues.apache.org/jira/browse/NIFI-6465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h2. Current Situation
> From [official 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]
> * Each time a listing is performed, the files with the latest timestamp will 
> be excluded and picked up during the next execution of the processor. This is 
> done to ensure that we do not miss any files, or produce duplicates, in the 
> cases where files with the same timestamp are written immediately before and 
> after a single execution of the processor.
> h2. Improvement Proposal
> * If we are calling the ListHDFS only after a certain operation which 
> populates an HDFS directory has finished, it is pointless to skip the last 
> file, and avoiding this behavior is tricky.
> * A mandatory property "skip last" should be implemented in order to be able 
> to actively decide whether or not this behavior is necessary, based on the 
> use case.
> * This is also particularly useful in combination with [NIFI-6462]
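The exclusion rule quoted above, together with the proposed opt-out, can be sketched in Python (illustrative names and data shapes, not the actual ListHDFS implementation):

```python
def filter_listing(files, skip_last=True):
    """Return the listable entries from `files`, a list of
    (path, modification_timestamp) tuples.

    With skip_last=True (the current behavior), entries sharing the
    latest timestamp are excluded so the next run can pick them up
    without missing files or producing duplicates. With skip_last=False
    (the proposed property), every entry is listed.
    """
    if not skip_last or not files:
        return list(files)
    latest = max(ts for _, ts in files)
    return [(path, ts) for path, ts in files if ts < latest]
```

Note that when two files share the latest timestamp, both are held back: `filter_listing([("a", 1), ("b", 2), ("c", 2)])` keeps only `("a", 1)`.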





[jira] [Resolved] (NIFI-6462) ListHDFS should be triggerable

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento resolved NIFI-6462.
-
Resolution: Not A Problem

> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> It should be possible to use ListHDFS as part of a pipeline even when it is 
> not the entry point. To achieve this:
>  * It has to be triggerable
>  * The trigger FlowFile should be able to carry the listing directory as an 
> attribute





[jira] [Resolved] (NIFI-6465) ListHDFS: skip last should be optional

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento resolved NIFI-6465.
-
Resolution: Not A Problem

> ListHDFS: skip last should be optional
> --
>
> Key: NIFI-6465
> URL: https://issues.apache.org/jira/browse/NIFI-6465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h2. Current Situation
> From [official 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]
> * Each time a listing is performed, the files with the latest timestamp will 
> be excluded and picked up during the next execution of the processor. This is 
> done to ensure that we do not miss any files, or produce duplicates, in the 
> cases where files with the same timestamp are written immediately before and 
> after a single execution of the processor.
> h2. Improvement Proposal
> * If we are calling the ListHDFS only after a certain operation which 
> populates an HDFS directory has finished, it is pointless to skip the last 
> file, and avoiding this behavior is tricky.
> * A mandatory property "skip last" should be implemented in order to be able 
> to actively decide whether or not this behavior is necessary, based on the 
> use case.
> * This is also particularly useful in combination with [NIFI-6462]





[jira] [Closed] (NIFI-6462) ListHDFS should be triggerable

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento closed NIFI-6462.
---

Already implemented in GetHDFSFileInfo

> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> It should be possible to use ListHDFS as part of a pipeline even when it is 
> not the entry point. To achieve this:
>  * It has to be triggerable
>  * The trigger FlowFile should be able to carry the listing directory as an 
> attribute





[jira] [Resolved] (NIFI-6464) ListHDFS should support fragment attributes with strategies

2019-07-23 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento resolved NIFI-6464.
-
Resolution: Not A Problem

> ListHDFS should support fragment attributes with strategies
> ---
>
> Key: NIFI-6464
> URL: https://issues.apache.org/jira/browse/NIFI-6464
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS doesn't support Fragmentation attributes
> h2. Improvement Proposal
>  * Since the processor works on a 1:N semantic (1 input trigger flowfile, N 
> output flowfiles) it would be nice to support fragmentation attributes (for 
> example for subsequent merge operations)
>  ** It would also be useful to support different fragmentation strategies, in 
> order to support multiple use cases. For example, it should be possible to 
> select:
>  *** A "one for all" fragmentation strategy which will create a single 
> fragmentation group. Therefore, all files will have the same 
> fragment.identifier, the same fragment.count, equal to the total number N of 
> listed files, and fragment.index ∈ [0, N).
>  *** A "per subdir" fragmentation strategy which will create different 
> fragmentation groups, one for each scanned subdirectory of the given path. 
> Therefore, for each subfolder, flowfiles will have a specific 
> fragment.identifier, fragment.count will be, for each flowfile, equal to the 
> number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).
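The two proposed strategies can be sketched in Python (a hypothetical helper; the attribute names follow NiFi's fragment.* convention, everything else is illustrative):

```python
import os
from uuid import uuid4


def assign_fragments(paths, strategy="one_for_all"):
    """Map each listed path to the fragment.* attributes it would carry.

    "one_for_all" puts every file in a single fragment group;
    "per_subdir" opens one group per parent directory.
    """
    if strategy == "one_for_all":
        groups = {"all": list(paths)}
    elif strategy == "per_subdir":
        groups = {}
        for path in paths:
            groups.setdefault(os.path.dirname(path), []).append(path)
    else:
        raise ValueError(f"unknown strategy: {strategy}")

    attributes = {}
    for members in groups.values():
        identifier = str(uuid4())  # one identifier per fragment group
        for index, path in enumerate(members):
            attributes[path] = {
                "fragment.identifier": identifier,
                "fragment.count": len(members),
                "fragment.index": index,
            }
    return attributes
```

With `["/data/d1/a", "/data/d1/b", "/data/d2/c"]`, "one_for_all" gives every file fragment.count 3 under one identifier, while "per_subdir" gives the two d1 files one group (count 2) and the d2 file its own (count 1).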





[jira] [Updated] (NIFI-6465) ListHDFS: skip last should be optional

2019-07-22 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6465:

Description: 
h2. Current Situation

From [official 
documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]

* Each time a listing is performed, the files with the latest timestamp will be 
excluded and picked up during the next execution of the processor. This is done 
to ensure that we do not miss any files, or produce duplicates, in the cases 
where files with the same timestamp are written immediately before and after a 
single execution of the processor.

h2. Improvement Proposal

* If we are calling the ListHDFS only after a certain operation which populates 
an HDFS directory has finished, it is pointless to skip the last file, and 
avoiding this behavior is tricky.
* A mandatory property "skip last" should be implemented in order to be able to 
actively decide whether or not this behavior is necessary, based on the use 
case.
* This is also particularly useful in combination with [NIFI-6462]


  was:
h2. Current Situation

From [official 
documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]

* Each time a listing is performed, the files with the latest timestamp will be 
excluded and picked up during the next execution of the processor. This is done 
to ensure that we do not miss any files, or produce duplicates, in the cases 
where files with the same timestamp are written immediately before and after a 
single execution of the processor.

h2. Improvement Proposal

* If we are calling the ListHDFS only after a certain operation which populates 
an HDFS directory has finished, it is pointless to skip the last file, and 
avoiding this behavior is tricky.
* A mandatory property "skip last" should be implemented in order to be able to 
actively decide whether or not this behavior is necessary, based on the use 
case.
* This is also particularly useful in combination with 
[NIFI-6462|https://issues.apache.org/jira/browse/NIFI-6462]



> ListHDFS: skip last should be optional
> --
>
> Key: NIFI-6465
> URL: https://issues.apache.org/jira/browse/NIFI-6465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> From [official 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]
> * Each time a listing is performed, the files with the latest timestamp will 
> be excluded and picked up during the next execution of the processor. This is 
> done to ensure that we do not miss any files, or produce duplicates, in the 
> cases where files with the same timestamp are written immediately before and 
> after a single execution of the processor.
> h2. Improvement Proposal
> * If we are calling the ListHDFS only after a certain operation which 
> populates an HDFS directory has finished, it is pointless to skip the last 
> file, and avoiding this behavior is tricky.
> * A mandatory property "skip last" should be implemented in order to be able 
> to actively decide whether or not this behavior is necessary, based on the 
> use case.
> * This is also particularly useful in combination with [NIFI-6462]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6465) ListHDFS: skip last should be optional

2019-07-22 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6465:

Description: 
h2. Current Situation

From the [official documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]:

* Each time a listing is performed, the files with the latest timestamp will be 
excluded and picked up during the next execution of the processor. This is done 
to ensure that we do not miss any files, or produce duplicates, in the cases 
where files with the same timestamp are written immediately before and after a 
single execution of the processor.

h2. Improvement Proposal

* If ListHDFS is invoked only after the operation that populates an HDFS 
directory has finished, skipping the last file is pointless, and avoiding this 
behavior is tricky.
* A mandatory "skip last" property should be implemented so that users can 
actively decide, based on their use case, whether this behavior is needed.
* This is also particularly useful in combination with 
[NIFI-6462|https://issues.apache.org/jira/browse/NIFI-6462]


  was:
h2. Current Situation

From the [official documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]:

* Each time a listing is performed, the files with the latest timestamp will be 
excluded and picked up during the next execution of the processor. This is done 
to ensure that we do not miss any files, or produce duplicates, in the cases 
where files with the same timestamp are written immediately before and after a 
single execution of the processor.

h2. Improvement Proposal

* If we are calling the ListHDFS only after a certain operation which populates 
an HDFS directory has finished, it is pointless to skip the last file, and 
avoiding this behavior is tricky.
* A mandatory property "skip last" should be implemented in order to be able to 
actively decide whether or not this behavior is necessary, based on the use 
case.



> ListHDFS: skip last should be optional
> --
>
> Key: NIFI-6465
> URL: https://issues.apache.org/jira/browse/NIFI-6465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> From [official 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]
> * Each time a listing is performed, the files with the latest timestamp will 
> be excluded and picked up during the next execution of the processor. This is 
> done to ensure that we do not miss any files, or produce duplicates, in the 
> cases where files with the same timestamp are written immediately before and 
> after a single execution of the processor.
> h2. Improvement Proposal
> * If we are calling the ListHDFS only after a certain operation which 
> populates an HDFS directory has finished, it is pointless to skip the last 
> file, and avoiding this behavior is tricky.
> * A mandatory property "skip last" should be implemented in order to be able 
> to actively decide whether or not this behavior is necessary, based on the 
> use case.
> * This is also particularly useful in combination with 
> [NIFI-6462|https://issues.apache.org/jira/browse/NIFI-6462]





[jira] [Created] (NIFI-6465) ListHDFS: skip last should be optional

2019-07-22 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6465:
---

 Summary: ListHDFS: skip last should be optional
 Key: NIFI-6465
 URL: https://issues.apache.org/jira/browse/NIFI-6465
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

From the [official documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.2/org.apache.nifi.processors.hadoop.ListHDFS/index.html]:

* Each time a listing is performed, the files with the latest timestamp will be 
excluded and picked up during the next execution of the processor. This is done 
to ensure that we do not miss any files, or produce duplicates, in the cases 
where files with the same timestamp are written immediately before and after a 
single execution of the processor.

h2. Improvement Proposal

* If ListHDFS is invoked only after the operation that populates an HDFS 
directory has finished, skipping the last file is pointless, and avoiding this 
behavior is tricky.
* A mandatory "skip last" property should be implemented so that users can 
actively decide, based on their use case, whether this behavior is needed.






[jira] [Updated] (NIFI-6462) ListHDFS should be triggerable

2019-07-22 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6462:

Description: 
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be scheduled on a cron or timer basis.
h2. Improvement Proposal

ListHDFS should be usable in the middle of a pipeline, not only as its entry 
point. To achieve this:
 * It has to be triggerable by an incoming flowfile
 * The trigger flowfile should be able to carry the listing directory as an 
attribute
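How a trigger flowfile could supply the directory can be sketched as follows (hypothetical Python, not NiFi code; the function and the attribute name `listing.directory` are invented for illustration):

```python
from typing import Dict, Optional


def resolve_listing_directory(flowfile_attributes: Dict[str, str],
                              configured_directory: Optional[str] = None,
                              attribute_name: str = "listing.directory") -> str:
    """Sketch of a resolution order for a triggerable ListHDFS:
    an attribute on the trigger flowfile wins, falling back to the
    processor's configured Directory property."""
    directory = flowfile_attributes.get(attribute_name) or configured_directory
    if directory is None:
        raise ValueError("no listing directory on trigger flowfile or processor")
    return directory


# Attribute on the trigger flowfile overrides the configured default.
print(resolve_listing_directory({"listing.directory": "/landing/run1"}, "/landing/default"))
print(resolve_listing_directory({}, "/landing/default"))
```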

  was:
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it:
 * It has to be triggerable
 * Trigger flowfile should be able to bring the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 ** Because if you are triggering the execution of ListHDFS and you are sure 
that the job which writes to the listing folder has finished, it is pointless 
to hold back a file for the next execution


> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> ListHDFS should be able to be used as part of your pipeline even if you do 
> not expect to have it as the entry point. To obtain it:
>  * It has to be triggerable
>  * Trigger flowfile should be able to bring the listing directory as an 
> attribute





[jira] [Created] (NIFI-6464) ListHDFS should support fragment attributes with strategies

2019-07-22 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6464:
---

 Summary: ListHDFS should support fragment attributes with 
strategies
 Key: NIFI-6464
 URL: https://issues.apache.org/jira/browse/NIFI-6464
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

ListHDFS doesn't support fragmentation attributes.

h2. Improvement Proposal

 * Since the processor works on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
 ** It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
 *** A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
 *** A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).
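The two proposed strategies can be sketched as follows (a hypothetical Python model of the attribute assignment, not NiFi code; `assign_fragment_attributes` and the strategy names are invented, while the `fragment.*` keys follow NiFi's standard fragment-attribute convention):

```python
import os
import uuid
from collections import defaultdict
from typing import Dict, List


def assign_fragment_attributes(paths: List[str],
                               strategy: str = "one_for_all") -> Dict[str, dict]:
    """Return one fragment.* attribute dict per listed path."""
    results: Dict[str, dict] = {}
    if strategy == "one_for_all":
        # Single group: shared identifier, count = total number N of files,
        # index in [0, N).
        group_id = str(uuid.uuid4())
        for index, path in enumerate(paths):
            results[path] = {"fragment.identifier": group_id,
                             "fragment.count": len(paths),
                             "fragment.index": index}
    elif strategy == "per_subdir":
        # One group per parent directory: count = Ni files in that
        # directory, index in [0, Ni).
        by_dir = defaultdict(list)
        for path in paths:
            by_dir[os.path.dirname(path)].append(path)
        for members in by_dir.values():
            group_id = str(uuid.uuid4())
            for index, path in enumerate(members):
                results[path] = {"fragment.identifier": group_id,
                                 "fragment.count": len(members),
                                 "fragment.index": index}
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return results
```

Under "one for all", a downstream MergeContent in defragment mode would recombine the whole listing; under "per subdir", it would recombine one group per subdirectory.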





[jira] [Updated] (NIFI-6462) ListHDFS should be triggerable

2019-07-22 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6462:

Description: 
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be usable in the middle of a pipeline, not only as its entry 
point. To achieve this:
 * It has to be triggerable by an incoming flowfile
 * The trigger flowfile should be able to carry the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 ** Because if you are triggering the execution of ListHDFS and you are sure 
that the job which writes to the listing folder has finished, it is pointless 
to hold back a file for the next execution

  was:
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it:
 * It has to be triggerable
 * Trigger flowfile should be able to bring the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 * Since the processor will work on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
 ** It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
 *** A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
 *** A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).


> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> ListHDFS should be able to be used as part of your pipeline even if you do 
> not expect to have it as the entry point. To obtain it:
>  * It has to be triggerable
>  * Trigger flowfile should be able to bring the listing directory as an 
> attribute
>  * Some logic, such as the "skip the last file in the listing directory" 
> should be made optional
>  ** Because if you are triggering the execution of ListHDFS and you are 
> sure that the job which writes to the listing folder has finished, it is 
> pointless to hold back a file for the next execution





[jira] [Updated] (NIFI-6461) FileTransfer remote path should accept FlowFile attributes as expression variable scope

2019-07-21 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Description: 
h2. Current Situation

FileTransfer-based processors (e.g. PutSFTP) only accept remote path variables 
from the variable registry. Every flowfile sent through such a processor ends 
up in the same directory.

h2. Improvement Proposal

It would be nice to be able to send flowfiles to different locations based on 
their attributes, for example to partition those files.
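The requested behavior amounts to evaluating the Remote Path against each flowfile's attributes. A minimal sketch of such `${attribute}` substitution (hypothetical Python, not NiFi's actual Expression Language evaluator; `resolve_remote_path` is an invented name):

```python
import re
from typing import Dict


def resolve_remote_path(path_template: str, attributes: Dict[str, str]) -> str:
    """Expand NiFi-style ${attribute} references in a Remote Path template
    against one flowfile's attributes, so each flowfile can land in its
    own (e.g. partitioned) directory."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in attributes:
            raise KeyError(f"flowfile has no attribute '{name}'")
        return attributes[name]

    return re.sub(r"\$\{([^}]+)\}", substitute, path_template)


attrs = {"year": "2019", "month": "07"}
print(resolve_remote_path("/upload/${year}/${month}", attrs))  # per-flowfile partition
```

A Remote Path such as /upload/${year}/${month} would then partition uploads per flowfile instead of sending everything to one directory.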

  was:
h2. Current Situation

FileTransfer based processors (ex: PutSFTP) only accept remote path variable 
from the variable registry. Each flowfile sent through those processors will be 
sent in the same directory.
 
h2. Improvement Proposal 

It would be nice to be able to send flowfile to different locations based on 
their attributes, for example partitioning those files. 


> FileTransfer remote path should accept FlowFile attributes as expression 
> variable scope
> ---
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> FileTransfer based processors (ex: PutSFTP) only accept remote path variables 
> from the variable registry. Each flowfile sent through those processors will 
> be sent in the same directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send flowfile to different locations based on 
> their attributes, for example partitioning those files. 





[jira] [Updated] (NIFI-6461) FileTransfer remote path should accept FlowFile attributes as expression variable scope

2019-07-21 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Description: 
h2. Current Situation

FileTransfer based processors (ex: PutSFTP) only accept remote path variable 
from the variable registry. Each flowfile sent through those processors will be 
sent in the same directory.
 
h2. Improvement Proposal 

It would be nice to be able to send flowfile to different locations based on 
their attributes, for example partitioning those files. 

  was:
h2. Current Situation

PutSFTP processor only accepts variables from the variable registry. Each 
flowfile sent through the PutSFTP processor will be sent in the same directory.
 
h2. Improvement Proposal 

It would be nice to be able to send flowfile to different locations based on 
their attributes, for example partitioning those files. 


> FileTransfer remote path should accept FlowFile attributes as expression 
> variable scope
> ---
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> FileTransfer based processors (ex: PutSFTP) only accept remote path variable 
> from the variable registry. Each flowfile sent through those processors will 
> be sent in the same directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send flowfile to different locations based on 
> their attributes, for example partitioning those files. 





[jira] [Updated] (NIFI-6461) FileTransfer remote path should accept FlowFile attributes as expression variable scope

2019-07-21 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Summary: FileTransfer remote path should accept FlowFile attributes as 
expression variable scope  (was: PutSFTP remote path should accept FlowFile 
attributes as expression variable scope)

> FileTransfer remote path should accept FlowFile attributes as expression 
> variable scope
> ---
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> PutSFTP processor only accepts variables from the variable registry. Each 
> flowfile sent through the PutSFTP processor will be sent in the same 
> directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send flowfile to different locations based on 
> their attributes, for example partitioning those files. 





[jira] [Updated] (NIFI-6462) ListHDFS should be triggerable

2019-07-20 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6462:

Description: 
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it:
 * It has to be triggerable
 * Trigger flowfile should be able to bring the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 * Since the processor will work on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
 ** It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
 *** A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
 *** A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).

  was:
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it:
 * It has to be triggerable
 * Trigger flowfile should be able to bring the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 * Since the processor will work on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
 * It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
 * A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
 * A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).


> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> ListHDFS should be able to be used as part of your pipeline even if you do 
> not expect to have it as the entry point. To obtain it:
>  * It has to be triggerable
>  * Trigger flowfile should be able to bring the listing directory as an 
> attribute
>  * Some logic, such as the "skip the last file in the listing directory" 
> should be made optional
>  * Since the processor will work on a 1:N semantic (1 input trigger flowfile, 
> N output flowfiles) it would be nice to support fragmentation attributes (for 
> example for subsequent merge operations)
>  ** It would also be useful to support different fragmentation strategies, in 
> order to cover multiple use cases. For example, it should be possible to 
> select:
>  *** A "one for all" fragmentation strategy which will create a single 
> fragmentation group. Therefore, all files will have the same 
> fragment.identifier, the same fragment.count, equal to the total number N of 
> listed files, and fragment.index ∈ [0, N).
>  *** A "per subdir" fragmentation strategy which will create different 
> fragmentation groups, one for each scanned subdirectory of the given path. 
> Therefore, for each subfolder, flowfiles will have a specific 
> fragment.identifier, fragment.count will be, for each flowfile, equal to the 
> number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).

[jira] [Updated] (NIFI-6462) ListHDFS should be triggerable

2019-07-20 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6462:

Description: 
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base.
h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it:
 * It has to be triggerable
 * Trigger flowfile should be able to bring the listing directory as an 
attribute
 * Some logic, such as the "skip the last file in the listing directory" should 
be made optional
 * Since the processor will work on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
 * It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
 * A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
 * A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).

  was:
h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base. 

h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To obtain it: 
* It has to be triggerable
* Trigger flowfile should be able to bring the listing directory as an attribute
* Some logic, such as the "skip the last file in the listing directory" should 
be made optional
* Since the processor will work on a 1:N semantic (1 input trigger flowfile, N 
output flowfiles) it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
  * It would also be useful to support different fragmentation strategies, in 
order to cover multiple use cases. For example, it should be possible to 
select:
*  A "one for all" fragmentation strategy which will create a single 
fragmentation group. Therefore, all files will have the same 
fragment.identifier, the same fragment.count, equal to the total number N of 
listed files, and fragment.index ∈ [0, N).
*  A "per subdir" fragmentation strategy which will create different 
fragmentation groups, one for each scanned subdirectory of the given path. 
Therefore, for each subfolder, flowfiles will have a specific 
fragment.identifier, fragment.count will be, for each flowfile, equal to the 
number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).



> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base.
> h2. Improvement Proposal
> ListHDFS should be able to be used as part of your pipeline even if you do 
> not expect to have it as the entry point. To obtain it:
>  * It has to be triggerable
>  * Trigger flowfile should be able to bring the listing directory as an 
> attribute
>  * Some logic, such as the "skip the last file in the listing directory" 
> should be made optional
>  * Since the processor will work on a 1:N semantic (1 input trigger flowfile, 
> N output flowfiles) it would be nice to support fragmentation attributes (for 
> example for subsequent merge operations)
>  * It would also be useful to support different fragmentation strategies, in 
> order to cover multiple use cases. For example, it should be possible to 
> select:
>  * A "one for all" fragmentation strategy which will create a single 
> fragmentation group. Therefore, all files will have the same 
> fragment.identifier, the same fragment.count, equal to the total number N of 
> listed files, and fragment.index ∈ [0, N).
>  * A "per subdir" fragmentation strategy which will create different 
> fragmentation groups, one for each scanned subdirectory of the given path. 
> Therefore, for each subfolder, flowfiles will have a specific 
> fragment.identifier, fragment.count will be, for each flowfile, equal to the 
> number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).

[jira] [Updated] (NIFI-6462) ListHDFS should be triggerable

2019-07-20 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6462:

Priority: Minor  (was: Major)

> ListHDFS should be triggerable
> --
>
> Key: NIFI-6462
> URL: https://issues.apache.org/jira/browse/NIFI-6462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> ListHDFS is designed to be (only) the entry point of a data integration 
> pipeline, and therefore can only be triggered on a cron or time base. 
> h2. Improvement Proposal
> ListHDFS should be able to be used as part of your pipeline even if you do 
> not expect to have it as the entry point. To obtain it: 
> * It has to be triggerable
> * Trigger flowfile should be able to bring the listing directory as an 
> attribute
> * Some logic, such as the "skip the last file in the listing directory" 
> should be made optional
> * Since the processor will work on a 1:N semantic (1 input trigger flowfile, 
> N output flowfiles) it would be nice to support fragmentation attributes (for 
> example for subsequent merge operations)
>   * It would also be useful to support different fragmentation strategies, in 
> order to cover multiple use cases. For example, it should be possible to 
> select:
> *  A "one for all" fragmentation strategy which will create a single 
> fragmentation group. Therefore, all files will have the same 
> fragment.identifier, the same fragment.count, equal to the total number N of 
> listed files, and fragment.index ∈ [0, N).
> *  A "per subdir" fragmentation strategy which will create different 
> fragmentation groups, one for each scanned subdirectory of the given path. 
> Therefore, for each subfolder, flowfiles will have a specific 
> fragment.identifier, fragment.count will be, for each flowfile, equal to the 
> number Ni of files in the i-th directory, and fragment.index ∈ [0, Ni).





[jira] [Created] (NIFI-6462) ListHDFS should be triggerable

2019-07-20 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6462:
---

 Summary: ListHDFS should be triggerable
 Key: NIFI-6462
 URL: https://issues.apache.org/jira/browse/NIFI-6462
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


h2. Current Situation

ListHDFS is designed to be (only) the entry point of a data integration 
pipeline, and therefore can only be triggered on a cron or time base. 

h2. Improvement Proposal

ListHDFS should be able to be used as part of your pipeline even if you do not 
expect to have it as the entry point. To achieve this: 
* It has to be triggerable
* The trigger FlowFile should be able to carry the listing directory as an attribute
* Some logic, such as "skip the last file in the listing directory", should be 
made optional
* Since the processor will work with 1:N semantics (1 input trigger FlowFile, N 
output FlowFiles), it would be nice to support fragmentation attributes (for 
example for subsequent merge operations)
  * It would also be useful to support different fragmentation strategies, in 
order to support multiple use cases. For example, it should be possible to 
select:
*  A "one for all" fragmentation strategy, which creates a single 
fragmentation group. All files will have the same fragment.identifier, the same 
fragment.count (equal to the total number N of listed files), and 
fragment.index ∈ [0, N).
*  A "per subdir" fragmentation strategy, which creates a separate 
fragmentation group for each scanned subdirectory of the given path. For each 
subfolder, FlowFiles will share a specific fragment.identifier, fragment.count 
will equal the number Ni of files in the i-th directory, and 
fragment.index ∈ [0, Ni).
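The two strategies above can be sketched as follows. This is a minimal, hypothetical illustration (not NiFi processor code): a FlowFile is modeled as a plain attribute map, and only the standard fragment.identifier / fragment.count / fragment.index attribute names are taken from the proposal; the class and method names are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the two proposed fragmentation strategies.
// A "FlowFile" here is just a map of attributes.
public class FragmentationSketch {

    // "One for all": a single fragment group over all N listed files.
    static List<Map<String, String>> oneForAll(List<String> paths) {
        List<Map<String, String>> flowFiles = new ArrayList<>();
        String groupId = "group-all"; // would be a UUID in practice
        for (int i = 0; i < paths.size(); i++) {
            Map<String, String> attrs = new LinkedHashMap<>();
            attrs.put("path", paths.get(i));
            attrs.put("fragment.identifier", groupId);
            attrs.put("fragment.count", String.valueOf(paths.size()));
            attrs.put("fragment.index", String.valueOf(i));
            flowFiles.add(attrs);
        }
        return flowFiles;
    }

    // "Per subdir": one fragment group per parent directory;
    // fragment.count is the number Ni of files in that directory.
    static List<Map<String, String>> perSubdir(List<String> paths) {
        Map<String, List<String>> byDir = new LinkedHashMap<>();
        for (String p : paths) {
            String dir = p.substring(0, p.lastIndexOf('/'));
            byDir.computeIfAbsent(dir, d -> new ArrayList<>()).add(p);
        }
        List<Map<String, String>> flowFiles = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : byDir.entrySet()) {
            List<String> files = e.getValue();
            for (int i = 0; i < files.size(); i++) {
                Map<String, String> attrs = new LinkedHashMap<>();
                attrs.put("path", files.get(i));
                attrs.put("fragment.identifier", e.getKey());
                attrs.put("fragment.count", String.valueOf(files.size()));
                attrs.put("fragment.index", String.valueOf(i));
                flowFiles.add(attrs);
            }
        }
        return flowFiles;
    }

    public static void main(String[] args) {
        List<String> listing =
            List.of("/data/a/1.csv", "/data/a/2.csv", "/data/b/3.csv");
        System.out.println(oneForAll(listing)); // one group, count=3
        System.out.println(perSubdir(listing)); // two groups: /data/a (2), /data/b (1)
    }
}
```

A downstream MergeContent/MergeRecord in Defragment mode could then reassemble each group using these attributes.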




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (NIFI-6461) PutSFTP remote path should accept FlowFile attributes as expression variable scope

2019-07-20 Thread Alessandro D'Armiento (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro D'Armiento updated NIFI-6461:

Description: 
h2. Current Situation

The PutSFTP processor only accepts variables from the variable registry. Each 
FlowFile sent through the PutSFTP processor is therefore sent to the same directory.
 
h2. Improvement Proposal 

It would be nice to be able to send FlowFiles to different locations based on 
their attributes, for example to partition the uploaded files. 

  was:
## Current Situation

PutSFTP processor only accepts variables from the variable registry. Each 
flowfile sent through the PutSFTP processor will be sent in the same directory.

 

## Improvement Proposal 

It would be nice to be able to send flowfile to different locations based on 
their attributes, for example partitioning those files. 


> PutSFTP remote path should accept FlowFile attributes as expression variable 
> scope
> --
>
> Key: NIFI-6461
> URL: https://issues.apache.org/jira/browse/NIFI-6461
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Alessandro D'Armiento
>Priority: Minor
>
> h2. Current Situation
> The PutSFTP processor only accepts variables from the variable registry. Each 
> FlowFile sent through the PutSFTP processor is therefore sent to the same 
> directory.
>  
> h2. Improvement Proposal 
> It would be nice to be able to send FlowFiles to different locations based on 
> their attributes, for example to partition the uploaded files. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (NIFI-6461) PutSFTP remote path should accept FlowFile attributes as expression variable scope

2019-07-20 Thread Alessandro D'Armiento (JIRA)
Alessandro D'Armiento created NIFI-6461:
---

 Summary: PutSFTP remote path should accept FlowFile attributes as 
expression variable scope
 Key: NIFI-6461
 URL: https://issues.apache.org/jira/browse/NIFI-6461
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Alessandro D'Armiento


## Current Situation

The PutSFTP processor only accepts variables from the variable registry. Each 
FlowFile sent through the PutSFTP processor is therefore sent to the same directory.

 

## Improvement Proposal 

It would be nice to be able to send FlowFiles to different locations based on 
their attributes, for example to partition the uploaded files. 
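A hedged sketch of what the proposed configuration might look like if the Remote Path property gained FlowFile-attribute expression scope. The `${customer}` attribute is illustrative; `${now():format(...)}` is standard NiFi Expression Language.

```
# PutSFTP property (proposed behavior):
Remote Path: /incoming/${customer}/${now():format('yyyy-MM-dd')}

# A FlowFile with attribute customer=acme would then be written under
# /incoming/acme/<today's date> instead of one fixed directory.
```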



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)