[jira] [Comment Edited] (NIFI-7052) UI - Processor details dialog with Advanced button

2020-01-21 Thread Nagasivanath Dasari (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020788#comment-17020788
 ] 

Nagasivanath Dasari edited comment on NIFI-7052 at 1/22/20 5:52 AM:


Can you give the scenario in which you get this issue?

I checked the code; there is a condition that checks whether the current window is 
the top window before displaying the Advanced button:

if (*top === window* && b.isDefinedAndNotNull(h) && 
b.isDefinedAndNotNull(p.config.customUiUrl) && p.config.customUiUrl !== "") 

So in the case of the Summary page, since it is an iframe, this condition should be 
false and the Advanced button shouldn't be shown.


was (Author: nagasivanath):
Can you tell me in which case you are able to get this issue?

I checked the code; there is a condition that checks whether the current window is 
the top window before displaying the Advanced button:

if (*top === window* && b.isDefinedAndNotNull(h) && 
b.isDefinedAndNotNull(p.config.customUiUrl) && p.config.customUiUrl !== "") 

So in the case of the Summary page, since it is an iframe, this condition should be 
false and the Advanced button shouldn't be shown.

> UI - Processor details dialog with Advanced button
> --
>
> Key: NIFI-7052
> URL: https://issues.apache.org/jira/browse/NIFI-7052
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Major
>
> The processor configuration and processor details dialog can optionally 
> contain a button that launches the processor advanced UI. This Advanced 
> button does not work correctly when the Summary page is popped out of the 
> primary UI. I believe that in this case, we should disable/hide the Advanced 
> button feature when the Summary page is popped out.
> {code:java}
> nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
> 'showPage' of undefined
> at nf-custom-ui.js?1.11.0-SNAPSHOT:79
> at c (jquery.min.js:2)
> at Object.add [as done] (jquery.min.js:2)
> at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
> at Function.Deferred (jquery.min.js:2)
> at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
> at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
> at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
> at HTMLDivElement.dispatch (jquery.min.js:2)
> at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7052) UI - Processor details dialog with Advanced button

2020-01-21 Thread Nagasivanath Dasari (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020788#comment-17020788
 ] 

Nagasivanath Dasari commented on NIFI-7052:
---

Can you tell me in which case you are able to get this issue?

I checked the code; there is a condition that checks whether the current window is 
the top window before displaying the Advanced button:

if (*top === window* && b.isDefinedAndNotNull(h) && 
b.isDefinedAndNotNull(p.config.customUiUrl) && p.config.customUiUrl !== "") 

So in the case of the Summary page, since it is an iframe, this condition should be 
false and the Advanced button shouldn't be shown.






[GitHub] [nifi] CHNnoodle closed pull request #4005: Merge pull request #1 from apache/master

2020-01-21 Thread GitBox
CHNnoodle closed pull request #4005: Merge pull request #1 from apache/master
URL: https://github.com/apache/nifi/pull/4005
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] CHNnoodle opened a new pull request #4005: Merge pull request #1 from apache/master

2020-01-21 Thread GitBox
CHNnoodle opened a new pull request #4005: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/nifi/pull/4005
 
 
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Commented] (NIFI-7054) Add RecordSinkServiceLookup

2020-01-21 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020728#comment-17020728
 ] 

Matt Burgess commented on NIFI-7054:


This might involve an interface change to RecordSinkService for `reset()`; we would 
probably need to pass in attributes to pick the right service to reset.
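One possible shape for that change, sketched with hypothetical names (SimpleRecordSink, RecordSinkLookupSketch, and the "record.sink.name" attribute are illustrations, not NiFi API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a lookup wrapper that uses flow file attributes to pick
// which delegate service a reset() applies to. None of these names come from
// the NiFi codebase; this only illustrates the attribute-driven dispatch idea.
interface SimpleRecordSink {
    void sendRecords(String records);
    void reset();
}

class RecordSinkLookupSketch {
    private final Map<String, SimpleRecordSink> sinks = new HashMap<>();

    void register(String name, SimpleRecordSink sink) {
        sinks.put(name, sink);
    }

    // An attribute (e.g. "record.sink.name") selects the delegate to reset.
    void reset(Map<String, String> attributes) {
        SimpleRecordSink sink = sinks.get(attributes.get("record.sink.name"));
        if (sink == null) {
            throw new IllegalArgumentException("No sink registered for attributes: " + attributes);
        }
        sink.reset();
    }
}
```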

> Add RecordSinkServiceLookup
> ---
>
> Key: NIFI-7054
> URL: https://issues.apache.org/jira/browse/NIFI-7054
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The RecordSinkService controller service interface was added in NiFi 1.10 
> (via NIFI-6780) to decouple the destination for records in a FlowFile from 
> the format of those records. Since then there have been various 
> implementations (NIFI-6799, NIFI-6819). Other controller services have been 
> augmented with a "lookup" pattern where the actual CS can be swapped out 
> during the flow based on an attribute/variable (such as 
> DBCPConnectionPoolLookup). 
> RecordSinkService could be improved to have such a lookup as well, especially 
> with the advent of the PutRecord processor (NIFI-6947). This would allow some 
> flow files to be routed to Kafka while others are sent Site-to-Site for 
> example, all with a single configured controller service.





[GitHub] [nifi] shawnweeks commented on issue #3188: NIFI-5829 Create Lookup Controller Services for RecordSetWriter and R…

2020-01-21 Thread GitBox
shawnweeks commented on issue #3188: NIFI-5829 Create Lookup Controller 
Services for RecordSetWriter and R…
URL: https://github.com/apache/nifi/pull/3188#issuecomment-576950195
 
 
   @patricker It looks like several of the attribute-related changes needed to 
complete this were done in other efforts. Do you have any time to update this PR? 
I've got some use cases for it and could spend some time on it if you can't.




[jira] [Assigned] (NIFI-7054) Add RecordSinkServiceLookup

2020-01-21 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-7054:
--

Assignee: Matt Burgess






[jira] [Created] (NIFI-7054) Add RecordSinkServiceLookup

2020-01-21 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-7054:
--

 Summary: Add RecordSinkServiceLookup
 Key: NIFI-7054
 URL: https://issues.apache.org/jira/browse/NIFI-7054
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Matt Burgess


The RecordSinkService controller service interface was added in NiFi 1.10 (via 
NIFI-6780) to decouple the destination for records in a FlowFile from the 
format of those records. Since then there have been various implementations 
(NIFI-6799, NIFI-6819). Other controller services have been augmented with a 
"lookup" pattern where the actual CS can be swapped out during the flow based 
on an attribute/variable (such as DBCPConnectionPoolLookup). 

RecordSinkService could be improved to have such a lookup as well, especially 
with the advent of the PutRecord processor (NIFI-6947). This would allow some 
flow files to be routed to Kafka while others are sent via Site-to-Site, for 
example, all with a single configured controller service.
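The lookup pattern described here can be sketched in simplified form. Everything below (class names, the "record.sink.name" attribute, the string-based sinks) is illustrative, not the NiFi controller-service API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the "lookup" controller-service pattern: the
// actual sink is chosen per flow file from an attribute, so a single
// configured service can route some flow files to one destination and
// others elsewhere. Names here are hypothetical.
class RecordSinkServiceLookupSketch {
    static final String LOOKUP_ATTRIBUTE = "record.sink.name";

    private final Map<String, List<String>> sinks = new HashMap<>();

    void registerSink(String name) {
        sinks.put(name, new ArrayList<>());
    }

    // Route the record to whichever registered sink the attribute names.
    void sendRecord(Map<String, String> flowFileAttributes, String record) {
        String sinkName = flowFileAttributes.get(LOOKUP_ATTRIBUTE);
        List<String> sink = sinks.get(sinkName);
        if (sink == null) {
            throw new IllegalArgumentException("No RecordSink registered as '" + sinkName + "'");
        }
        sink.add(record);
    }

    List<String> recordsFor(String sinkName) {
        return sinks.get(sinkName);
    }
}
```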





[jira] [Updated] (NIFI-7053) Update Toolkit Guide with macOS 10.15 trusted certificate requirements (2048 bit key and max of 825 days of validity)

2020-01-21 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim updated NIFI-7053:

Component/s: Security

> Update Toolkit Guide with macOS 10.15  trusted certificate requirements (2048 
> bit key and max of 825 days of validity)
> --
>
> Key: NIFI-7053
> URL: https://issues.apache.org/jira/browse/NIFI-7053
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Security
>Reporter: Andrew M. Lim
>Assignee: Andrew M. Lim
>Priority: Major
>
> I was testing secured NiFi and NiFi Registry on macOS 10.15.2 using certs 
> generated by the TLS Toolkit. I was able to access the UIs of both apps 
> with Safari, but not with Chrome, due to a NET::ERR_CERT_REVOKED error 
> I had never seen before. It turns out this is a known issue on Catalina 
> ([https://support.apple.com/en-us/HT210176]). macOS 10.15 requires certs to 
> be:
>  * valid for 825 days or less
>  * a minimum 2048 bit key
> By default, the TLS Toolkit sets certificate validity to 1095 days and the 
> generated key size to 2048 bits. Generating new certs with the required 
> 825-day validity solved the issue.
> We should document this in the Toolkit Guide for the Mac users in the NiFi 
> community.
>  





[jira] [Assigned] (NIFI-7053) Update Toolkit Guide with macOS 10.15 trusted certificate requirements (2048 bit key and max of 825 days of validity)

2020-01-21 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim reassigned NIFI-7053:
---

Assignee: Andrew M. Lim






[jira] [Created] (NIFI-7053) Update Toolkit Guide with macOS 10.15 trusted certificate requirements (2048 bit key and max of 825 days of validity)

2020-01-21 Thread Andrew M. Lim (Jira)
Andrew M. Lim created NIFI-7053:
---

 Summary: Update Toolkit Guide with macOS 10.15  trusted 
certificate requirements (2048 bit key and max of 825 days of validity)
 Key: NIFI-7053
 URL: https://issues.apache.org/jira/browse/NIFI-7053
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Andrew M. Lim


I was testing secured NiFi and NiFi Registry on macOS 10.15.2 using certs 
generated by the TLS Toolkit. I was able to access the UIs of both apps with 
Safari, but not with Chrome, due to a NET::ERR_CERT_REVOKED error I had never 
seen before. It turns out this is a known issue on Catalina 
([https://support.apple.com/en-us/HT210176]). macOS 10.15 requires certs to be:
 * valid for 825 days or less
 * a minimum 2048 bit key

By default, the TLS Toolkit sets certificate validity to 1095 days and the 
generated key size to 2048 bits. Generating new certs with the required 825-day 
validity solved the issue.

We should document this in the Toolkit Guide for the Mac users in the NiFi 
community.
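For reference, the two Catalina constraints can be expressed as a simple check. This is illustrative only (not part of the TLS Toolkit; the class and method names are made up):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative check of Apple's macOS 10.15 requirements for trusted TLS
// server certificates: validity of at most 825 days and a key of at least
// 2048 bits. This is not toolkit code, just a sketch of the two rules.
class CatalinaCertCheck {
    static final long MAX_VALIDITY_DAYS = 825;
    static final int MIN_KEY_BITS = 2048;

    static boolean isTrustedByCatalina(LocalDate notBefore, LocalDate notAfter, int keyBits) {
        long validityDays = ChronoUnit.DAYS.between(notBefore, notAfter);
        return validityDays <= MAX_VALIDITY_DAYS && keyBits >= MIN_KEY_BITS;
    }
}
```

With the toolkit's 1095-day default the check fails, which matches the behavior described above.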

 





[jira] [Resolved] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-6908.

Resolution: Duplicate

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: PutKudu_Properties.png, PutKudu_Scheduling.png, 
> PutKudu_Settings.png, memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
> longer free up memory after a few hours.
> We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
> streaming source that constantly generates about 2,500 flowfiles/2.5 GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and at the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> do a manual garbage collection with the VisualVM profiler, but it didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor, our cluster crashed within 5-6 
> hours under our current load as the memory was completely full.
>  





[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-6908:
---
Fix Version/s: 1.11.0






[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Grant Henke (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020561#comment-17020561
 ] 

Grant Henke commented on NIFI-6908:
---

Agreed. 






[jira] [Updated] (NIFI-6947) Add PutRecord processor to leverage RecordSinkService implementations

2020-01-21 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6947:
---
Fix Version/s: 1.11.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add PutRecord processor to leverage RecordSinkService implementations
> -
>
> Key: NIFI-6947
> URL: https://issues.apache.org/jira/browse/NIFI-6947
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The RecordSinkService interface was created as part of NIFI-6780 in order to 
> decouple the generation of status records (via QueryNiFiReportingTask for 
> example) from the transport/format of an external system. Implementations 
> such as SiteToSiteReportingRecordSink, DatabaseRecordSink, and 
> KafkaRecordSink are already in NiFi.
> This Jira proposes a generic PutRecord processor that consists of 
> RecordReader and  RecordSinkService controller services, for the purposes of 
> sending the records of an incoming flow file to whatever RecordSinkService 
> implementation is configured by the user. In general this might alleviate the 
> need for "PutExternalSystemXRecord" processors such as PutDatabaseRecord, 
> where a single processor type can be used to send records to any number of 
> external systems for which a RecordSinkService has been implemented. It 
> basically shifts the logic from the processor to the controller service, but 
> if a RecordSinkServiceLookup is implemented, this could offer a more flexible 
> approach to sending records to various systems with fewer components.





[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020543#comment-17020543
 ] 

Joe Witt commented on NIFI-6908:


I am pretty sure this can be closed due to 
https://issues.apache.org/jira/browse/NIFI-6895

[~granthenke] you agree?






[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020538#comment-17020538
 ] 

Gardella Juan Pablo commented on NIFI-6908:
---

[~jzahner] Are you able to attach the memory dump somewhere so we can check for 
memory leaks? Or any simple template to try to reproduce the problem?






[jira] [Updated] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7050:
--
Status: Patch Available  (was: In Progress)

> ConsumeJMS is not yielded in case of exception
> --
>
> Key: NIFI-7050
> URL: https://issues.apache.org/jira/browse/NIFI-7050
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If any exception happens when ConsumeJMS tries to read messages, the processor 
> tries again immediately.
> {code:java}
> try {
>     consumer.consume(destinationName, errorQueueName, durable, shared, subscriptionName, charset, new ConsumerCallback() {
>         @Override
>         public void accept(final JMSResponse response) {
>             if (response == null) {
>                 return;
>             }
>             FlowFile flowFile = processSession.create();
>             flowFile = processSession.write(flowFile, out -> out.write(response.getMessageBody()));
>             final Map<String, String> jmsHeaders = response.getMessageHeaders();
>             final Map<String, String> jmsProperties = response.getMessageProperties();
>             flowFile = ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, flowFile, processSession);
>             flowFile = ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, flowFile, processSession);
>             flowFile = processSession.putAttribute(flowFile, JMS_SOURCE_DESTINATION_NAME, destinationName);
>             processSession.getProvenanceReporter().receive(flowFile, destinationName);
>             processSession.putAttribute(flowFile, JMS_MESSAGETYPE, response.getMessageType());
>             processSession.transfer(flowFile, REL_SUCCESS);
>             processSession.commit();
>         }
>     });
> } catch (Exception e) {
>     consumer.setValid(false);
>     throw e; // for backward compatibility with exception handling in flows
> }
> {code}
> It should call {{context.yield}} in the exception block. Notice that 
> [PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
>  yields in the same scenario. This change is required only in the ConsumeJMS 
> processor.
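The proposed behavior can be sketched as follows. This is a simplified stand-in for the processor and for NiFi's ProcessContext (the names below are hypothetical), not the actual patch:

```java
// Sketch of the proposed change: yield in the catch block so a failing
// ConsumeJMS backs off instead of retrying immediately. yieldContext() here
// just records a flag; NiFi's real ProcessContext.yield() delays the next
// onTrigger invocation by the configured yield duration.
class YieldOnFailureSketch {
    boolean yielded = false;

    void yieldContext() {          // stand-in for context.yield()
        yielded = true;
    }

    void onTrigger(Runnable consume) {
        try {
            consume.run();
        } catch (RuntimeException e) {
            yieldContext();        // back off before the next attempt
            throw e;               // keep existing exception behavior
        }
    }
}
```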





[GitHub] [nifi] gardellajuanpablo opened a new pull request #4004: NIFI-7050 ConsumeJMS is not yielded in case of exception

2020-01-21 Thread GitBox
gardellajuanpablo opened a new pull request #4004:  NIFI-7050 ConsumeJMS is not 
yielded in case of exception
URL: https://github.com/apache/nifi/pull/4004
 
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-7050:
-

Assignee: Gardella Juan Pablo

> ConsumeJMS is not yielded in case of exception
> --
>
> Key: NIFI-7050
> URL: https://issues.apache.org/jira/browse/NIFI-7050
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>
> If any exception happens when ConsumeJMS tries to read messages, the processor 
> tries again immediately. 
> {code:java}
>   try {
> consumer.consume(destinationName, errorQueueName, durable, 
> shared, subscriptionName, charset, new ConsumerCallback() {
> @Override
> public void accept(final JMSResponse response) {
> if (response == null) {
> return;
> }
> FlowFile flowFile = processSession.create();
> flowFile = processSession.write(flowFile, out -> 
> out.write(response.getMessageBody()));
> final Map jmsHeaders = 
> response.getMessageHeaders();
> final Map jmsProperties = 
> response.getMessageProperties();
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, 
> flowFile, processSession);
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, 
> flowFile, processSession);
> flowFile = processSession.putAttribute(flowFile, 
> JMS_SOURCE_DESTINATION_NAME, destinationName);
> processSession.getProvenanceReporter().receive(flowFile, 
> destinationName);
> processSession.putAttribute(flowFile, JMS_MESSAGETYPE, 
> response.getMessageType());
> processSession.transfer(flowFile, REL_SUCCESS);
> processSession.commit();
> }
> });
> } catch(Exception e) {
> consumer.setValid(false);
> throw e; // for backward compatibility with exception handling in 
> flows
> }
> }
> {code}
> It should call {{context.yield}} in the exception block. Notice 
> [PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
>  yields in the same scenario. This only needs to be done in the ConsumeJMS 
> processor.





[jira] [Updated] (NIFI-7052) UI - Processor details dialog with Advanced button

2020-01-21 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-7052:
--
Description: 
The processor configuration and processor details dialog can optionally contain 
a button that launches the processor advanced UI. This Advanced button does not 
work correctly when the Summary page is popped out of the primary UI. I believe 
that in this case, we should disable/hide the Advanced button feature when the 
Summary page is popped out.

{code:java}
nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
'showPage' of undefined
at nf-custom-ui.js?1.11.0-SNAPSHOT:79
at c (jquery.min.js:2)
at Object.add [as done] (jquery.min.js:2)
at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
at Function.Deferred (jquery.min.js:2)
at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
at HTMLDivElement.dispatch (jquery.min.js:2)
at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
{code}


  was:
The processor configuration and processor details dialog can optionally contain 
a button that launches the processor advanced UI. This Advanced button does not 
work correctly when the Summary page is popped out of the primary UI. I believe 
that in this case, we should disable/hidden the Advanced button feature when 
the Summary page is popped out.

{code:java}
nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
'showPage' of undefined
at nf-custom-ui.js?1.11.0-SNAPSHOT:79
at c (jquery.min.js:2)
at Object.add [as done] (jquery.min.js:2)
at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
at Function.Deferred (jquery.min.js:2)
at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
at HTMLDivElement.dispatch (jquery.min.js:2)
at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
{code}



> UI - Processor details dialog with Advanced button
> --
>
> Key: NIFI-7052
> URL: https://issues.apache.org/jira/browse/NIFI-7052
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Major
>
> The processor configuration and processor details dialog can optionally 
> contain a button that launches the processor advanced UI. This Advanced 
> button does not work correctly when the Summary page is popped out of the 
> primary UI. I believe that in this case, we should disable/hide the Advanced 
> button feature when the Summary page is popped out.
> {code:java}
> nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
> 'showPage' of undefined
> at nf-custom-ui.js?1.11.0-SNAPSHOT:79
> at c (jquery.min.js:2)
> at Object.add [as done] (jquery.min.js:2)
> at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
> at Function.Deferred (jquery.min.js:2)
> at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
> at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
> at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
> at HTMLDivElement.dispatch (jquery.min.js:2)
> at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
> {code}





[jira] [Updated] (NIFI-7052) UI - Processor details dialog with Advanced button

2020-01-21 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-7052:
--
Description: 
The processor configuration and processor details dialog can optionally contain 
a button that launches the processor advanced UI. This Advanced button does not 
work correctly when the Summary page is popped out of the primary UI. I believe 
that in this case, we should disable/hidden the Advanced button feature when 
the Summary page is popped out.

{code:java}
nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
'showPage' of undefined
at nf-custom-ui.js?1.11.0-SNAPSHOT:79
at c (jquery.min.js:2)
at Object.add [as done] (jquery.min.js:2)
at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
at Function.Deferred (jquery.min.js:2)
at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
at HTMLDivElement.dispatch (jquery.min.js:2)
at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
{code}


  was:
The processor configuration and processor details dialog can optionally contain 
a button that launches the processor advanced UI. This Advanced button does not 
work correctly when the Summary page is popped out of the primary UI. I believe 
that in this case, we should disable the Advanced button feature when the 
Summary page is popped out.

{code:java}
nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
'showPage' of undefined
at nf-custom-ui.js?1.11.0-SNAPSHOT:79
at c (jquery.min.js:2)
at Object.add [as done] (jquery.min.js:2)
at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
at Function.Deferred (jquery.min.js:2)
at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
at HTMLDivElement.dispatch (jquery.min.js:2)
at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
{code}



> UI - Processor details dialog with Advanced button
> --
>
> Key: NIFI-7052
> URL: https://issues.apache.org/jira/browse/NIFI-7052
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Major
>
> The processor configuration and processor details dialog can optionally 
> contain a button that launches the processor advanced UI. This Advanced 
> button does not work correctly when the Summary page is popped out of the 
> primary UI. I believe that in this case, we should disable/hidden the 
> Advanced button feature when the Summary page is popped out.
> {code:java}
> nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
> 'showPage' of undefined
> at nf-custom-ui.js?1.11.0-SNAPSHOT:79
> at c (jquery.min.js:2)
> at Object.add [as done] (jquery.min.js:2)
> at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
> at Function.Deferred (jquery.min.js:2)
> at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
> at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
> at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
> at HTMLDivElement.dispatch (jquery.min.js:2)
> at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
> {code}





[jira] [Commented] (NIFI-7032) Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread Nissim Shiman (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020526#comment-17020526
 ] 

Nissim Shiman commented on NIFI-7032:
-

Thank you [~nagasivanath], [~mcgilman] and [~Dayakar]!

> Processor Details no longer appears when clicking 'View Processor Details'
> --
>
> Key: NIFI-7032
> URL: https://issues.apache.org/jira/browse/NIFI-7032
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.10.0
>Reporter: Nissim Shiman
>Assignee: Nagasivanath Dasari
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To reproduce:
> From the main GUI page, choose the button with the three lines in the upper 
> right-hand corner
> Summary -> Processors
> Choose one of the 'i' icons to the left of a processor name
> Processor Details should pop up at this point.
> This worked in 1.9.2





[jira] [Assigned] (NIFI-6848) Migrate NiFi Site to ASF git build/deploy

2020-01-21 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim reassigned NIFI-6848:
---

Assignee: Andrew M. Lim

> Migrate NiFi Site to ASF git build/deploy
> -
>
> Key: NIFI-6848
> URL: https://issues.apache.org/jira/browse/NIFI-6848
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Documentation & Website
>Reporter: Aldrin Piri
>Assignee: Andrew M. Lim
>Priority: Major
>
> Currently, NiFi's site is versioned in 
> https://gitbox.apache.org/repos/asf?p=nifi-site.git but a scripted process 
> via grunt, manually executed, is used to publish this site to the legacy CMS 
> (svn) system.  The CMS system is largely deprecated and targeted for EOL in 
> the upcoming months.  We should look to transition our repository/site over 
> to the new approach outlined at https://s.apache.org/asfyaml





[jira] [Resolved] (NIFI-7032) Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-7032.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

> Processor Details no longer appears when clicking 'View Processor Details'
> --
>
> Key: NIFI-7032
> URL: https://issues.apache.org/jira/browse/NIFI-7032
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.10.0
>Reporter: Nissim Shiman
>Assignee: Nagasivanath Dasari
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To reproduce:
> From the main GUI page, choose the button with the three lines in the upper 
> right-hand corner
> Summary -> Processors
> Choose one of the 'i' icons to the left of a processor name
> Processor Details should pop up at this point.
> This worked in 1.9.2





[GitHub] [nifi] mcgilman commented on issue #3990: NIFI-7032 Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread GitBox
mcgilman commented on issue #3990: NIFI-7032 Processor Details no longer 
appears when clicking 'View Processor Details'
URL: https://github.com/apache/nifi/pull/3990#issuecomment-576828844
 
 
   Thanks for the PR @nagasivanath! Thanks for the review @mdayakar! This has 
been merged to master.




[GitHub] [nifi] asfgit closed pull request #3990: NIFI-7032 Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread GitBox
asfgit closed pull request #3990: NIFI-7032 Processor Details no longer appears 
when clicking 'View Processor Details'
URL: https://github.com/apache/nifi/pull/3990
 
 
   




[jira] [Commented] (NIFI-7032) Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17020490#comment-17020490
 ] 

ASF subversion and git services commented on NIFI-7032:
---

Commit 24ef8ba4cbd28a481d475356a17b76b2af924da5 in nifi's branch 
refs/heads/master from nagasivanath
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=24ef8ba ]

Update nf-processor-details.js

NIFI-7032:
- Processor Details no longer appears when clicking 'View Processor Details'
- handling the review comments

This closes #3990


> Processor Details no longer appears when clicking 'View Processor Details'
> --
>
> Key: NIFI-7032
> URL: https://issues.apache.org/jira/browse/NIFI-7032
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.10.0
>Reporter: Nissim Shiman
>Assignee: Nagasivanath Dasari
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To reproduce:
> From the main GUI page, choose the button with the three lines in the upper 
> right-hand corner
> Summary -> Processors
> Choose one of the 'i' icons to the left of a processor name
> Processor Details should pop up at this point.
> This worked in 1.9.2





[jira] [Created] (NIFI-7052) UI - Processor details dialog with Advanced button

2020-01-21 Thread Matt Gilman (Jira)
Matt Gilman created NIFI-7052:
-

 Summary: UI - Processor details dialog with Advanced button
 Key: NIFI-7052
 URL: https://issues.apache.org/jira/browse/NIFI-7052
 Project: Apache NiFi
  Issue Type: Task
  Components: Core UI
Reporter: Matt Gilman


The processor configuration and processor details dialog can optionally contain 
a button that launches the processor advanced UI. This Advanced button does not 
work correctly when the Summary page is popped out of the primary UI. I believe 
that in this case, we should disable the Advanced button feature when the 
Summary page is popped out.

{code:java}
nf-custom-ui.js?1.11.0-SNAPSHOT:79 Uncaught TypeError: Cannot read property 
'showPage' of undefined
at nf-custom-ui.js?1.11.0-SNAPSHOT:79
at c (jquery.min.js:2)
at Object.add [as done] (jquery.min.js:2)
at Object. (nf-custom-ui.js?1.11.0-SNAPSHOT:58)
at Function.Deferred (jquery.min.js:2)
at Object.showCustomUi (nf-custom-ui.js?1.11.0-SNAPSHOT:57)
at k.fn.init.click (nf-processor-details.js?1.11.0-SNAPSHOT:325)
at HTMLDivElement. (jquery.modal.js?1.11.0-SNAPSHOT:143)
at HTMLDivElement.dispatch (jquery.min.js:2)
at HTMLDivElement.$event.dispatch (jquery.event.drag-2.3.0.js:382)
{code}






[GitHub] [nifi] thenatog commented on issue #3916: NIFI-5481 Additional Sensitive Property Providers (#3)

2020-01-21 Thread GitBox
thenatog commented on issue #3916: NIFI-5481 Additional Sensitive Property 
Providers (#3)
URL: https://github.com/apache/nifi/pull/3916#issuecomment-576822964
 
 
   I am still reviewing this PR, but as others said above I also had errors:
   
   * There were compilation errors as a result of building with Java 8. I set it 
to build with Java 11 instead; however, it should be backwards compatible, so 
some changes will need to be made.
   * Small issue with the javadoc for loadKeyStore() in 
HadoopCredentialsSensitivePropertyProvider.java:184.
   * Had issues with the RAT (contrib-check) check on the password files 
password.sidefile and bad-password.sidefile in nifi-properties-loader. I added 
a RAT exclude to nifi-properties-loader/pom.xml.
   * Issue with a grpc dependency conflict: nifi-properties-loader added the 
google-cloud-kms dependency, which included its own grpc. Had to exclude it 
from the google-cloud-kms dependency in the pom.
   
   I can submit these fixes for merging, but the Java 8 compatibility still 
needs to be looked at.




[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #713: WIP: MINIFICPP-1119 unify win/posix sockets + clean up issues (untested on windows)

2020-01-21 Thread GitBox
szaszm opened a new pull request #713: WIP: MINIFICPP-1119 unify win/posix 
sockets + clean up issues (untested on windows)
URL: https://github.com/apache/nifi-minifi-cpp/pull/713
 
 
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[GitHub] [nifi] bbende commented on issue #4003: NIFI-7051 Protect against empty group membership in ShellUserGroupPro…

2020-01-21 Thread GitBox
bbende commented on issue #4003: NIFI-7051 Protect against empty group 
membership in ShellUserGroupPro…
URL: https://github.com/apache/nifi/pull/4003#issuecomment-576784601
 
 
   Realized that changing the identifier generation would impact users that 
already have policies created against this provider. Will work on updating the 
PR to add a new config option to turn on this new behavior.




[jira] [Updated] (NIFI-7051) ShellUserGroupProvider produces null user identifier

2020-01-21 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-7051:
--
Status: Patch Available  (was: Open)

> ShellUserGroupProvider produces null user identifier
> 
>
> Key: NIFI-7051
> URL: https://issues.apache.org/jira/browse/NIFI-7051
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ShellUserGroupProvider can produce a Set of user identifiers with a 
> null entry when there are no members of a group.
> Also, the front end component displays users and groups in the same grid and 
> requires each entity have a unique id, so the user and group UUIDs should be 
> seeded with some kind of differentiator like "-user" or "-group" to handle 
> the case where a user and group with the same name exist.





[GitHub] [nifi] bbende opened a new pull request #4003: NIFI-7051 Protect against empty group membership in ShellUserGroupPro…

2020-01-21 Thread GitBox
bbende opened a new pull request #4003: NIFI-7051 Protect against empty group 
membership in ShellUserGroupPro…
URL: https://github.com/apache/nifi/pull/4003
 
 
   …vider, and add differentiator to id seeding
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (NIFI-7051) ShellUserGroupProvider produces null user identifier

2020-01-21 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-7051:
-

 Summary: ShellUserGroupProvider produces null user identifier
 Key: NIFI-7051
 URL: https://issues.apache.org/jira/browse/NIFI-7051
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.10.0, 1.11.0
Reporter: Bryan Bende
Assignee: Bryan Bende
 Fix For: 1.12.0


The ShellUserGroupProvider can produce a Set of user identifiers with a 
null entry when there are no members of a group.

Also, the front end component displays users and groups in the same grid and 
requires each entity have a unique id, so the user and group UUIDs should be 
seeded with some kind of differentiator like "-user" or "-group" to handle the 
case where a user and group with the same name exist.
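The suggested differentiator could look roughly like this; seededId is a hypothetical helper (not NiFi's actual API) built on java.util.UUID's deterministic name-based UUIDs:

```java
import java.util.UUID;

public class IdSeedDemo {
    // Derive a deterministic id from the principal's name plus a "-user" or
    // "-group" suffix, so a user and a group with the same name no longer
    // collide in the front-end grid. The suffix values mirror the issue text.
    static String seededId(String name, String differentiator) {
        return UUID.nameUUIDFromBytes((name + differentiator).getBytes()).toString();
    }

    public static void main(String[] args) {
        String userId = seededId("admin", "-user");
        String groupId = seededId("admin", "-group");
        // Same principal name, but the differentiator keeps the ids unique.
        System.out.println(userId.equals(groupId)); // prints false
        // Name-based UUIDs are deterministic, so restarts produce stable ids.
        System.out.println(userId.equals(seededId("admin", "-user"))); // prints true
    }
}
```

Determinism matters here: policies reference these ids, so the seed must produce the same UUID across restarts rather than a random one.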





[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #687: MINIFICPP-1092 - Make CoAP compile and work on Windows

2020-01-21 Thread GitBox
bakaid commented on a change in pull request #687: MINIFICPP-1092 - Make CoAP 
compile and work on Windows
URL: https://github.com/apache/nifi-minifi-cpp/pull/687#discussion_r369082823
 
 

 ##
 File path: extensions/coap/tests/CMakeLists.txt
 ##
 @@ -40,7 +40,7 @@ FOREACH(testfile ${CURL_INTEGRATION_TESTS})
target_include_directories(${testfilename} BEFORE PRIVATE 
"../../http-curl/sitetosite/")
target_include_directories(${testfilename} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/extensions/civetweb/")
target_include_directories(${testfilename} BEFORE PRIVATE ./include)
-createTests("${testfilename}")
+createTests("${testfilename}")
 
 Review comment:
   Yep, that's fair, edited outside my normal IDE, fixed.




[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #687: MINIFICPP-1092 - Make CoAP compile and work on Windows

2020-01-21 Thread GitBox
bakaid commented on a change in pull request #687: MINIFICPP-1092 - Make CoAP 
compile and work on Windows
URL: https://github.com/apache/nifi-minifi-cpp/pull/687#discussion_r369082262
 
 

 ##
 File path: extensions/coap/COAPLoader.cpp
 ##
 @@ -18,7 +18,28 @@
 #include "core/FlowConfiguration.h"
 #include "COAPLoader.h"
 
+#ifdef WIN32
+#include 
+#endif
+
 bool COAPObjectFactory::added = 
core::FlowConfiguration::add_static_func("createCOAPFactory");
+
+bool COAPObjectFactoryInitializer::initialize() {
+#ifdef WIN32
+  static WSADATA s_wsaData;
+  int iWinSockInitResult = WSAStartup(MAKEWORD(2, 2), &s_wsaData);
 
 Review comment:
   `WSAStartup` is extremely unlikely to fail, that's why I didn't add one 
originally, but it won't hurt, so I've added it now.




[GitHub] [nifi] tpalfy commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
tpalfy commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r369071421
 
 

 ##
 File path: nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 
     }
 
+    private Statement generateUpdate(String cassandraTable, RecordSchema schema, String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+        Update updateQuery;
+
+        // Split up the update key names separated by a comma, should not be empty
+        final Set<String> updateKeyNames;
+        updateKeyNames = Arrays.stream(updateKeys.split(","))
+                .map(String::trim)
+                .filter(StringUtils::isNotEmpty)
+                .collect(Collectors.toSet());
+        if (updateKeyNames.isEmpty()) {
+            throw new IllegalArgumentException("No Update Keys were specified");
+        }
+
+        // Verify that all update keys are present in the record
+        for (String updateKey : updateKeyNames) {
+            if (!schema.getFieldNames().contains(updateKey)) {
+                throw new IllegalArgumentException("Update key '" + updateKey + "' is not present in the record schema");
+            }
+        }
+
+        // Prepare keyspace/table names
+        if (cassandraTable.contains(".")) {
+            String[] keyspaceAndTable = cassandraTable.split("\\.");
+            updateQuery = QueryBuilder.update(keyspaceAndTable[0], keyspaceAndTable[1]);
+        } else {
+            updateQuery = QueryBuilder.update(cassandraTable);
+        }
+
+        // Loop through the field names, setting those that are not in the update key set, and using those
+        // in the update key set as conditions.
+        for (String fieldName : schema.getFieldNames()) {
+            Object fieldValue = recordContentMap.get(fieldName);
+
+            if (updateKeyNames.contains(fieldName)) {
+                updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+            } else {
+                Assignment assignment;
+                if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                    assignment = QueryBuilder.set(fieldName, fieldValue);
+                } else {
+                    // Check if the fieldValue is of type Long, as this is the only type that can be used
+                    // to increment or decrement.
+                    if (!(fieldValue instanceof Long)) {
+                        throw new IllegalArgumentException("Field '" + fieldName + "' is not of type Long, and cannot be used" +
+                                " to increment or decrement.");
+                    }
+
+                    if (INCR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                        assignment = QueryBuilder.incr(fieldName, (Long) fieldValue);
+                    } else if (DECR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                        assignment = QueryBuilder.decr(fieldName, (Long) fieldValue);
+                    } else {
+                        throw new IllegalArgumentException("Update Method '" + updateMethod + "' is not valid.");
 
 Review comment:
   Is it intentional you mean?
   
   If for example we want to set string fields, but have an error in the update
method (let's say it has a typo in it like 'SED' instead of 'SET'), the
resulting error message "Field is not of type Long..." would be very
misleading.
   
   In general the validity of the update method _itself_ is a higher-level
issue than the validity of the _parameters_ of the update method, and as such
it might be better to check that first.
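   The suggested ordering can be sketched as follows. This is a minimal,
hypothetical helper (the `validate` method, `VALID_METHODS` list, and the
hard-coded method names are illustrative, not the processor's actual API): the
update method itself is checked before any method-specific parameter checks,
so a typo like 'SED' fails with an accurate message.

```java
import java.util.Arrays;
import java.util.List;

public class UpdateMethodCheck {
    // "SET", "INCR", "DECR" mirror the processor's allowable values; names here are illustrative.
    static final List<String> VALID_METHODS = Arrays.asList("SET", "INCR", "DECR");

    static void validate(String updateMethod, Object fieldValue) {
        // 1) Validate the update method itself first...
        if (!VALID_METHODS.contains(updateMethod.toUpperCase())) {
            throw new IllegalArgumentException("Update Method '" + updateMethod + "' is not valid.");
        }
        // 2) ...then validate the parameters for that method.
        if (!"SET".equalsIgnoreCase(updateMethod) && !(fieldValue instanceof Long)) {
            throw new IllegalArgumentException("Field value is not of type Long, and cannot be used to increment or decrement.");
        }
    }

    public static void main(String[] args) {
        try {
            validate("SED", "a string value"); // typo in the method name
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints "Update Method 'SED' is not valid."
        }
    }
}
```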


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mdayakar commented on issue #3990: NIFI-7032 Processor Details no longer appears when clicking 'View Processor Details'

2020-01-21 Thread GitBox
mdayakar commented on issue #3990: NIFI-7032 Processor Details no longer 
appears when clicking 'View Processor Details'
URL: https://github.com/apache/nifi/pull/3990#issuecomment-576722393
 
 
   LGTM +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r369052930
 
 

 ##
 File path: nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -67,6 +113,36 @@
             .required(true)
             .build();
 
+    static final PropertyDescriptor STATEMENT_TYPE = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-statement-type")
+            .displayName("Statement Type")
+            .description("Specifies the type of CQL Statement to generate.")
+            .required(true)
+            .defaultValue(INSERT_TYPE.getValue())
+            .allowableValues(UPDATE_TYPE, INSERT_TYPE, STATEMENT_TYPE_USE_ATTR_TYPE)
+            .build();
+
+    static final PropertyDescriptor UPDATE_METHOD = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-update-method")
+            .displayName("Update Method")
+            .description("Specifies the method to use to SET the values. This property is used if the Statement Type is " +
+                    "UPDATE and ignored otherwise.")
+            .required(false)
+            .defaultValue(SET_TYPE.getValue())
+            .allowableValues(INCR_TYPE, DECR_TYPE, SET_TYPE, UPDATE_METHOD_USE_ATTR_TYPE)
+            .build();
+
+    static final PropertyDescriptor UPDATE_KEYS = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-update-keys")
+            .displayName("Update Keys")
+            .description("A comma-separated list of column names that uniquely identifies a row in the database for UPDATE statements. "
+                    + "If the Statement Type is UPDATE and this property is not set, the conversion to CQL will fail. "
+                    + "This property is ignored if the Statement Type is not UPDATE.")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 
 Review comment:
   Interesting: running some test cases against this validator, it exhibits
some (for me) unexpected behavior.
   
   * Passing an empty string to it fails validation (expected)
   * Passing just a separator (,) passes validation (unexpected), regardless of
whether excludeEmptyEntries is set to true or false
   
   I don't see much in the way of documentation for that validator, but wouldn't
it be logical for "," to fail validation? Debugging shows that Java's split
method only behaves that way if we use the overloaded version `",".split(",", -1)`
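   The split behavior described above can be reproduced in isolation with the
plain JDK (no NiFi code involved):

```java
public class SplitDemo {
    public static void main(String[] args) {
        // With the default limit of 0, String#split drops trailing empty strings,
        // so a lone separator yields an empty array.
        System.out.println(",".split(",").length);      // 0
        System.out.println("a,".split(",").length);     // 1

        // With a negative limit, trailing empty strings are kept.
        System.out.println(",".split(",", -1).length);  // 2
        System.out.println("a,".split(",", -1).length); // 2
    }
}
```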


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] si-sun commented on issue #3917: Adding GetSmbFile and PutSmbFile processors

2020-01-21 Thread GitBox
si-sun commented on issue #3917: Adding GetSmbFile and PutSmbFile processors
URL: https://github.com/apache/nifi/pull/3917#issuecomment-576709462
 
 
   I updated the LICENSE and NOTICE files. Let me know if this is now ok.
   Looks like the build is failing due to a Maven HTTPS issue. Seems like all
the builds are failing right now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r369021524
 
 

 ##
 File path: nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 
     }
 
+    private Statement generateUpdate(String cassandraTable, RecordSchema schema, String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+        Update updateQuery;
+
+        // Split up the update key names separated by a comma, should not be empty
+        final Set<String> updateKeyNames;
+        updateKeyNames = Arrays.stream(updateKeys.split(","))
+                .map(String::trim)
+                .filter(StringUtils::isNotEmpty)
+                .collect(Collectors.toSet());
+        if (updateKeyNames.isEmpty()) {
+            throw new IllegalArgumentException("No Update Keys were specified");
+        }
+
+        // Verify that all update keys are present in the record
+        for (String updateKey : updateKeyNames) {
+            if (!schema.getFieldNames().contains(updateKey)) {
+                throw new IllegalArgumentException("Update key '" + updateKey + "' is not present in the record schema");
+            }
+        }
+
+        // Prepare keyspace/table names
+        if (cassandraTable.contains(".")) {
+            String[] keyspaceAndTable = cassandraTable.split("\\.");
+            updateQuery = QueryBuilder.update(keyspaceAndTable[0], keyspaceAndTable[1]);
+        } else {
+            updateQuery = QueryBuilder.update(cassandraTable);
+        }
+
+        // Loop through the field names, setting those that are not in the update key set, and using those
+        // in the update key set as conditions.
+        for (String fieldName : schema.getFieldNames()) {
+            Object fieldValue = recordContentMap.get(fieldName);
+
+            if (updateKeyNames.contains(fieldName)) {
+                updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+            } else {
+                Assignment assignment;
+                if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                    assignment = QueryBuilder.set(fieldName, fieldValue);
+                } else {
+                    // Check if the fieldValue is of type Long, as this is the only type that can be used
+                    // to increment or decrement.
+                    if (!(fieldValue instanceof Long)) {
 
 Review comment:
   The datastax cassandra driver implementation expects a long as a parameter
to incr(). Looks like instead we can use:
   `long b = ((Number) a).longValue()`
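   The suggested `Number`-based coercion could look like this. The `toLong`
helper is hypothetical, not code from the PR; it just demonstrates widening
via `Number.longValue()`:

```java
public class NumberCoercion {
    // Hypothetical helper, not code from the PR: widen any Number (Integer, Long,
    // Short, ...) to the long that QueryBuilder.incr()/decr() expects.
    static long toLong(Object fieldValue) {
        if (!(fieldValue instanceof Number)) {
            throw new IllegalArgumentException("Value is not numeric and cannot be used to increment or decrement.");
        }
        return ((Number) fieldValue).longValue();
    }

    public static void main(String[] args) {
        System.out.println(toLong(42));        // prints 42 (Integer widened)
        System.out.println(toLong(42L));       // prints 42 (Long passed through)
        System.out.println(toLong((short) 7)); // prints 7 (Short widened)
    }
}
```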


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r369023279
 
 

 ##
 File path: nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 
     }
 
+    private Statement generateUpdate(String cassandraTable, RecordSchema schema, String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+        Update updateQuery;
+
+        // Split up the update key names separated by a comma, should not be empty
+        final Set<String> updateKeyNames;
+        updateKeyNames = Arrays.stream(updateKeys.split(","))
+                .map(String::trim)
+                .filter(StringUtils::isNotEmpty)
+                .collect(Collectors.toSet());
+        if (updateKeyNames.isEmpty()) {
+            throw new IllegalArgumentException("No Update Keys were specified");
+        }
+
+        // Verify that all update keys are present in the record
+        for (String updateKey : updateKeyNames) {
+            if (!schema.getFieldNames().contains(updateKey)) {
+                throw new IllegalArgumentException("Update key '" + updateKey + "' is not present in the record schema");
+            }
+        }
+
+        // Prepare keyspace/table names
+        if (cassandraTable.contains(".")) {
+            String[] keyspaceAndTable = cassandraTable.split("\\.");
+            updateQuery = QueryBuilder.update(keyspaceAndTable[0], keyspaceAndTable[1]);
+        } else {
+            updateQuery = QueryBuilder.update(cassandraTable);
+        }
+
+        // Loop through the field names, setting those that are not in the update key set, and using those
+        // in the update key set as conditions.
+        for (String fieldName : schema.getFieldNames()) {
+            Object fieldValue = recordContentMap.get(fieldName);
+
+            if (updateKeyNames.contains(fieldName)) {
+                updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+            } else {
+                Assignment assignment;
+                if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                    assignment = QueryBuilder.set(fieldName, fieldValue);
+                } else {
+                    // Check if the fieldValue is of type Long, as this is the only type that can be used
+                    // to increment or decrement.
+                    if (!(fieldValue instanceof Long)) {
+                        throw new IllegalArgumentException("Field '" + fieldName + "' is not of type Long, and cannot be used" +
+                                " to increment or decrement.");
+                    }
+
+                    if (INCR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                        assignment = QueryBuilder.incr(fieldName, (Long) fieldValue);
+                    } else if (DECR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                        assignment = QueryBuilder.decr(fieldName, (Long) fieldValue);
+                    } else {
+                        throw new IllegalArgumentException("Update Method '" + updateMethod + "' is not valid.");
 
 Review comment:
   Agreed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
woutifier-t commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r369021524
 
 

 ##
 File path: nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 
     }
 
+    private Statement generateUpdate(String cassandraTable, RecordSchema schema, String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+        Update updateQuery;
+
+        // Split up the update key names separated by a comma, should not be empty
+        final Set<String> updateKeyNames;
+        updateKeyNames = Arrays.stream(updateKeys.split(","))
+                .map(String::trim)
+                .filter(StringUtils::isNotEmpty)
+                .collect(Collectors.toSet());
+        if (updateKeyNames.isEmpty()) {
+            throw new IllegalArgumentException("No Update Keys were specified");
+        }
+
+        // Verify that all update keys are present in the record
+        for (String updateKey : updateKeyNames) {
+            if (!schema.getFieldNames().contains(updateKey)) {
+                throw new IllegalArgumentException("Update key '" + updateKey + "' is not present in the record schema");
+            }
+        }
+
+        // Prepare keyspace/table names
+        if (cassandraTable.contains(".")) {
+            String[] keyspaceAndTable = cassandraTable.split("\\.");
+            updateQuery = QueryBuilder.update(keyspaceAndTable[0], keyspaceAndTable[1]);
+        } else {
+            updateQuery = QueryBuilder.update(cassandraTable);
+        }
+
+        // Loop through the field names, setting those that are not in the update key set, and using those
+        // in the update key set as conditions.
+        for (String fieldName : schema.getFieldNames()) {
+            Object fieldValue = recordContentMap.get(fieldName);
+
+            if (updateKeyNames.contains(fieldName)) {
+                updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+            } else {
+                Assignment assignment;
+                if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+                    assignment = QueryBuilder.set(fieldName, fieldValue);
+                } else {
+                    // Check if the fieldValue is of type Long, as this is the only type that can be used
+                    // to increment or decrement.
+                    if (!(fieldValue instanceof Long)) {
 
 Review comment:
   The datastax cassandra driver implementation expects a long as a parameter 
to incr(). Do you have a recommendation on how to reliably cast an Object to a 
long/Long? What types besides long and integer do we expect?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7050:
-

 Summary: ConsumeJMS is not yielded in case of exception
 Key: NIFI-7050
 URL: https://issues.apache.org/jira/browse/NIFI-7050
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.10.0
Reporter: Gardella Juan Pablo


If any exception happens when ConsumeJMS tries to read messages, the processor
tries again immediately.

{code:java}
try {
    consumer.consume(destinationName, errorQueueName, durable, shared, subscriptionName, charset, new ConsumerCallback() {
        @Override
        public void accept(final JMSResponse response) {
            if (response == null) {
                return;
            }

            FlowFile flowFile = processSession.create();
            flowFile = processSession.write(flowFile, out -> out.write(response.getMessageBody()));

            final Map<String, String> jmsHeaders = response.getMessageHeaders();
            final Map<String, String> jmsProperties = response.getMessageProperties();

            flowFile = ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, flowFile, processSession);
            flowFile = ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, flowFile, processSession);
            flowFile = processSession.putAttribute(flowFile, JMS_SOURCE_DESTINATION_NAME, destinationName);

            processSession.getProvenanceReporter().receive(flowFile, destinationName);
            processSession.putAttribute(flowFile, JMS_MESSAGETYPE, response.getMessageType());
            processSession.transfer(flowFile, REL_SUCCESS);
            processSession.commit();
        }
    });
} catch(Exception e) {
    consumer.setValid(false);
    throw e; // for backward compatibility with exception handling in flows
}
}
{code}

It should call {{context.yield()}} in the exception block. Notice that
[PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
is yielded in the same scenario. This is only required in the ConsumeJMS
processor.
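The proposed fix can be sketched with stub types. Here {{ProcessContext}} is a
one-method stand-in for NiFi's interface and the consume call is stubbed as a
{{Runnable}}; this illustrates the pattern, it is not the actual patch:

{code:java}
public class YieldOnError {
    // One-method stand-in for NiFi's ProcessContext; only yield() matters here.
    interface ProcessContext {
        void yield();
    }

    // Sketch of the proposed fix: mirror PublishJMS by yielding the context when
    // consumption fails, so the processor backs off instead of retrying in a
    // tight loop.
    static void onTrigger(ProcessContext context, Runnable consume) {
        try {
            consume.run();
        } catch (Exception e) {
            context.yield(); // back off before the next trigger
            throw e;         // for backward compatibility with exception handling in flows
        }
    }

    public static void main(String[] args) {
        ProcessContext context = () -> System.out.println("context yielded");
        try {
            onTrigger(context, () -> { throw new RuntimeException("JMS consume failed"); });
        } catch (RuntimeException e) {
            System.out.println("rethrown: " + e.getMessage());
        }
    }
}
{code}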



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file of the user

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Description: 
In case NiFi tests are executed on a machine without a known_hosts file, they
are going to fail:
{code}
[INFO] Running org.apache.nifi.processors.standard.TestGetSFTP
[ERROR] Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.374 s <<< FAILURE! - in org.apache.nifi.processors.standard.TestGetSFTP
[ERROR] testGetSFTPFileBasicRead(org.apache.nifi.processors.standard.TestGetSFTP)  Time elapsed: 0.132 s  <<< FAILURE!
java.lang.AssertionError: expected:<4> but was:<0>
    at org.apache.nifi.processors.standard.TestGetSFTP.testGetSFTPFileBasicRead(TestGetSFTP.java:88)

[ERROR] testGetSFTPIgnoreDottedFiles(org.apache.nifi.processors.standard.TestGetSFTP)  Time elapsed: 0.013 s  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
    at org.apache.nifi.processors.standard.TestGetSFTP.testGetSFTPIgnoreDottedFiles(TestGetSFTP.java:110)
{code}

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but in case it’s not
provided, we call load on the 3rd party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location, but this is far from what
we state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the
Host Key; otherwise, no use host key file will be used {code}
So there are multiple issues here:
- Even though the ssh connection fails, somewhere the IO exception is
swallowed. I didn’t reproduce it to check the logs, but I would expect
exceptions to be thrown in the testcase and to be talkative about the error.
My gut feeling says that we do the same in case the user specifies a host key
file but it’s somehow not accessible.
- Strict host check on/off might not be enough to cover all the scenarios, as
there are three: 1# host known and key matches, 2# host not known and we
either trust it or not, 3# host known, but there is a mismatch (probably man
in the middle). I think this property should be improved, at least from a
documentation point of view, as currently only the code tells what we do in
2#, which depends on whether the file exists or not, so most probably
something unintended.
- Either the documentation or the behaviour should be fixed to make them
aligned.
- The testcase should either use a predefined key or have host key checking
completely off. According to what we see above, I’m not sure about the latter
being nicely supported.

  was:
In case NiFi test are executed on a machine without knows_hosts file, it's 
going to fail. 

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that host key file is not a mandatory, but  in case it’s not 
provided, we call load on the 3rd party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
Which tries to load keys from the default location, but this is far from what 
we state in our documentation:
{code}Host Key FileIf supplied, the given file will be used as the 
Host Key; otherwise, no use host key file will be used {code}
So there are multiple issues here:
-Even though the ssh connection fails, somewhere the IO exception is swallowed. 
Didn’t reproduce to check the logs, but I would expect exceptions to be thrown 
in the testcase and these being talkative about the error. My gut feeling says 
that we do the same in case the user specifies a host key file, but it’s 
somehow not accessible.
-Strict host check on/off might not be enough to cover all the scenarios as 
there are three: host 1# known and key matches, 2# host not known and we either 
trust or not, 3# host known, but there is a mismatch (probably man in the 
middle). I think this property should be improved at least in documentation 
point of view as currently only the code tells what do we do in 2#. Which 
depends on whether the file exists or not, so most probably something 
unintended.
-Either the documentation or the behaviour should be fixed to make them aligned 
-The testcase should either use a predefined key or have host key checking 
completely off. According to what we see above, not sure about the latter being 
nicely supported.


> SFTP processors shouldn't silently try to access known hosts file of the user
> 

[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file of the user

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Description: 
In case NiFi tests are executed on a machine without a known_hosts file, they
are going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that host key file is not a mandatory, but  in case it’s not 
provided, we call load on the 3rd party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
Which tries to load keys from the default location, but this is far from what 
we state in our documentation:
{code}Host Key FileIf supplied, the given file will be used as the 
Host Key; otherwise, no use host key file will be used {code}
So there are multiple issues here:
-Even though the ssh connection fails, somewhere the IO exception is swallowed. 
Didn’t reproduce to check the logs, but I would expect exceptions to be thrown 
in the testcase and these being talkative about the error. My gut feeling says 
that we do the same in case the user specifies a host key file, but it’s 
somehow not accessible.
-Strict host check on/off might not be enough to cover all the scenarios as 
there are three: host 1# known and key matches, 2# host not known and we either 
trust or not, 3# host known, but there is a mismatch (probably man in the 
middle). I think this property should be improved at least in documentation 
point of view as currently only the code tells what do we do in 2#. Which 
depends on whether the file exists or not, so most probably something 
unintended.
-Either the documentation or the behaviour should be fixed to make them aligned 
-The testcase should either use a predefined key or have host key checking 
completely off. According to what we see above, not sure about the latter being 
nicely supported.

  was:
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so most probably something unintended.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.


> SFTP processors shouldn't silently try to access known hosts file of the user
> -
>
> Key: NIFI-7049
> URL: https://issues.apache.org/jira/browse/NIFI-7049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Arpad Boda
>Priority: Major
>
> If NiFi tests are executed on a machine without a known_hosts file, they are 
> going to fail.
> Just pasting my private message that summarised this error previously:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
> So the problem is that host

[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file of the user

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Description: 
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so most probably something unintended.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.

  was:
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so most probably something unwanted.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.


> SFTP processors shouldn't silently try to access known hosts file of the user
> -
>
> Key: NIFI-7049
> URL: https://issues.apache.org/jira/browse/NIFI-7049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Arpad Boda
>Priority: Major
>
> If NiFi tests are executed on a machine without a known_hosts file, they are 
> going to fail.
> Just pasting my private message that summarised this error previously:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standar

[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file of the user

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Description: 
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so most probably something unwanted.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.

  was:
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so something stupid.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.


> SFTP processors shouldn't silently try to access known hosts file of the user
> -
>
> Key: NIFI-7049
> URL: https://issues.apache.org/jira/browse/NIFI-7049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Arpad Boda
>Priority: Major
>
> If NiFi tests are executed on a machine without a known_hosts file, they are 
> going to fail.
> Just pasting my private message that summarised this error previously:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer

[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file of the user

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Summary: SFTP processors shouldn't silently try to access known hosts file 
of the user  (was: SFTP processors shouldn't silently try to access known hosts 
file on the system)

> SFTP processors shouldn't silently try to access known hosts file of the user
> -
>
> Key: NIFI-7049
> URL: https://issues.apache.org/jira/browse/NIFI-7049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Arpad Boda
>Priority: Major
>
> If NiFi tests are executed on a machine without a known_hosts file, they are 
> going to fail.
> Just pasting my private message that summarised this error previously:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
> So the problem is that the host key file is not mandatory, but if it's not 
> provided, we call load on the third-party lib without arguments:
> https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
> which tries to load keys from the default location. This is far from what we 
> state in our documentation:
> {code}Host Key File: If supplied, the given file will be used as the Host 
> Key; otherwise, no host key file will be used{code}
> So there are multiple issues here:
> -Even though the SSH connection fails, the IO exception is swallowed 
> somewhere. I didn't reproduce it to check the logs, but I would expect 
> exceptions to be thrown in the testcase and to be informative about the 
> error. My gut feeling says we do the same when the user specifies a host key 
> file but it's somehow not accessible.
> -Strict host checking on/off might not be enough to cover all the scenarios, 
> as there are three: 1# host known and key matches, 2# host not known and we 
> either trust it or not, 3# host known but the key mismatches (probably a man 
> in the middle). This property should be improved at least from a 
> documentation point of view, as currently only the code tells what we do in 
> 2#, which depends on whether the file exists or not, so something stupid.
> -Either the documentation or the behaviour should be fixed to make them 
> aligned (you are the security guy to tell which one is right :wink: )
> -The testcase should either use a predefined key or have host key checking 
> completely off. According to the above, I'm not sure the latter is nicely 
> supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file on the system

2020-01-21 Thread Arpad Boda (Jira)
Arpad Boda created NIFI-7049:


 Summary: SFTP processors shouldn't silently try to access known 
hosts file on the system
 Key: NIFI-7049
 URL: https://issues.apache.org/jira/browse/NIFI-7049
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.10.0
Reporter: Arpad Boda


Just pasting my private message that summarised this error:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so something stupid.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.





[jira] [Updated] (NIFI-7049) SFTP processors shouldn't silently try to access known hosts file on the system

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated NIFI-7049:
-
Description: 
If NiFi tests are executed on a machine without a known_hosts file, they are 
going to fail.

Just pasting my private message that summarised this error previously:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so something stupid.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.

  was:
Just pasting my private message that summarised this error:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
So the problem is that the host key file is not mandatory, but if it's not 
provided, we call load on the third-party lib without arguments:
https://github.com/hierynomus/sshj/blob/master/src/main/java/net/schmizz/sshj/SSHClient.java#L621
which tries to load keys from the default location. This is far from what we 
state in our documentation:
{code}Host Key File: If supplied, the given file will be used as the Host Key; 
otherwise, no host key file will be used{code}
So there are multiple issues here:
-Even though the SSH connection fails, the IO exception is swallowed somewhere. 
I didn't reproduce it to check the logs, but I would expect exceptions to be 
thrown in the testcase and to be informative about the error. My gut feeling 
says we do the same when the user specifies a host key file but it's somehow 
not accessible.
-Strict host checking on/off might not be enough to cover all the scenarios, as 
there are three: 1# host known and key matches, 2# host not known and we either 
trust it or not, 3# host known but the key mismatches (probably a man in the 
middle). This property should be improved at least from a documentation point 
of view, as currently only the code tells what we do in 2#, which depends on 
whether the file exists or not, so something stupid.
-Either the documentation or the behaviour should be fixed to make them aligned 
(you are the security guy to tell which one is right :wink: )
-The testcase should either use a predefined key or have host key checking 
completely off. According to the above, I'm not sure the latter is nicely 
supported.


> SFTP processors shouldn't silently try to access known hosts file on the 
> system
> ---
>
> Key: NIFI-7049
> URL: https://issues.apache.org/jira/browse/NIFI-7049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Arpad Boda
>Priority: Major
>
> If NiFi tests are executed on a machine without a known_hosts file, they are 
> going to fail.
> Just pasting my private message that summarised this error previously:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java#L556
> So the problem is that host key file is not a mandatory, but  in case it’s 
> not provided, we call 

[GitHub] [nifi] tpalfy commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
tpalfy commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r368970533
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, 
ProcessSession session) throws Pro
 
 }
 
+private Statement generateUpdate(String cassandraTable, RecordSchema schema, 
String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+Update updateQuery;
+
+// Split up the update key names separated by a comma; should not be empty
+final Set<String> updateKeyNames;
+updateKeyNames = Arrays.stream(updateKeys.split(","))
+.map(String::trim)
+.filter(StringUtils::isNotEmpty)
+.collect(Collectors.toSet());
+if (updateKeyNames.isEmpty()) {
+throw new IllegalArgumentException("No Update Keys were 
specified");
+}
+
+// Verify if all update keys are present in the record
+for (String updateKey : updateKeyNames) {
+if (!schema.getFieldNames().contains(updateKey)) {
+throw new IllegalArgumentException("Update key '" + updateKey 
+ "' is not present in the record schema");
+}
+}
+
+// Prepare keyspace/table names
+if (cassandraTable.contains(".")) {
+String[] keyspaceAndTable = cassandraTable.split("\\.");
+updateQuery = QueryBuilder.update(keyspaceAndTable[0], 
keyspaceAndTable[1]);
+} else {
+updateQuery = QueryBuilder.update(cassandraTable);
+}
+
+// Loop through the field names, setting those that are not in the 
update key set, and using those
+// in the update key set as conditions.
+for (String fieldName : schema.getFieldNames()) {
+Object fieldValue = recordContentMap.get(fieldName);
+
+if (updateKeyNames.contains(fieldName)) {
+updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+} else {
+Assignment assignment;
+if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+assignment = QueryBuilder.set(fieldName, fieldValue);
+} else {
+// Check if the fieldValue is of type Long, as this is the only type that can 
+// be used to increment or decrement.
+if (!(fieldValue instanceof Long)) {
 
 Review comment:
   Why can only Long types be used to increment or decrement?
   This schema could be inferred from a JSON or a CSV and some fields could be 
integers.
   The Cassandra update statement could still be valid, couldn't it?
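One way to address the review question above: any integral `Number` widens safely to a `long` before building the counter assignment, so fields inferred as `Integer` (e.g. from JSON or CSV) could increment or decrement as well. This is a hypothetical sketch; `CounterValues` and `coerceToLong` are illustrative names, not part of PutCassandraRecord or the driver API:

```java
// Illustrative helper: accept any integral Number for counter updates,
// rejecting only genuinely non-numeric values.
final class CounterValues {
    static long coerceToLong(Object fieldValue) {
        if (fieldValue instanceof Number) {
            // Integer, Long, Short, Byte all widen to long without loss
            return ((Number) fieldValue).longValue();
        }
        throw new IllegalArgumentException(
                "Value '" + fieldValue + "' is not numeric and cannot be used to increment or decrement");
    }
}
```

The result could then be passed to the increment/decrement assignment builder instead of casting the raw field value to `Long`.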


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] tpalfy commented on a change in pull request #3977: NIFI-7007 Add update functionality to the PutCassandraRecord processor.

2020-01-21 Thread GitBox
tpalfy commented on a change in pull request #3977: NIFI-7007 Add update 
functionality to the PutCassandraRecord processor.
URL: https://github.com/apache/nifi/pull/3977#discussion_r368969710
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ##
 @@ -193,6 +303,81 @@ public void onTrigger(ProcessContext context, 
ProcessSession session) throws Pro
 
 }
 
+private Statement generateUpdate(String cassandraTable, RecordSchema schema, 
String updateKeys, String updateMethod, Map<String, Object> recordContentMap) {
+Update updateQuery;
+
+// Split up the update key names separated by a comma; should not be empty
+final Set<String> updateKeyNames;
+updateKeyNames = Arrays.stream(updateKeys.split(","))
+.map(String::trim)
+.filter(StringUtils::isNotEmpty)
+.collect(Collectors.toSet());
+if (updateKeyNames.isEmpty()) {
+throw new IllegalArgumentException("No Update Keys were 
specified");
+}
+
+// Verify if all update keys are present in the record
+for (String updateKey : updateKeyNames) {
+if (!schema.getFieldNames().contains(updateKey)) {
+throw new IllegalArgumentException("Update key '" + updateKey 
+ "' is not present in the record schema");
+}
+}
+
+// Prepare keyspace/table names
+if (cassandraTable.contains(".")) {
+String[] keyspaceAndTable = cassandraTable.split("\\.");
+updateQuery = QueryBuilder.update(keyspaceAndTable[0], 
keyspaceAndTable[1]);
+} else {
+updateQuery = QueryBuilder.update(cassandraTable);
+}
+
+// Loop through the field names, setting those that are not in the 
update key set, and using those
+// in the update key set as conditions.
+for (String fieldName : schema.getFieldNames()) {
+Object fieldValue = recordContentMap.get(fieldName);
+
+if (updateKeyNames.contains(fieldName)) {
+updateQuery.where(QueryBuilder.eq(fieldName, fieldValue));
+} else {
+Assignment assignment;
+if (SET_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+assignment = QueryBuilder.set(fieldName, fieldValue);
+} else {
+// Check if the fieldValue is of type Long, as this is the only type that can 
+// be used to increment or decrement.
+if (!(fieldValue instanceof Long)) {
+throw new IllegalArgumentException("Field '" + 
fieldName + "' is not of type Long, and cannot be used" +
+" to increment or decrement.");
+}
+
+if (INCR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+assignment = QueryBuilder.incr(fieldName, 
(Long)fieldValue);
+} else if 
(DECR_TYPE.getValue().equalsIgnoreCase(updateMethod)) {
+assignment = QueryBuilder.decr(fieldName, 
(Long)fieldValue);
+} else {
+throw new IllegalArgumentException("Update Method '" + 
updateMethod + "' is not valid.");
 
 Review comment:
   If the Update Method is invalid and the field is also not of type Long, we 
get the "Field is not of type Long, and cannot be used to increment or 
decrement." error message instead of the "Update Method is not valid." one.
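The ordering issue above can be avoided by validating the update method before the value type, so the reported error matches the actual problem. A hypothetical sketch; `UpdateMethodCheck`, `validate`, and the literal method names are illustrative stand-ins for the processor's allowable values:

```java
// Illustrative only: check the update method first, then the value type,
// so an invalid method is reported even when the value is also wrong.
final class UpdateMethodCheck {
    static String validate(String updateMethod, Object fieldValue) {
        boolean incr = "Increment".equalsIgnoreCase(updateMethod);
        boolean decr = "Decrement".equalsIgnoreCase(updateMethod);
        if (!incr && !decr) {
            // Method validity is independent of the value, so it is checked first
            return "Update Method '" + updateMethod + "' is not valid.";
        }
        if (!(fieldValue instanceof Long)) {
            return "Field is not of type Long, and cannot be used to increment or decrement.";
        }
        return "ok";
    }
}
```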




[GitHub] [nifi-minifi-cpp] bakaid commented on issue #656: MINIFI-1013 Used soci library.

2020-01-21 Thread GitBox
bakaid commented on issue #656: MINIFI-1013 Used soci library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#issuecomment-576677328
 
 
   @am-c-p-p Please rebase this branch onto the latest master.




[jira] [Updated] (MINIFICPP-1123) Processors should handle errors in onSchedule phase with exceptions, cleanup config in onUnschedule

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1123:
--
Summary: Processors should handle errors in onSchedule phase with 
exceptions, cleanup config in onUnschedule  (was: Processors should handle 
errors in onSchedule fails with exceptions, cleanup config in onUnschedule)

> Processors should handle errors in onSchedule phase with exceptions, cleanup 
> config in onUnschedule
> ---
>
> Key: MINIFICPP-1123
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1123
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
>  Labels: core
> Fix For: 0.8.0
>
>
> All processors (at least the ones in standard processors extensions and the 
> ones in the most commonly used extensions) should handle configuration errors 
> properly. 





[jira] [Updated] (MINIFICPP-1123) Processors should handle errors in onSchedule fails with exceptions, cleanup config in onUnschedule

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1123:
--
Description: All processors (at least the ones in standard processors 
extensions and the ones in the most commonly used extensions) should handle 
configuration errors properly. 

> Processors should handle errors in onSchedule fails with exceptions, cleanup 
> config in onUnschedule
> ---
>
> Key: MINIFICPP-1123
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1123
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
>  Labels: core
> Fix For: 0.8.0
>
>
> All processors (at least the ones in standard processors extensions and the 
> ones in the most commonly used extensions) should handle configuration errors 
> properly. 





[jira] [Updated] (MINIFICPP-1123) Processors should handle errors in onSchedule fails with exceptions, cleanup config in onUnschedule

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1123:
--
Labels: core  (was: )

> Processors should handle errors in onSchedule fails with exceptions, cleanup 
> config in onUnschedule
> ---
>
> Key: MINIFICPP-1123
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1123
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
>  Labels: core
> Fix For: 0.8.0
>
>






[jira] [Updated] (MINIFICPP-1123) Processors should handle errors in onSchedule fails with exceptions, cleanup config in onUnschedule

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1123:
--
Issue Type: Epic  (was: Improvement)

> Processors should handle errors when onSchedule fails with exceptions, and 
> clean up config in onUnschedule
> ---
>
> Key: MINIFICPP-1123
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1123
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
> Fix For: 0.8.0
>
>






[jira] [Created] (MINIFICPP-1123) Processors should handle errors when onSchedule fails with exceptions, and clean up config in onUnschedule

2020-01-21 Thread Arpad Boda (Jira)
Arpad Boda created MINIFICPP-1123:
-

 Summary: Processors should handle errors when onSchedule fails with 
exceptions, and clean up config in onUnschedule
 Key: MINIFICPP-1123
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1123
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.7.0
Reporter: Arpad Boda
Assignee: Arpad Boda
 Fix For: 0.8.0








[jira] [Resolved] (MINIFICPP-1122) Ensure that all flow files have a resource claim

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda resolved MINIFICPP-1122.
---
Resolution: Duplicate

> Ensure that all flow files have a resource claim
> 
>
> Key: MINIFICPP-1122
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1122
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Currently, minifi does not create a {{ResourceClaim}} for all empty-content 
> flow files. Changing this would alter the behavior of some processors, since 
> {{ProcessSession::read}} only calls the passed callback when there is a 
> {{ResourceClaim}} and the flow file is not empty. The behavior will be made 
> configurable in MINIFICPP-1047, with the current behavior as the default; in 
> 1.0 the default should change to always creating a {{ResourceClaim}}.
>  
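The read-callback interaction described above can be sketched with hypothetical simplified types (these are not the actual minifi classes; `FlowFile` and `ResourceClaim` here model only the claim-presence check):

```cpp
#include <cstddef>
#include <functional>
#include <memory>

// Hypothetical simplified model of a resource claim and a flow file.
struct ResourceClaim {
  std::size_t size = 0;
};

struct FlowFile {
  std::shared_ptr<ResourceClaim> claim;  // may be null for empty content
};

// Mirrors the behavior described in the ticket: the callback only runs
// when a claim exists and the content is non-empty; otherwise the read
// is silently skipped.
bool readFlowFile(const FlowFile& ff,
                  const std::function<void(std::size_t)>& callback) {
  if (!ff.claim || ff.claim->size == 0) {
    return false;  // no claim or empty content: callback never invoked
  }
  callback(ff.claim->size);
  return true;
}
```

Under the proposed 1.0 default, every flow file would carry a (possibly zero-length) claim, so a read could invoke the callback with zero bytes instead of skipping it.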





[jira] [Reopened] (MINIFICPP-1122) Ensure that all flow files have a resource claim

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda reopened MINIFICPP-1122:
---

> Ensure that all flow files have a resource claim
> 
>
> Key: MINIFICPP-1122
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1122
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Currently, minifi does not create a {{ResourceClaim}} for all empty-content 
> flow files. Changing this would alter the behavior of some processors, since 
> {{ProcessSession::read}} only calls the passed callback when there is a 
> {{ResourceClaim}} and the flow file is not empty. The behavior will be made 
> configurable in MINIFICPP-1047, with the current behavior as the default; in 
> 1.0 the default should change to always creating a {{ResourceClaim}}.
>  





[jira] [Updated] (MINIFICPP-1122) Ensure that all flow files have a resource claim

2020-01-21 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-1122:

Description: 
The current minifi behavior is not creating {{ResourceClaim}} for all empty 
(content) flow files. The change would change behavior of some processors, as 
{{ProcessSession::read}} only calls the passed callback when there is a 
{{ResourceClaim}} and the flow file is not empty. The behavior will be made 
configurable in MINIFICPP-1047 with the current behavior being the default and 
the default should change in 1.0 to always creating {{ResourceClaim}}.

 

  was:
The current minifi behavior is not creating {{ResourceClaim}} for empty 
(content) flow files. The change would change behavior of some processors, as 
{{ProcessSession::read}} only calls the passed callback when there is a 
{{ResourceClaim}} and the flow file is not empty. The behavior will be made 
configurable in MINIFICPP-1047 with the current behavior being the default and 
the default should change in 1.0 to always creating {{ResourceClaim}}.

 


> Ensure that all flow files have a resource claim
> 
>
> Key: MINIFICPP-1122
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1122
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Currently, minifi does not create a {{ResourceClaim}} for all empty-content 
> flow files. Changing this would alter the behavior of some processors, since 
> {{ProcessSession::read}} only calls the passed callback when there is a 
> {{ResourceClaim}} and the flow file is not empty. The behavior will be made 
> configurable in MINIFICPP-1047, with the current behavior as the default; in 
> 1.0 the default should change to always creating a {{ResourceClaim}}.
>  





[jira] [Assigned] (MINIFICPP-1078) Flowfiles shouldn't exist without claim

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda reassigned MINIFICPP-1078:
-

Assignee: Marton Szasz  (was: Arpad Boda)

> Flowfiles shouldn't exist without claim
> --
>
> Key: MINIFICPP-1078
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1078
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.6.0
>Reporter: Arpad Boda
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Even if a given flowfile is empty, a content claim should be associated with 
> it, and reading the content should succeed (naturally reading 0 bytes) 
> without the need to add error handling to a lot of different code paths 
> (processors).
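The target behavior described above might look like the following sketch (hypothetical types, not the actual minifi implementation): every flow file owns a possibly empty content buffer, so reads always succeed and empty files simply yield zero bytes.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical model: the content buffer stands in for an always-present
// content claim, even when it holds zero bytes.
struct FlowFile {
  std::vector<unsigned char> content;
};

// Reading never fails; an empty flow file yields 0 bytes, so processors
// need no special-case error handling for empty content.
std::size_t readContent(const FlowFile& ff, std::vector<unsigned char>& out) {
  out = ff.content;
  return out.size();
}
```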





[jira] [Resolved] (MINIFICPP-1122) Ensure that all flow files have a resource claim

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda resolved MINIFICPP-1122.
---
Resolution: Fixed

> Ensure that all flow files have a resource claim
> 
>
> Key: MINIFICPP-1122
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1122
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Currently, minifi does not create a {{ResourceClaim}} for empty-content 
> flow files. Changing this would alter the behavior of some processors, since 
> {{ProcessSession::read}} only calls the passed callback when there is a 
> {{ResourceClaim}} and the flow file is not empty. The behavior will be made 
> configurable in MINIFICPP-1047, with the current behavior as the default; in 
> 1.0 the default should change to always creating a {{ResourceClaim}}.
>  





[jira] [Assigned] (MINIFICPP-1078) Flowfiles shouldn't exist without claim

2020-01-21 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda reassigned MINIFICPP-1078:
-

Assignee: Arpad Boda

> Flowfiles shouldn't exist without claim
> --
>
> Key: MINIFICPP-1078
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1078
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.6.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
> Fix For: 1.0.0
>
>
> Even if a given flowfile is empty, a content claim should be associated with 
> it, and reading the content should succeed (naturally reading 0 bytes) 
> without the need to add error handling to a lot of different code paths 
> (processors).





[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-21 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r368881657
 
 

 ##
 File path: extensions/sql/data/SQLRowsetProcessor.cpp
 ##
 @@ -0,0 +1,120 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "SQLRowsetProcessor.h"
+
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+SQLRowsetProcessor::SQLRowsetProcessor(const soci::rowset<soci::row>& rowset,
+    const std::vector<SQLRowSubscriber*>& rowSubscribers)
+  : rowset_(rowset), rowSubscribers_(rowSubscribers) {
+  iter_ = rowset_.begin();
+}
+
+size_t SQLRowsetProcessor::process(size_t max) {
+  size_t count = 0;
+
+  for (; iter_ != rowset_.end(); ) {
+    addRow(*iter_, count);
+    iter_++;
+    count++;
+    totalCount_++;
+    if (max > 0 && count >= max) {
+      break;
+    }
+  }
+
+  return count;
+}
+
+void SQLRowsetProcessor::addRow(const soci::row& row, size_t rowCount) {
+  for (const auto& pRowSubscriber : rowSubscribers_) {
+    pRowSubscriber->beginProcessRow();
+  }
+
+  if (rowCount == 0) {
+    for (std::size_t i = 0; i != row.size(); ++i) {
+      for (const auto& pRowSubscriber : rowSubscribers_) {
+        pRowSubscriber->processColumnName(utils::toLower(row.get_properties(i).get_name()));
+      }
+    }
+  }
+
+  for (std::size_t i = 0; i != row.size(); ++i) {
+    const soci::column_properties& props = row.get_properties(i);
+
+    const auto& name = utils::toLower(props.get_name());
+
+    if (row.get_indicator(i) == soci::i_null) {
+      processColumn(name, "NULL");
+    } else {
+      switch (const auto dataType = props.get_data_type()) {
+        case soci::data_type::dt_string: {
+          processColumn(name, row.get<std::string>(i));
+        }
+        break;
+        case soci::data_type::dt_double: {
+          processColumn(name, row.get<double>(i));
+        }
+        break;
+        case soci::data_type::dt_integer: {
+          processColumn(name, row.get<int>(i));
+        }
+        break;
+        case soci::data_type::dt_long_long: {
+          processColumn(name, row.get<long long>(i));
+        }
+        break;
+        case soci::data_type::dt_unsigned_long_long: {
+          processColumn(name, row.get<unsigned long long>(i));
+        }
+        break;
+        case soci::data_type::dt_date: {
 
 Review comment:
   Added the dt_date limitation to the processor documentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (MINIFICPP-1022) Review passing shipped versions of libraries to other third parties (ExternalProjects)

2020-01-21 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dániel Bakai resolved MINIFICPP-1022.
-
Resolution: Fixed

> Review passing shipped versions of libraries to other third parties 
> (ExternalProjects)
> --
>
> Key: MINIFICPP-1022
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1022
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Dániel Bakai
>Assignee: Dániel Bakai
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Consider having a wrapper function to make this piece of CMake code reusable.





[jira] [Resolved] (MINIFICPP-1118) MiNiFi C++ on Windows stops running in a secure env when NiFi becomes unreachable

2020-01-21 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dániel Bakai resolved MINIFICPP-1118.
-
Resolution: Fixed

> MiNiFi C++ on Windows stops running in a secure env when NiFi becomes 
> unreachable
> -
>
> Key: MINIFICPP-1118
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1118
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Error handling of the TLS socket is imperfect in some cases: when the socket 
> cannot be created (e.g. because the hostname cannot be resolved), it ignores 
> the errors of the underlying socket and tries to use it in blocking mode. 
> This should be improved to make sure that errors of the basic socket layer 
> are not swallowed.
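A sketch of the fix described above, with illustrative classes (not the actual MiNiFi socket hierarchy): the TLS wrapper checks the underlying socket's initialization result and propagates the failure instead of continuing in blocking mode.

```cpp
#include <string>

// Illustrative stand-in for the basic socket layer; an empty host models
// a failure such as an unresolvable hostname.
class Socket {
 public:
  // Returns 0 on success, nonzero on error.
  int initialize(const std::string& host) {
    return host.empty() ? 1 : 0;
  }
};

// The TLS layer must surface the base layer's error rather than swallow
// it and attempt to use a half-initialized socket.
class TLSSocket {
 public:
  int initialize(const std::string& host) {
    const int err = base_.initialize(host);
    if (err != 0) {
      return err;  // propagate: do not fall back to blocking-mode use
    }
    // ...TLS handshake setup would happen here, only on success...
    connected_ = true;
    return 0;
  }
  bool connected() const { return connected_; }

 private:
  Socket base_;
  bool connected_ = false;
};
```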


