[jira] [Commented] (NIFI-7145) Chained SplitText processors unable to handle files in some circumstances

2020-03-04 Thread Chris Sampson (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051837#comment-17051837
 ] 

Chris Sampson commented on NIFI-7145:
-

Re-trying this same flow in NiFi 1.11.3 appears to work, so perhaps this was 
related to NIFI-7114 (at a guess)?

It could be that there's nothing further to do on this ticket, or it may still 
be worth diagnosing further to confirm there isn't something lingering that 
would be worth addressing.

> Chained SplitText processors unable to handle files in some circumstances
> -
>
> Key: NIFI-7145
> URL: https://issues.apache.org/jira/browse/NIFI-7145
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.1
> Environment: Docker Image (apache/nifi) running in Kubernetes (1.15)
>Reporter: Chris Sampson
>Priority: Minor
> Attachments: Broken_SplitText.json, Broken_SplitText.xml, Screen Shot 
> 2020-02-13 at 17.28.58.png, nifi-app.log, test.csv.tgz
>
>
> With chained SplitText processors (NiFi 1.11.1 apache/nifi Docker image with 
> default nifi.properties, although configured to allow secure access in my 
> environment with encrypted flowfile/provenance/content repositories; I don't 
> know whether that makes a difference):
>  * ingest a 40MB CSV file with 50k lines of data (plus 1 header)
>  * SplitText - chunk the file into 10k-line segments (including the header in 
> each file)
>  * SplitText - break each row out into its own FlowFile
>  
>  The 10k chunking works fine, but then the files sit in the queue between the 
> processors forever; the second SplitText shows that it's working but never 
> actually produces anything (I can't see anything in the logs, although I 
> haven't turned on debug logging to see whether that would provide anything 
> more).
>   
>  If I reduce the chunk size to 1k then the per-row split works fine - maybe 
> some sort of issue with SplitText and/or swapping of FlowFiles/content to the 
> repositories? Similarly, trying the same with a smaller file (i.e. just 
> include the first 3 columns from the attached, but keep the 50k rows) seems 
> to work fine too.
>   
>  Example Flow/Template attached with file that breaks the flow (untar and 
> copy into /tmp). Second SplitText set to Concurrency=3 in the template, but 
> fails just the same when set to default Concurrency=1.
>   
>  SplitRecord would be an alternative (which works fine when I try it), but I 
> can't use that, as we would potentially lose data if the CSV is malformed 
> (there are more data fields in a row than defined headers - the extra fields 
> are thrown away by the Record processors, which I understand to be normal and 
> that's fine, but unfortunately I later need to ValidateRecord each of these 
> rows to check for this kind of invalidity).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051830#comment-17051830
 ] 

Joe Witt commented on NIFI-7222:


[~Matt Rodriguez] [~hdo] [~mrsook] [~jmkofoed] I believe this PR addresses the 
issues you've all faced with SFTP behavior in recent NiFi releases. It appears 
we did not call some necessary close methods, which would leak resources and on 
some systems would also make deleting or moving files problematic. I've 
verified the behavior was broken before the fix and is no longer after it, but 
I cannot replicate all the various environments/usages you have. Are you able 
to build with this patch and test?
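
For illustration only: this is not the NiFi patch itself, but a minimal sketch 
of the close-the-handle pattern described above, using the sshj client library 
that appears in the related NIFI-7177 stack trace. The host, credentials, and 
paths below are placeholders.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.RemoteFile;
import net.schmizz.sshj.sftp.SFTPClient;

public class SftpFetchSketch {
    public static void main(String[] args) throws Exception {
        final SSHClient ssh = new SSHClient();
        ssh.loadKnownHosts();
        ssh.connect("nas-01.example.com");      // placeholder host
        ssh.authPassword("user", "password");   // placeholder credentials
        try (SFTPClient sftp = ssh.newSFTPClient();
             // Closing the RemoteFile tells the server we are done with the
             // handle; skipping this leaks one descriptor per fetched file.
             RemoteFile remote = sftp.open("/Test/TEST.csv");
             InputStream in = remote.new RemoteFileInputStream();
             OutputStream out = Files.newOutputStream(Paths.get("/tmp/TEST.csv"))) {
            final byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            // With the handle released, a follow-up delete or rename of the
            // remote file is no longer blocked.
        } finally {
            ssh.disconnect();
        }
    }
}
{code}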

> FetchSFTP appears to not advise the remote system it is done with a given 
> resource resulting in too many open files
> ---
>
> Key: NIFI-7222
> URL: https://issues.apache.org/jira/browse/NIFI-7222
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hi guys,
>  
> We have an issue with the FetchSFTP processor and the maximum number of open 
> file descriptors. In short, it seems that FetchSFTP keeps the file open 
> "forever" on our Synology NAS, so we always reach the NAS's default max open 
> files limit of 1024 if we try to fetch 500'000 small 1MB files (so in fact 
> it's not possible to read the files, as everything is blocked after 1024 
> files).
>  
> We found no option to raise the max open files limit on the Synology NAS (but 
> that's not NiFi's fault 😉). We also have another Linux machine with CentOS, 
> but the behavior there isn't always exactly the same: sometimes the file 
> descriptors get closed and sometimes they don't.
>  
> Synology has no lsof command, but this is how I’ve checked it:
> user@nas-01:~$ sudo ls -l /proc//fd | wc -l
> 1024
>  
> Any comments on how we can troubleshoot the issue?
>  
> Cheers Josef
> Oh sorry, I missed one of the most important parts: we are using an 8-node 
> cluster with NiFi 1.11.3 – so perfectly up to date.
>  
> Cheers Josef
> Hi Joe
>  
> OK, as to our setup: we just bought a new, powerful Synology NAS to use as an 
> SFTP server, mainly for NiFi, to replace our current SFTP Linux machine. So 
> the NAS is empty and configured just for this single use case (read/write 
> SFTP from NiFi); nothing else is running there at the moment. The important 
> limit, per SSH/user session, is the ulimit of 1024 max open files:
>  
> root@nas-01:~# ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 62025
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 62025
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>  
>  
> On the NiFi side we are using an 8-node cluster, but it doesn't matter 
> whether I'm using the whole cluster or just one single (primary) node. It's 
> clearly visible that it's related to the number of FetchSFTP processors 
> running. So if I'm distributing the load to 8 nodes, I'm seeing 8 SFTP 
> sessions on the NAS and we can fetch 8x1024 files. I can also see on the NAS 
> the file descriptors for each file (per FetchSFTP processor = PID) that has 
> been fetched by NiFi. In my understanding these files should be fetched and 
> the file descriptors should be closed after the transfer, but this doesn't 
> seem to be the case most of the time.
>  
> As soon as I stop the FetchSFTP processor, the SFTP session seems to be 
> closed and all FDs are gone. So after a stop/start I can fetch another 1024 
> files.
>  
> So I tried to troubleshoot a bit further and here is what I’ve done in NiFi 
> and on the NAS:
>  
> [embedded screenshot omitted]
>  
> So I've done a ListSFTP and got 2880 flowfiles; they are load-balanced to 
> one single node (to simplify the test and only get 1 SFTP session on the 
> NAS). In ControlRate I'm transferring 10 flowfiles every 10 seconds to the 
> FetchSFTP, which correlates directly with the open file descriptors on my 
> NAS, as you can see below. Sometimes, and I don't know when or why, the SFTP 
> session will be closed and everything sta

[GitHub] [nifi] joewitt opened a new pull request #4115: NIFI-7222 Cleaned up API for FTP/SFTP remote file retrieval and ensur…

2020-03-04 Thread GitBox
joewitt opened a new pull request #4115: NIFI-7222 Cleaned up API for FTP/SFTP 
remote file retrieval and ensur…
URL: https://github.com/apache/nifi/pull/4115
 
 
   Fixed a resource leak in SFTP get requests that impacted the remote server 
and the ability to delete/move files. Simplified the API and updated tests for 
the API change. Verified that the functionality now works for GetFTP, FetchFTP, 
GetSFTP, and FetchSFTP using both delete and move modes. Verified that 
resources leaked and transfers at times failed prior to the fix, and that they 
do not after the fix. Verified in the client library code that we needed to be 
calling the relevant resource-cleanup methods, which explained the issue.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051808#comment-17051808
 ] 

Joe Witt commented on NIFI-7222:


Since this issue has now been reported 3 times, with different impacts in each 
case, I am merging them into a single issue and plan to resolve it here and 
ensure it lands in 1.12.0. It possibly warrants doing a 1.11.4, but we'll see.

> FetchSFTP appears to not advise the remote system it is done with a given 
> resource resulting in too many open files
> ---
>
> Key: NIFI-7222
> URL: https://issues.apache.org/jira/browse/NIFI-7222
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>
> Hi guys,
>  
> We have an issue with the FetchSFTP processor and the maximum number of open 
> file descriptors. In short, it seems that FetchSFTP keeps the file open 
> "forever" on our Synology NAS, so we always reach the NAS's default max open 
> files limit of 1024 if we try to fetch 500'000 small 1MB files (so in fact 
> it's not possible to read the files, as everything is blocked after 1024 
> files).
>  
> We found no option to raise the max open files limit on the Synology NAS (but 
> that's not NiFi's fault 😉). We also have another Linux machine with CentOS, 
> but the behavior there isn't always exactly the same: sometimes the file 
> descriptors get closed and sometimes they don't.
>  
> Synology has no lsof command, but this is how I’ve checked it:
> user@nas-01:~$ sudo ls -l /proc//fd | wc -l
> 1024
>  
> Any comments on how we can troubleshoot the issue?
>  
> Cheers Josef
> Oh sorry, I missed one of the most important parts: we are using an 8-node 
> cluster with NiFi 1.11.3 – so perfectly up to date.
>  
> Cheers Josef
> Hi Joe
>  
> OK, as to our setup: we just bought a new, powerful Synology NAS to use as an 
> SFTP server, mainly for NiFi, to replace our current SFTP Linux machine. So 
> the NAS is empty and configured just for this single use case (read/write 
> SFTP from NiFi); nothing else is running there at the moment. The important 
> limit, per SSH/user session, is the ulimit of 1024 max open files:
>  
> root@nas-01:~# ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 62025
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 62025
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>  
>  
> On the NiFi side we are using an 8-node cluster, but it doesn't matter 
> whether I'm using the whole cluster or just one single (primary) node. It's 
> clearly visible that it's related to the number of FetchSFTP processors 
> running. So if I'm distributing the load to 8 nodes, I'm seeing 8 SFTP 
> sessions on the NAS and we can fetch 8x1024 files. I can also see on the NAS 
> the file descriptors for each file (per FetchSFTP processor = PID) that has 
> been fetched by NiFi. In my understanding these files should be fetched and 
> the file descriptors should be closed after the transfer, but this doesn't 
> seem to be the case most of the time.
>  
> As soon as I stop the FetchSFTP processor, the SFTP session seems to be 
> closed and all FDs are gone. So after a stop/start I can fetch another 1024 
> files.
>  
> So I tried to troubleshoot a bit further and here is what I’ve done in NiFi 
> and on the NAS:
>  
> [embedded screenshot omitted]
>  
> So I've done a ListSFTP and got 2880 flowfiles; they are load-balanced to 
> one single node (to simplify the test and only get 1 SFTP session on the 
> NAS). In ControlRate I'm transferring 10 flowfiles every 10 seconds to the 
> FetchSFTP, which correlates directly with the open file descriptors on my 
> NAS, as you can see below. Sometimes, and I don't know when or why, the SFTP 
> session will be closed and everything starts from scratch (that didn't happen 
> here) without any notice on the NiFi side. As you can see, the FDs grow by 
> +10 every 10 seconds, and if I check the path/filename of the open FDs I see 
> that they are the ones I've fetched.
>  
> root@nas-01:~# ps aux | grep sftp
> root  1740  0.5 

[jira] [Resolved] (NIFI-7216) FetchSFTP can't delete or move files upon completion

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-7216.

Resolution: Duplicate

> FetchSFTP can't delete or move files upon completion
> 
>
> Key: NIFI-7216
> URL: https://issues.apache.org/jira/browse/NIFI-7216
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Matt Rodriguez
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>
> Possibly similar to NIFI-7177 as they're not able to delete files with the 
> GetSFTP processor, but they're getting a different error than I am.
>  
> I'm using the FetchSFTP processor to get data from a third-party SFTP 
> server; I don't have a lot of details on the version or configuration on 
> their end. I have noticed that neither the "Move File" nor the "Delete File" 
> Completion Strategy options work.
> When using "Delete File" as the Completion Strategy I get no alert/bulletin 
> from NiFi to show that anything went wrong; however, the original file has 
> not been deleted. I can manually delete the file using an SFTP client with 
> the same username/password that NiFi connects as.
> When using "Move File" as the Completion Strategy I get this warning message 
> when NiFi tries to move the file after completion: 
> {code:java}
> 2020-03-03 13:59:54,375 WARN [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.FetchSFTP 
> FetchSFTP[id=0344354b-3a49-316e-a571-adcca7b3e70e] Successfully fetched the 
> content for 
> StandardFlowFileRecord[uuid=dea58017-8f3f-4991-91a3-1ff8ca167b6c,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1583243711464-8, container=default, 
> section=8], offset=4625, length=7672936],offset=0,name=TEST.csv,size=7672936] 
> from ftp1.X.com:22/Test/TEST.csv but failed to rename the remote file due 
> to java.io.FileNotFoundException: No such file or directory: {code}
> As with deletion, I am able to manually move (rename) the file using an SFTP 
> client with the same username/password that NiFi connects as.
> However, I will add that I'm only able to delete or move the file PRIOR to 
> NiFi fetching the file. If I try to do it immediately after NiFi has fetched 
> the file, I'll get the exact same "no such file or directory" error from the 
> SFTP server when using my local client. If I wait some arbitrary amount of 
> time after NiFi has fetched the file, I am then able to delete or move the 
> file.
> I've also noticed the below sequence of events:
>  # NiFi lists the files on the SFTP server
>  # NiFi fetches the files on the SFTP server
>  # NiFi attempts to delete or move the files on the SFTP server and fails
>  # I immediately attempt to delete or move the files on the SFTP server and 
> fail
>  # I stop the FetchSFTP processor in NiFi
>  # I immediately attempt to delete or move the files on the SFTP server and 
> it succeeds
> This leads me to believe that there is some sort of locking behavior 
> happening where the fetch operation keeps the file locked for some arbitrary 
> amount of time or until the fetch processor is stopped, which is preventing 
> any other operation from taking place on the file.
> Interestingly enough, I was able to get both the "Delete File" and "Move 
> File" Completion Strategies to work when I spun up a basic SFTP server on my 
> own. It sounds like there is something specific about how locks are handled 
> by this third-party SFTP server I'm connecting to and how NiFi uses (or 
> re-uses) SFTP clients/connections.
> As mentioned above, I unfortunately don't have a lot of information about 
> this SFTP server I'm connecting to, the vendor has been very tight-lipped 
> about their configuration for some reason.
> If there's anything else you need me to provide, please let me know.
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7177) getSFTP can't delete a original file.

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-7177.

Resolution: Duplicate

> getSFTP can't delete a original file.
> -
>
> Key: NIFI-7177
> URL: https://issues.apache.org/jira/browse/NIFI-7177
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.11.0, 1.11.1, 1.11.2
>Reporter: Sook Plengchan
>Assignee: Joe Witt
>Priority: Critical
> Fix For: 1.12.0
>
>
> Below is the error message.
> 
> 2020-02-21 17:52:47,463 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=615e1b8c-0170-1000-a265-547a6116dc45] Failed to remove remote file 
> /TEST/test.ssm due to java.io.IOException: Failed to delete remote file 
> /TEST/test.ssm; deleting local copy: 
> java.io.IOException: Failed to delete remote file /TEST/test.ssm
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:394)
>         at 
> org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:211)
>         at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>         at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>         at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>         at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>         at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>         at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
>         at net.schmizz.sshj.sftp.Response.error(Response.java:140)
>         at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:133)
>         at 
> net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:125)
>         at net.schmizz.sshj.sftp.SFTPEngine.remove(SFTPEngine.java:205)
>         at net.schmizz.sshj.sftp.SFTPClient.rm(SFTPClient.java:125)
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:386)
>         ... 12 common frames omitted
> 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7216) FetchSFTP can't delete or move files upon completion

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7216:
---
Fix Version/s: 1.12.0

> FetchSFTP can't delete or move files upon completion
> 
>
> Key: NIFI-7216
> URL: https://issues.apache.org/jira/browse/NIFI-7216
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Matt Rodriguez
>Priority: Major
> Fix For: 1.12.0
>
>
> Possibly similar to NIFI-7177 as they're not able to delete files with the 
> GetSFTP processor, but they're getting a different error than I am.
>  
> I'm using the FetchSFTP processor to get data from a third-party SFTP 
> server; I don't have a lot of details on the version or configuration on 
> their end. I have noticed that neither the "Move File" nor the "Delete File" 
> Completion Strategy options work.
> When using "Delete File" as the Completion Strategy I get no alert/bulletin 
> from NiFi to show that anything went wrong; however, the original file has 
> not been deleted. I can manually delete the file using an SFTP client with 
> the same username/password that NiFi connects as.
> When using "Move File" as the Completion Strategy I get this warning message 
> when NiFi tries to move the file after completion: 
> {code:java}
> 2020-03-03 13:59:54,375 WARN [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.FetchSFTP 
> FetchSFTP[id=0344354b-3a49-316e-a571-adcca7b3e70e] Successfully fetched the 
> content for 
> StandardFlowFileRecord[uuid=dea58017-8f3f-4991-91a3-1ff8ca167b6c,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1583243711464-8, container=default, 
> section=8], offset=4625, length=7672936],offset=0,name=TEST.csv,size=7672936] 
> from ftp1.X.com:22/Test/TEST.csv but failed to rename the remote file due 
> to java.io.FileNotFoundException: No such file or directory: {code}
> As with deletion, I am able to manually move (rename) the file using an SFTP 
> client with the same username/password that NiFi connects as.
> However, I will add that I'm only able to delete or move the file PRIOR to 
> NiFi fetching the file. If I try to do it immediately after NiFi has fetched 
> the file, I'll get the exact same "no such file or directory" error from the 
> SFTP server when using my local client. If I wait some arbitrary amount of 
> time after NiFi has fetched the file, I am then able to delete or move the 
> file.
> I've also noticed the below sequence of events:
>  # NiFi lists the files on the SFTP server
>  # NiFi fetches the files on the SFTP server
>  # NiFi attempts to delete or move the files on the SFTP server and fails
>  # I immediately attempt to delete or move the files on the SFTP server and 
> fail
>  # I stop the FetchSFTP processor in NiFi
>  # I immediately attempt to delete or move the files on the SFTP server and 
> it succeeds
> This leads me to believe that there is some sort of locking behavior 
> happening where the fetch operation keeps the file locked for some arbitrary 
> amount of time or until the fetch processor is stopped, which is preventing 
> any other operation from taking place on the file.
> Interestingly enough, I was able to get both the "Delete File" and "Move 
> File" Completion Strategies to work when I spun up a basic SFTP server on my 
> own. It sounds like there is something specific about how locks are handled 
> by this third-party SFTP server I'm connecting to and how NiFi uses (or 
> re-uses) SFTP clients/connections.
> As mentioned above, I unfortunately don't have a lot of information about 
> this SFTP server I'm connecting to, the vendor has been very tight-lipped 
> about their configuration for some reason.
> If there's anything else you need me to provide, please let me know.
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7177) getSFTP can't delete a original file.

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt reassigned NIFI-7177:
--

Assignee: Joe Witt

> getSFTP can't delete a original file.
> -
>
> Key: NIFI-7177
> URL: https://issues.apache.org/jira/browse/NIFI-7177
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.11.0, 1.11.1, 1.11.2
>Reporter: Sook Plengchan
>Assignee: Joe Witt
>Priority: Critical
> Fix For: 1.12.0
>
>
> Below is the error message.
> 
> 2020-02-21 17:52:47,463 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=615e1b8c-0170-1000-a265-547a6116dc45] Failed to remove remote file 
> /TEST/test.ssm due to java.io.IOException: Failed to delete remote file 
> /TEST/test.ssm; deleting local copy: 
> java.io.IOException: Failed to delete remote file /TEST/test.ssm
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:394)
>         at 
> org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:211)
>         at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>         at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>         at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>         at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>         at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>         at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
>         at net.schmizz.sshj.sftp.Response.error(Response.java:140)
>         at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:133)
>         at 
> net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:125)
>         at net.schmizz.sshj.sftp.SFTPEngine.remove(SFTPEngine.java:205)
>         at net.schmizz.sshj.sftp.SFTPClient.rm(SFTPClient.java:125)
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:386)
>         ... 12 common frames omitted
> 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7222:
---
Fix Version/s: 1.12.0

> FetchSFTP appears to not advise the remote system it is done with a given 
> resource resulting in too many open files
> ---
>
> Key: NIFI-7222
> URL: https://issues.apache.org/jira/browse/NIFI-7222
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>
> Hi guys,
>  
> We have an issue with the FetchSFTP processor and the maximum number of open 
> file descriptors. In short, it seems that FetchSFTP keeps the file open 
> "forever" on our Synology NAS, so we always reach the NAS's default max open 
> files limit of 1024 if we try to fetch 500'000 small 1MB files (so in fact 
> it's not possible to read the files, as everything is blocked after 1024 
> files).
>  
> We found no option to raise the max open files limit on the Synology NAS (but 
> that's not NiFi's fault 😉). We also have another Linux machine with CentOS, 
> but the behavior there isn't always exactly the same: sometimes the file 
> descriptors get closed and sometimes they don't.
>  
> Synology has no lsof command, but this is how I’ve checked it:
> user@nas-01:~$ sudo ls -l /proc//fd | wc -l
> 1024
>  
> Any comments on how we can troubleshoot the issue?
>  
> Cheers Josef
> Oh sorry, I missed one of the most important parts: we are using an 8-node 
> cluster with NiFi 1.11.3 – so perfectly up to date.
>  
> Cheers Josef
> Hi Joe
>  
> OK, as to our setup: we just bought a new, powerful Synology NAS to use as an 
> SFTP server, mainly for NiFi, to replace our current SFTP Linux machine. So 
> the NAS is empty and configured just for this single use case (read/write 
> SFTP from NiFi); nothing else is running there at the moment. The important 
> limit, per SSH/user session, is the ulimit of 1024 max open files:
>  
> root@nas-01:~# ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 62025
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 62025
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>  
>  
> On the NiFi side we are using an 8-node cluster, but it doesn't matter 
> whether I'm using the whole cluster or just one single (primary) node. It's 
> clearly visible that it's related to the number of FetchSFTP processors 
> running. So if I'm distributing the load to 8 nodes, I'm seeing 8 SFTP 
> sessions on the NAS and we can fetch 8x1024 files. I can also see on the NAS 
> the file descriptors for each file (per FetchSFTP processor = PID) that has 
> been fetched by NiFi. In my understanding these files should be fetched and 
> the file descriptors should be closed after the transfer, but this doesn't 
> seem to be the case most of the time.
>  
> As soon as I stop the FetchSFTP processor, the SFTP session seems to be 
> closed and all FDs are gone. So after a stop/start I can fetch another 1024 
> files.
>  
> So I tried to troubleshoot a bit further and here is what I’ve done in NiFi 
> and on the NAS:
>  
> [embedded screenshot omitted]
>  
> So I've done a ListSFTP and got 2880 flowfiles; they are load-balanced to 
> one single node (to simplify the test and only get 1 SFTP session on the 
> NAS). In ControlRate I'm transferring 10 flowfiles every 10 seconds to the 
> FetchSFTP, which correlates directly with the open file descriptors on my 
> NAS, as you can see below. Sometimes, and I don't know when or why, the SFTP 
> session will be closed and everything starts from scratch (that didn't happen 
> here) without any notice on the NiFi side. As you can see, the FDs grow by 
> +10 every 10 seconds, and if I check the path/filename of the open FDs I see 
> that they are the ones I've fetched.
>  
> root@nas-01:~# ps aux | grep sftp
> root  1740  0.5  0.0 240848  8584 ?Ss   15:01   0:00 sshd: 
> ldr@internal-sftp
> root  1753  0.0  0.0  23144  2360 pts/2S+   15:01   0:00 grep 
> --color=auto sftp
> root 15520  0.0  0.0 241088  9252 ?Ss   13:38   0:02 sshd: 
> ldr@internal-sftp
> root@nas-

[jira] [Updated] (NIFI-7177) getSFTP can't delete a original file.

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7177:
---
Fix Version/s: 1.12.0

> getSFTP can't delete a original file.
> -
>
> Key: NIFI-7177
> URL: https://issues.apache.org/jira/browse/NIFI-7177
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.11.0, 1.11.1, 1.11.2
>Reporter: Sook Plengchan
>Priority: Critical
> Fix For: 1.12.0
>
>
> Below is the error message.
> 
> 2020-02-21 17:52:47,463 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=615e1b8c-0170-1000-a265-547a6116dc45] Failed to remove remote file 
> /TEST/test.ssm due to java.io.IOException: Failed to delete remote file 
> /TEST/test.ssm; deleting local copy: 
> java.io.IOException: Failed to delete remote file /TEST/test.ssm
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:394)
>         at 
> org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:211)
>         at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>         at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>         at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>         at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>         at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>         at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
>         at net.schmizz.sshj.sftp.Response.error(Response.java:140)
>         at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:133)
>         at 
> net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:125)
>         at net.schmizz.sshj.sftp.SFTPEngine.remove(SFTPEngine.java:205)
>         at net.schmizz.sshj.sftp.SFTPClient.rm(SFTPClient.java:125)
>         at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.deleteFile(SFTPTransfer.java:386)
>         ... 12 common frames omitted
> 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7216) FetchSFTP can't delete or move files upon completion

2020-03-04 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt reassigned NIFI-7216:
--

Assignee: Joe Witt

> FetchSFTP can't delete or move files upon completion
> 
>
> Key: NIFI-7216
> URL: https://issues.apache.org/jira/browse/NIFI-7216
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Matt Rodriguez
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>
> Possibly similar to NIFI-7177 as they're not able to delete files with the 
> GetSFTP processor, but they're getting a different error than I am.
>  
> I'm using the FetchSFTP processor to get data from a third-party SFTP 
> server; I don't have a lot of details on the version or configuration on 
> their end. I have noticed that neither the "Move File" nor the "Delete File" 
> Completion Strategy options work.
> When using "Delete File" as the Completion Strategy I get no alert/bulletin 
> from NiFi to show that anything went wrong; however, the original file has 
> not been deleted. I can manually delete the file using an SFTP client with 
> the same username/password that NiFi connects as.
> When using "Move File" as the Completion Strategy I get this warning message 
> when NiFi tries to move the file after completion: 
> {code:java}
> 2020-03-03 13:59:54,375 WARN [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.standard.FetchSFTP 
> FetchSFTP[id=0344354b-3a49-316e-a571-adcca7b3e70e] Successfully fetched the 
> content for 
> StandardFlowFileRecord[uuid=dea58017-8f3f-4991-91a3-1ff8ca167b6c,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1583243711464-8, container=default, 
> section=8], offset=4625, length=7672936],offset=0,name=TEST.csv,size=7672936] 
> from ftp1.X.com:22/Test/TEST.csv but failed to rename the remote file due 
> to java.io.FileNotFoundException: No such file or directory: {code}
> As with deletion, I am able to manually move (rename) the file using an SFTP 
> client with the same username/password that NiFi connects as.
> However, I will add that I'm only able to delete or move the file PRIOR to 
> NiFi fetching the file. If I try to do it immediately after NiFi has fetched 
> the file, I'll get the exact same "no such file or directory" error from the 
> SFTP server when using my local client. If I wait some arbitrary amount of 
> time after NiFi has fetched the file, I am then able to delete or move the 
> file.
> I've also noticed the below sequence of events:
>  # NiFi lists the files on the SFTP server
>  # NiFi fetches the files on the SFTP server
>  # NiFi attempts to delete or move the files on the SFTP server and fails
>  # I immediately attempt to delete or move the files on the SFTP server and 
> fail
>  # I stop the FetchSFTP processor in NiFi
>  # I immediately attempt to delete or move the files on the SFTP server and 
> it succeeds
> This leads me to believe that there is some sort of locking behavior 
> happening where the fetch operation keeps the file locked for some arbitrary 
> amount of time or until the fetch processor is stopped, which is preventing 
> any other operation from taking place on the file.
> Interestingly enough, I was able to get both the "Delete File" and "Move 
> File" Completion Strategies to work when I spun up a basic SFTP server on my 
> own. It sounds like there is something specific about how locks are handled 
> by this third-party SFTP server I'm connecting to and how NiFi uses (or 
> re-uses) SFTP clients/connections.
> As mentioned above, I unfortunately don't have a lot of information about 
> this SFTP server I'm connecting to, the vendor has been very tight-lipped 
> about their configuration for some reason.
> If there's anything else you need me to provide, please let me know.
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] Xeinn commented on issue #4104: NIFI-7159

2020-03-04 Thread GitBox
Xeinn commented on issue #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#issuecomment-595001978
 
 
   Sorry, it's 3am here; I just read the first line and replied... I missed 
the rest of the content in the email (I was on my phone) and will read through 
it in the morning.
   
   Regards
   
   Chris
   
   On 5 Mar 2020 12:06 am, Mike  wrote:
   
   @MikeThomsen commented on this pull request.
   
   Looks pretty good, but there is definitely some scope creep that will 
require others' input.
   
   
   
   In 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java:
   
   > @@ -669,6 +669,9 @@ private void writeValue(final JsonGenerator generator, 
final Object value, final
}
break;
}
   +case DECIMAL:
   
   
   I think this should actually be a double because that's the maximum that 
Elasticsearch supports. See here:
   
   
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html
   
   
   
   In 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java:
   
   > @@ -341,6 +341,8 @@ protected PutFlowFile createPut(ProcessSession 
session, ProcessContext context,
case BOOLEAN:
retVal = 
clientService.toBytes(record.getAsBoolean(field));
break;
   +case DECIMAL:
   +// Decimal to be treated as the same as double
   
   
   It should be broken down into a byte array like the other types.
   
   
   
   In 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:
   
   > @@ -104,6 +105,10 @@ public static Object convertToORCObject(TypeInfo 
typeInfo, Object o, final boole
if (o instanceof Double) {
return new DoubleWritable((double) o);
}
   +// Map BigDecimal to a Double type - this should be improved to 
map to Hive Decimal type
   +if (o instanceof BigDecimal) {
   
   
   There were some runtime issues with unit tests related to Orc. 
@mattyb149
 
@bbende
 
@ijokarumawak
 do y'all have any insight into this ORC conversion?
   
   
   
   In 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-controller-service/src/main/java/org/apache/nifi/controller/kudu/KuduLookupService.java

[GitHub] [nifi] Xeinn commented on issue #4104: NIFI-7159

2020-03-04 Thread GitBox
Xeinn commented on issue #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#issuecomment-595001438
 
 
   I had thought that might be the case with a change to the core types.
   
   I think there will still be some work to do to allow for good backward 
compatibility.
   
   What should the next steps be?
   
   Regards
   
   Chris
   
   On 5 Mar 2020 12:06 am, Mike  wrote:
   
   @MikeThomsen commented on this pull request.
   
   Looks pretty good, but there is definitely some scope creep that will 
require others' input.
   
   
   
   In 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java:
   
   > @@ -669,6 +669,9 @@ private void writeValue(final JsonGenerator generator, 
final Object value, final
}
break;
}
   +case DECIMAL:
   
   
   I think this should actually be a double because that's the maximum that 
Elasticsearch supports. See here:
   
   
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html
   
   
   
   In 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java:
   
   > @@ -341,6 +341,8 @@ protected PutFlowFile createPut(ProcessSession 
session, ProcessContext context,
case BOOLEAN:
retVal = 
clientService.toBytes(record.getAsBoolean(field));
break;
   +case DECIMAL:
   +// Decimal to be treated as the same as double
   
   
   It should be broken down into a byte array like the other types.
   
   
   
   In 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:
   
   > @@ -104,6 +105,10 @@ public static Object convertToORCObject(TypeInfo 
typeInfo, Object o, final boole
if (o instanceof Double) {
return new DoubleWritable((double) o);
}
   +// Map BigDecimal to a Double type - this should be improved to 
map to Hive Decimal type
   +if (o instanceof BigDecimal) {
   
   
   There were some runtime issues with unit tests related to Orc. 
@mattyb149
 
@bbende
 
@ijokarumawak
 do y'all have any insight into this ORC conversion?
   
   
   
   In 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-controller-service/src/main/java/org/apache/nifi/controller/kudu/KuduLookupService.java

[GitHub] [nifi] mattyb149 opened a new pull request #4114: NIFI-7055: Removed unit test that is now covered by ListValidator

2020-03-04 Thread GitBox
mattyb149 opened a new pull request #4114: NIFI-7055: Removed unit test that is 
now covered by ListValidator
URL: https://github.com/apache/nifi/pull/4114
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   The check for the update keys being empty has been overcome by events; the 
ListValidator will now check for empty entries (via #4012)
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Reopened] (NIFI-7055) createListValidator returns valid for empty list with "," input

2020-03-04 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-7055:


Reopening due to an issue in a PutCassandraRecord unit test. Instead of 
testing that the PutCassandraRecord (PCR) code finds the error, the processor 
will now be invalid, so we can remove that test.

> createListValidator returns valid for empty list with "," input
> ---
>
> Key: NIFI-7055
> URL: https://issues.apache.org/jira/browse/NIFI-7055
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> from Slack:
> 
> "I'm looking at the createListValidator, and to my surprise passing in a list 
> of (essentially) two empty elements "," validates, while a totally empty 
> string "" does not. Apparently due to some underlying behavior of 
> String.split."
> The string "," does return a String[0] from split. This should fail 
> validation as if there were no elements, as null, "", and " " possibly do. 
> But that kind of goes against, or doesn't consider, the ignore-empty-entries 
> behavior. I think the difference is whether or not you consider "," to be a 
> list of two empty elements or an empty list.
> The current implementation with String.split() will produce an empty list. 
> Is that correct?
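
For reference, a minimal plain-JDK sketch (not NiFi code) of the String.split 
behavior discussed above: split drops trailing empty strings, so "," yields an 
empty array while "" yields a single empty element.

{code:java}
public class SplitBehaviorDemo {
    public static void main(String[] args) {
        // "," splits into two empty strings, and trailing empty strings are
        // dropped, so the result is an empty array -- it looks like an empty list.
        System.out.println(",".split(",").length);    // 0

        // "" contains no delimiter, so split returns the original (empty)
        // string as a single element.
        System.out.println("".split(",").length);     // 1

        // Trailing empty elements are dropped here too; only "a" survives.
        System.out.println("a,,".split(",").length);  // 1

        // A leading empty element is kept.
        System.out.println(",a".split(",").length);   // 2
    }
}
{code}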



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] pvillard31 opened a new pull request #4113: NIFI-7229 - Upgrade jackson-databind direct dependencies

2020-03-04 Thread GitBox
pvillard31 opened a new pull request #4113: NIFI-7229 - Upgrade 
jackson-databind direct dependencies
URL: https://github.com/apache/nifi/pull/4113
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-7229) Upgrade jackson-databind direct dependencies

2020-03-04 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7229:
-
Status: Patch Available  (was: Open)

> Upgrade jackson-databind direct dependencies
> 
>
> Key: NIFI-7229
> URL: https://issues.apache.org/jira/browse/NIFI-7229
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A new version of com.fasterxml.jackson.core:jackson-databind is available. 
> This Jira is to update com.fasterxml.jackson.core:jackson-databind to version 
> 2.9.10.3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7229) Upgrade jackson-databind direct dependencies

2020-03-04 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-7229:


 Summary: Upgrade jackson-databind direct dependencies
 Key: NIFI-7229
 URL: https://issues.apache.org/jira/browse/NIFI-7229
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


A new version of com.fasterxml.jackson.core:jackson-databind is available. This 
Jira is to update com.fasterxml.jackson.core:jackson-databind to version 
2.9.10.3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] pvillard31 edited a comment on issue #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
pvillard31 edited a comment on issue #4088: NIFI-7197 - In-place replacement in 
LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#issuecomment-594952963
 
 
   Thanks for the review @markap14 - I pushed a commit to address your remarks 
and added an ``additionalDetails.html`` page to document the processor with 
examples. Feel free to comment/suggest additional modifications. Thanks again!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on a change in pull request #4104: NIFI-7159

2020-03-04 Thread GitBox
MikeThomsen commented on a change in pull request #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#discussion_r388006141
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
 ##
 @@ -669,6 +669,9 @@ private void writeValue(final JsonGenerator generator, 
final Object value, final
 }
 break;
 }
+case DECIMAL:
 
 Review comment:
   I think this should actually be a double because that's the maximum that 
Elasticsearch supports. See here:
   
   
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on a change in pull request #4104: NIFI-7159

2020-03-04 Thread GitBox
MikeThomsen commented on a change in pull request #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#discussion_r388006930
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java
 ##
 @@ -104,6 +105,10 @@ public static Object convertToORCObject(TypeInfo 
typeInfo, Object o, final boole
 if (o instanceof Double) {
 return new DoubleWritable((double) o);
 }
+// Map BigDecimal to a Double type - this should be improved to 
map to Hive Decimal type
+if (o instanceof BigDecimal) {
 
 Review comment:
   There were some runtime issues with unit tests related to Orc. @mattyb149 
@bbende @ijokarumawak do y'all have any insight into this ORC conversion?
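
   A minimal sketch of the interim mapping under discussion, deliberately lossy; the import assumes Hadoop's `DoubleWritable`, and a proper Hive DECIMAL mapping would replace this:

```java
import java.math.BigDecimal;
import org.apache.hadoop.io.DoubleWritable;

final class BigDecimalToOrc {
    // BigDecimal values beyond double precision will be rounded here
    static DoubleWritable toDoubleWritable(final BigDecimal value) {
        return new DoubleWritable(value.doubleValue());
    }
}
```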


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on a change in pull request #4104: NIFI-7159

2020-03-04 Thread GitBox
MikeThomsen commented on a change in pull request #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#discussion_r388006605
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ##
 @@ -341,6 +341,8 @@ protected PutFlowFile createPut(ProcessSession session, 
ProcessContext context,
 case BOOLEAN:
 retVal = clientService.toBytes(record.getAsBoolean(field));
 break;
+case DECIMAL:
+// Decimal to be treated as the same as double
 
 Review comment:
   It should be broken down into a byte array like the other types.
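
   A hedged sketch of one way to break a DECIMAL down into bytes without first coercing it to double; the encoding below is illustrative, not the client service's own format:

```java
import java.math.BigDecimal;
import java.nio.ByteBuffer;

final class DecimalToBytes {
    // keep the scale alongside the unscaled value so the decimal can be reconstructed losslessly
    static byte[] toBytes(final BigDecimal value) {
        final byte[] unscaled = value.unscaledValue().toByteArray();
        return ByteBuffer.allocate(4 + unscaled.length)
                .putInt(value.scale())
                .put(unscaled)
                .array();
    }
}
```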


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on a change in pull request #4104: NIFI-7159

2020-03-04 Thread GitBox
MikeThomsen commented on a change in pull request #4104: NIFI-7159
URL: https://github.com/apache/nifi/pull/4104#discussion_r388007299
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-controller-service/src/main/java/org/apache/nifi/controller/kudu/KuduLookupService.java
 ##
 @@ -317,7 +317,7 @@ private RecordSchema kuduSchemaToNiFiSchema(Schema 
kuduTableSchema, List
 case BINARY:
 case STRING:
 case DECIMAL:
-fields.add(new RecordField(cs.getName(), 
RecordFieldType.STRING.getDataType()));
+fields.add(new RecordField(cs.getName(), 
RecordFieldType.DECIMAL.getDataType()));
 
 Review comment:
   I don't know anything about Kudu, so I may put this ticket on hold and ask 
for input from nifi-dev on a few of these other components (e.g. Hive)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] pvillard31 commented on issue #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
pvillard31 commented on issue #4088: NIFI-7197 - In-place replacement in 
LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#issuecomment-594952963
 
 
   Thanks for the review @markap14 - I pushed a commit to address your remarks 
and added an ``additionalDetails.html`` to document the processor with 
examples. Feel free to comment/suggest additional modifications. Thanks again!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7055) createListValidator returns valid for empty list with "," input

2020-03-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051689#comment-17051689
 ] 

ASF subversion and git services commented on NIFI-7055:
---

Commit f1c6e92df58bf24eb5199cdcb1784cbc438946db in nifi's branch 
refs/heads/master from Otto Fowler
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f1c6e92 ]

NIFI-7055 handle empty split evaluations, which contain only ,

add explicit test for " , "

updated with counting validator

Signed-off-by: Matthew Burgess 

This closes #4012


> createListValidator returns valid for empty list with "," input
> ---
>
> Key: NIFI-7055
> URL: https://issues.apache.org/jira/browse/NIFI-7055
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> from Slack:
> 
> "I'm looking at the createListValidator, and to my surprise passing in a list 
> of (essentially) two empty elements "," validates, while a totally empty 
> string "" does not. Apparently due to some underlying behavior of 
> String.split."
> The string "," does return a String[0] from split. This should fail 
> validation as if here were no elements as null, "", " " do possibly.  
> But that kind of goes against or doesn't consider the ignore empty entries.  
> I think the difference is whether or not you consider "," to be a list of two 
> empty elements or an empty list.
> The current implementation with String.spilt() will produce an empty list.  
> Is that correct?
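
A quick plain-Java check of the split behaviour described above (no NiFi dependencies involved):

{code:java}
public class SplitDemo {
    public static void main(String[] args) {
        System.out.println(",".split(",").length);   // 0 -- trailing empty strings are dropped
        System.out.println(" , ".split(",").length); // 2 -- " " and " "
        System.out.println("".split(",").length);    // 1 -- a single empty element
    }
}
{code}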



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7055) createListValidator returns valid for empty list with "," input

2020-03-04 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess resolved NIFI-7055.

Fix Version/s: 1.12.0
   Resolution: Fixed

> createListValidator returns valid for empty list with "," input
> ---
>
> Key: NIFI-7055
> URL: https://issues.apache.org/jira/browse/NIFI-7055
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> from Slack:
> 
> "I'm looking at the createListValidator, and to my surprise passing in a list 
> of (essentially) two empty elements "," validates, while a totally empty 
> string "" does not. Apparently due to some underlying behavior of 
> String.split."
> The string "," does return a String[0] from split. This should fail 
> validation as if here were no elements as null, "", " " do possibly.  
> But that kind of goes against or doesn't consider the ignore empty entries.  
> I think the difference is whether or not you consider "," to be a list of two 
> empty elements or an empty list.
> The current implementation with String.spilt() will produce an empty list.  
> Is that correct?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4012: NIFI-7055 createListValidator should treat ", " as invalid

2020-03-04 Thread GitBox
asfgit closed pull request #4012: NIFI-7055 createListValidator should treat 
"," as invalid
URL: https://github.com/apache/nifi/pull/4012
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mattyb149 commented on issue #4012: NIFI-7055 createListValidator should treat ", " as invalid

2020-03-04 Thread GitBox
mattyb149 commented on issue #4012: NIFI-7055 createListValidator should treat 
"," as invalid
URL: https://github.com/apache/nifi/pull/4012#issuecomment-594936311
 
 
   +1 LGTM, ran contrib-check and tested with various list entries. Thanks for 
the fix! Merging to master


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7228) Provide Archetype for Processor and controller service api pattern

2020-03-04 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051676#comment-17051676
 ] 

Matt Burgess commented on NIFI-7228:


A ControllerServiceLookup archetype would be nice too, it would use flowfile 
attributes to select an existing impl based on whatever variables are passed to 
it. There are examples in current NiFi code such as DBCPConnectionPoolLookup 
and RecordSinkServiceLookup.
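
A framework-free sketch of that lookup pattern; the interface, attribute name, and class names below are illustrative stand-ins, not the NiFi ControllerService API:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class LookupPatternDemo {

    interface RecordSink {                        // stand-in for the shared service interface
        void send(String payload);
    }

    static class SinkLookup {
        private final Map<String, RecordSink> registeredSinks;
        private final String attributeName;

        SinkLookup(Map<String, RecordSink> registeredSinks, String attributeName) {
            this.registeredSinks = registeredSinks;
            this.attributeName = attributeName;
        }

        // choose the concrete implementation based on a flowfile attribute value
        RecordSink select(Map<String, String> flowFileAttributes) {
            RecordSink sink = registeredSinks.get(flowFileAttributes.get(attributeName));
            if (sink == null) {
                throw new IllegalArgumentException("No sink registered for attribute " + attributeName);
            }
            return sink;
        }
    }

    public static void main(String[] args) {
        Map<String, RecordSink> sinks = new HashMap<>();
        sinks.put("console", payload -> System.out.println("console: " + payload));
        sinks.put("discard", payload -> { /* drop silently */ });

        SinkLookup lookup = new SinkLookup(sinks, "record.sink.name");

        Map<String, String> attributes = new HashMap<>();
        attributes.put("record.sink.name", "console");
        lookup.select(attributes).send("hello");
    }
}
{code}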

> Provide Archetype for Processor and controller service api pattern
> --
>
> Key: NIFI-7228
> URL: https://issues.apache.org/jira/browse/NIFI-7228
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Priority: Major
>
> It is now very common for NiFi processors to be implemented as a single 
> processor that uses a controller service interface, with multiple controller 
> services providing the external or vendor-specific implementations.
> While this pattern is recommended to developers, an archetype would be great 
> for providing a smaller-scale, working demonstration and starting point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7228) Provide Archetype for Processor and controller service api pattern

2020-03-04 Thread Otto Fowler (Jira)
Otto Fowler created NIFI-7228:
-

 Summary: Provide Archetype for Processor and controller service 
api pattern
 Key: NIFI-7228
 URL: https://issues.apache.org/jira/browse/NIFI-7228
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Otto Fowler


It is now very common for NiFi processors to be implemented as a single 
processor that uses a controller service interface, with multiple controller 
services providing the external or vendor-specific implementations.

While this pattern is recommended to developers, an archetype would be great for 
providing a smaller-scale, working demonstration and starting point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFIREG-368) Registry breaks when key password and keystore password differ

2020-03-04 Thread Justin Rittenhouse (Jira)
Justin Rittenhouse created NIFIREG-368:
--

 Summary: Registry breaks when key password and keystore password 
differ
 Key: NIFIREG-368
 URL: https://issues.apache.org/jira/browse/NIFIREG-368
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Justin Rittenhouse


(Running via Docker)

If nifi.registry.security.keystorePasswd and nifi.registry.security.keyPasswd 
differ, the registry fails to boot.  Running via Docker, the container shuts 
down.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alopresto commented on a change in pull request #4111: NIFI-7119 Implement boundary checking for Argon2 cost parameters

2020-03-04 Thread GitBox
alopresto commented on a change in pull request #4111: NIFI-7119 Implement 
boundary checking for Argon2 cost parameters
URL: https://github.com/apache/nifi/pull/4111#discussion_r387936873
 
 

 ##
 File path: 
nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/crypto/Argon2SecureHasher.java
 ##
 @@ -53,11 +53,14 @@
 private final int iterations;
 private final int saltLength;
 
-private final boolean usingStaticSalt;
+private boolean usingStaticSalt;
 
 // A 16 byte salt (nonce) is recommended for password hashing
 private static final byte[] staticSalt = "NiFi Static 
Salt".getBytes(StandardCharsets.UTF_8);
 
+// Upper boundary for several cost parameters
+private static final double upperBoundary = Math.pow(2, 32) - 1;
 
 Review comment:
   As `2^32 - 1` is roughly `4.3*10^9`, and while an `int` can hold `2^32` possible values, it is 
_signed_, so `Integer.MAX_VALUE` is only about `2.1*10^9`. A `long` is required for 
all of these fields. 
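
   A quick illustration of why the bound needs a `long` (plain Java, nothing Argon2-specific):

```java
public class BoundaryDemo {
    public static void main(String[] args) {
        final long upperBoundary = (1L << 32) - 1;   // 2^32 - 1 = 4_294_967_295
        System.out.println(Integer.MAX_VALUE);       // 2_147_483_647 -- a signed int tops out below 2^32 - 1
        System.out.println((int) upperBoundary);     // -1 after narrowing, so an int silently misbehaves
        System.out.println(upperBoundary);           // a long holds the bound exactly
    }
}
```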


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] alopresto commented on a change in pull request #4111: NIFI-7119 Implement boundary checking for Argon2 cost parameters

2020-03-04 Thread GitBox
alopresto commented on a change in pull request #4111: NIFI-7119 Implement 
boundary checking for Argon2 cost parameters
URL: https://github.com/apache/nifi/pull/4111#discussion_r387934796
 
 

 ##
 File path: 
nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/crypto/Argon2SecureHasher.java
 ##
 @@ -53,11 +53,14 @@
 private final int iterations;
 private final int saltLength;
 
-private final boolean usingStaticSalt;
+private boolean usingStaticSalt;
 
 // A 16 byte salt (nonce) is recommended for password hashing
 private static final byte[] staticSalt = "NiFi Static 
Salt".getBytes(StandardCharsets.UTF_8);
 
+// Upper boundary for several cost parameters
+private static final double upperBoundary = Math.pow(2, 32) - 1;
 
 Review comment:
   Should be `int` rather than `double`. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] alopresto commented on a change in pull request #4111: NIFI-7119 Implement boundary checking for Argon2 cost parameters

2020-03-04 Thread GitBox
alopresto commented on a change in pull request #4111: NIFI-7119 Implement 
boundary checking for Argon2 cost parameters
URL: https://github.com/apache/nifi/pull/4111#discussion_r387934796
 
 

 ##
 File path: 
nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/crypto/Argon2SecureHasher.java
 ##
 @@ -53,11 +53,14 @@
 private final int iterations;
 private final int saltLength;
 
-private final boolean usingStaticSalt;
+private boolean usingStaticSalt;
 
 // A 16 byte salt (nonce) is recommended for password hashing
 private static final byte[] staticSalt = "NiFi Static 
Salt".getBytes(StandardCharsets.UTF_8);
 
+// Upper boundary for several cost parameters
+private static final double upperBoundary = Math.pow(2, 32) - 1;
 
 Review comment:
   Should be `long` rather than `double`. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-7227) Fix typo in NiFi administrator guide

2020-03-04 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-7227:

Fix Version/s: 1.12.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix typo in NiFi administrator guide
> 
>
> Key: NIFI-7227
> URL: https://issues.apache.org/jira/browse/NIFI-7227
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.11.3
>Reporter: Sandra Pius
>Assignee: Sandra Pius
>Priority: Minor
>  Labels: documentation
> Fix For: 1.12.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix typo in the Global Access Policy table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7227) Fix typo in NiFi administrator guide

2020-03-04 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-7227:

Status: Patch Available  (was: Open)

> Fix typo in NiFi administrator guide
> 
>
> Key: NIFI-7227
> URL: https://issues.apache.org/jira/browse/NIFI-7227
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.11.3
>Reporter: Sandra Pius
>Assignee: Sandra Pius
>Priority: Minor
>  Labels: documentation
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix typo in the Global Access Policy table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7227) Fix typo in NiFi administrator guide

2020-03-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051626#comment-17051626
 ] 

ASF subversion and git services commented on NIFI-7227:
---

Commit 7773681eeaa82f9c4099a3191d1f7f784f91bf7a in nifi's branch 
refs/heads/master from Andy LoPresto
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7773681 ]

NIFI-7227 Fixed typo in Global Access Policy table (#4112)

Co-authored-by: spius <57421336+sp...@users.noreply.github.com>

Signed-off-by: Andy LoPresto 

> Fix typo in NiFi administrator guide
> 
>
> Key: NIFI-7227
> URL: https://issues.apache.org/jira/browse/NIFI-7227
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.11.3
>Reporter: Sandra Pius
>Assignee: Sandra Pius
>Priority: Minor
>  Labels: documentation
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix typo in the Global Access Policy table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alopresto merged pull request #4112: NIFI-7227 Fixed typo in Global Access Policy table

2020-03-04 Thread GitBox
alopresto merged pull request #4112: NIFI-7227 Fixed typo in Global Access 
Policy table
URL: https://github.com/apache/nifi/pull/4112
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] alopresto commented on issue #4112: NIFI-7227 Fixed typo in Global Access Policy table

2020-03-04 Thread GitBox
alopresto commented on issue #4112: NIFI-7227 Fixed typo in Global Access 
Policy table
URL: https://github.com/apache/nifi/pull/4112#issuecomment-594835703
 
 
   Ran `contrib-check` and all tests pass. +1, merging. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] alopresto commented on issue #4112: NIFI-7227 Fixed typo in Global Access Policy table

2020-03-04 Thread GitBox
alopresto commented on issue #4112: NIFI-7227 Fixed typo in Global Access 
Policy table
URL: https://github.com/apache/nifi/pull/4112#issuecomment-594833870
 
 
   Reviewing...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] spius1 opened a new pull request #4112: NIFI-7227 Fixed typo in Global Access Policy table

2020-03-04 Thread GitBox
spius1 opened a new pull request #4112: NIFI-7227 Fixed typo in Global Access 
Policy table
URL: https://github.com/apache/nifi/pull/4112
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Fixing typo in admin guide._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-3303) escapeJson in ReplaceText

2020-03-04 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051616#comment-17051616
 ] 

Otto Fowler commented on NIFI-3303:
---

Maybe we can have a new configuration flag to govern escaping in the final 
stage.
There is an implicit pipeline going on here, but maybe it needs to be made more 
explicit?

> escapeJson in ReplaceText
> -
>
> Key: NIFI-3303
> URL: https://issues.apache.org/jira/browse/NIFI-3303
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: tianzk
>Priority: Major
> Attachments: ReplaceText_Bug.xml, config.png, dataflow.png
>
>
> I have some problems while using escapeJson and unescapeJson in the ReplaceText 
> processor.
> When I give the string: He didn’t say, “Stop”!  to ReplaceText as input, and 
> configure ReplaceText like: attachment config.png
> The output of ReplaceText is the same as the input: He didn’t say, “Stop!” 
> , nothing changed.
> As described in the NiFi documentation, the output should be: He didn’t say, 
> \"Stop!\”. Did I miss something?
> Also there are problems with unescapeJson. If the input is: He didn’t say, 
> \”Sto\\\"p!\”, the return string will be: He didn’t say, ”Sto"p!”.
> My dataflow: (GetFile just reads a file with a string as content.)
> dataflow.png
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-3303) escapeJson in ReplaceText

2020-03-04 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051604#comment-17051604
 ] 

Otto Fowler commented on NIFI-3303:
---

So, the issue here is this:

In RegexReplace, for each match found we:

{code:java}
String replacement = 
replacementValueProperty.evaluateAttributeExpressions(flowFile, 
additionalAttrs, escapeBackRefDecorator).getValue();
 replacement = escapeLiteralBackReferences(replacement, numCapturingGroups);
  
String replacementFinal = normalizeReplacementString(replacement);  

matcher.appendReplacement(sb, replacementFinal);
{code}

So we find the matched text, evaluate the expressions ( with the group vars 
added ) and then escape some literals.

When we do the appendReplacement call, the string is correct.  The issue is 
that appendReplacement still wants to support the $ group references and \ escapes.  
In this case it does, and the escapes wipe out the inner quotes.

Adding a call to Matcher.quoteReplacement before appendReplacement resolves this 
issue.
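
A minimal, self-contained illustration of that pitfall and of the quoteReplacement fix; the strings below are just a demo that mirrors the {"TEST":"A"} example from this ticket:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AppendReplacementDemo {
    public static void main(String[] args) {
        // a replacement value whose quotes have already been JSON-escaped: {\"TEST\":\"A\"}
        final String replacement = "{\"TEST\":\"A\"}".replace("\"", "\\\"");
        final Matcher matcher = Pattern.compile("VALUE").matcher("{\"NO_PARENT\":\"VALUE\"}");

        final StringBuffer raw = new StringBuffer();
        while (matcher.find()) {
            matcher.appendReplacement(raw, replacement);       // \ and $ are re-interpreted here
        }
        matcher.appendTail(raw);
        System.out.println(raw);                               // the inner escapes are consumed

        matcher.reset();
        final StringBuffer quoted = new StringBuffer();
        while (matcher.find()) {
            matcher.appendReplacement(quoted, Matcher.quoteReplacement(replacement));  // literal insertion
        }
        matcher.appendTail(quoted);
        System.out.println(quoted);                            // prints {"NO_PARENT":"{\"TEST\":\"A\"}"}
    }
}
{code}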

HOWEVER.

The regex stuff is very complex, and kind of fragile.  Trying to support so 
many things with overlapping symbols and escaping rules.

This fix actually breaks and regresses the following tests:

{code:bash}
[ERROR] Failures: 
[ERROR]   TestReplaceText.testBackRefFollowedByNumbers:504 expected: but was:
[ERROR]   TestReplaceText.testBackRefWithNoCapturingGroup:520 
expected: but was:
[ERROR]   TestReplaceText.testBackReference:486 expected: 
but was:
[ERROR]   TestReplaceText.testBackReferenceEscapeWithRegexReplaceUsingEL:1565 
expected: but was:
[ERROR]   TestReplaceText.testBackReferenceWithInvalidReferenceIsEscaped:626 
expected: but was:
[ERROR]   TestReplaceText.testBackReferenceWithTooLargeOfIndexIsEscaped:608 
expected: but was:
[ERROR]   TestReplaceText.testConfigurationCornerCase:65 FlowFile content 
differs from input at byte 0 with input having value 72 and FlowFile having 
value 36
[ERROR]   TestReplaceText.testEscapingDollarSign:644 expected: 
but was:
[ERROR]   TestReplaceText.testGetExistingContent:775
[ERROR]   TestReplaceText.testIterativeRegexReplace:79 
expected:<{"NAME":"[Smith","MIDDLE":"nifi","FIRSTNAME":"John]"}> but 
was:<{"NAME":"[$2","MIDDLE":"$2","FIRSTNAME":"$2]"}>
[ERROR]   TestReplaceText.testRegexNoCaptureDefaultReplacement Expected test to 
throw (an instance of java.lang.AssertionError and exception with message a 
string containing "java.lang.IndexOutOfBoundsException: No group 1")
[ERROR]   TestReplaceText.testReplacementWithExpressionLanguageIsEscaped:554 
expected: but was:
[ERROR]   TestReplaceText.testWithEscaped$InReplacement:123 FlowFile content 
differs from input at byte 2 with input having value 36 and FlowFile having 
value 92
[ERROR]   TestReplaceText.testWithUnEscaped$InReplacement:137 FlowFile content 
differs from input at byte 1 with input having value 36 and FlowFile having 
value 92
[INFO] 
[ERROR] Tests run: 1497, Failures: 14, Errors: 0, Skipped: 23
[INFO] 
{code}

I am not sure how we can untangle this.  

[~joewitt] [~mcgilman] ?

Who is the SME on this?

> escapeJson in ReplaceText
> -
>
> Key: NIFI-3303
> URL: https://issues.apache.org/jira/browse/NIFI-3303
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: tianzk
>Priority: Major
> Attachments: ReplaceText_Bug.xml, config.png, dataflow.png
>
>
> I have some problems while using escapeJson and unescapeJson in the ReplaceText 
> processor.
> When I give the string: He didn’t say, “Stop”!  to ReplaceText as input, and 
> configure ReplaceText like: attachment config.png
> The output of ReplaceText is the same as the input: He didn’t say, “Stop!” 
> , nothing changed.
> As described in the NiFi documentation, the output should be: He didn’t say, 
> \"Stop!\”. Did I miss something?
> Also there are problems with unescapeJson. If the input is: He didn’t say, 
> \”Sto\\\"p!\”, the return string will be: He didn’t say, ”Sto"p!”.
> My dataflow: (GetFile just reads a file with a string as content.)
> dataflow.png
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7227) Fix typo in NiFi administrator guide

2020-03-04 Thread Sandra Pius (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandra Pius reassigned NIFI-7227:
-

Assignee: Sandra Pius

> Fix typo in NiFi administrator guide
> 
>
> Key: NIFI-7227
> URL: https://issues.apache.org/jira/browse/NIFI-7227
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.11.3
>Reporter: Sandra Pius
>Assignee: Sandra Pius
>Priority: Minor
>  Labels: documentation
>
> Fix typo in the Global Access Policy table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7227) Fix typo in NiFi administrator guide

2020-03-04 Thread Sandra Pius (Jira)
Sandra Pius created NIFI-7227:
-

 Summary: Fix typo in NiFi administrator guide
 Key: NIFI-7227
 URL: https://issues.apache.org/jira/browse/NIFI-7227
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website
Affects Versions: 1.11.3
Reporter: Sandra Pius


Fix typo in the Global Access Policy table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-3303) escapeJson in ReplaceText

2020-03-04 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051594#comment-17051594
 ] 

Otto Fowler commented on NIFI-3303:
---


{code:bash}
--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Wed Mar 04 15:21:48 EST 2020'
Key: 'lineageStartDate'
Value: 'Wed Mar 04 15:21:48 EST 2020'
Key: 'fileSize'
Value: '32'
FlowFile Attribute Map Content
Key: 'filename'
Value: 'f46ce8cb-00b5-4e25-9291-e729fd210c38'
Key: 'path'
Value: './'
Key: 'uuid'
Value: 'f46ce8cb-00b5-4e25-9291-e729fd210c38'
--
{"NO_PARENT":"{\"TEST\":\"A\"}"}

{code}

This is removing the double escapeJson() call.

The issue is that my fix breaks 14+ tests in the system.

I'll try to write up what is happening and get comments from others, but 
like I said earlier, supporting regex, literals with symbols, and EL at the same 
time is really tough.


> escapeJson in ReplaceText
> -
>
> Key: NIFI-3303
> URL: https://issues.apache.org/jira/browse/NIFI-3303
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: tianzk
>Priority: Major
> Attachments: ReplaceText_Bug.xml, config.png, dataflow.png
>
>
> I have some problems while using escapeJson and unescapeJson in the ReplaceText 
> processor.
> When I give the string: He didn’t say, “Stop”!  to ReplaceText as input, and 
> configure ReplaceText like: attachment config.png
> The output of ReplaceText is the same as the input: He didn’t say, “Stop!” 
> , nothing changed.
> As described in the NiFi documentation, the output should be: He didn’t say, 
> \"Stop!\”. Did I miss something?
> Also there are problems with unescapeJson. If the input is: He didn’t say, 
> \”Sto\\\"p!\”, the return string will be: He didn’t say, ”Sto"p!”.
> My dataflow: (GetFile just reads a file with a string as content.)
> dataflow.png
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alopresto commented on issue #4111: NIFI-7119 Implement boundary checking for Argon2 cost parameters

2020-03-04 Thread GitBox
alopresto commented on issue #4111: NIFI-7119 Implement boundary checking for 
Argon2 cost parameters
URL: https://github.com/apache/nifi/pull/4111#issuecomment-594777885
 
 
   Reviewing...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7141) ValidateRecord does not handle nested Map key type other than string

2020-03-04 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051540#comment-17051540
 ] 

Pierre Villard commented on NIFI-7141:
--

Hi [~harmie], what kind of validation do you want to perform on the payload?

By definition, Avro prevents field names from being purely numerical, and there is not 
much we can do about this (AVRO-153). Unless you agree to change the field 
names, you won't be able to write in Avro. If you can rename the fields/keys, 
then it'd be doable.

If I only use JSON for reader/writer, it works but it's probably not doing the 
validation you're looking for.

> ValidateRecord does not handle nested Map key type other than string
> 
>
> Key: NIFI-7141
> URL: https://issues.apache.org/jira/browse/NIFI-7141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
> Environment: Tested with 1.9.0 and 1.11.1 in Linux environment but 
> not related to operating system.
>Reporter: Harri Miettinen
>Priority: Major
> Attachments: MapKeyInt.json, MapKeyInt.xml
>
>
> Hi
> We have some incoming data where the type is map. The problem is that the key is 
> an int and not a string, which causes the map to have invalid data.
> Can NiFi add support for other data types for map keys as well?
> I have attached a dummy example flow. 
> If there is any good workaround, that would be a good start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mtien-apache opened a new pull request #4111: NIFI-7119 Implement boundary checking for Argon2 cost parameters

2020-03-04 Thread GitBox
mtien-apache opened a new pull request #4111: NIFI-7119 Implement boundary 
checking for Argon2 cost parameters
URL: https://github.com/apache/nifi/pull/4111
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Implemented boundary checking for Argon2SecureHasher cost parameters and 
added unit tests._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid closed pull request #741: MINIFICPP-1139 Implemented.

2020-03-04 Thread GitBox
bakaid closed pull request #741: MINIFICPP-1139 Implemented.
URL: https://github.com/apache/nifi-minifi-cpp/pull/741
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-7215) ScanHbase : Get row key when set to col-qual-and-val in json format

2020-03-04 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7215:
-
Component/s: (was: Core Framework)
 Extensions

> ScanHbase : Get row key when set to col-qual-and-val in json format 
> 
>
> Key: NIFI-7215
> URL: https://issues.apache.org/jira/browse/NIFI-7215
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.3
>Reporter: CHANDAN KUMAR
>Priority: Major
>  Labels: triage
>
> ScanHBase processor does not return the row key, when configured to use 
> col-qual-and-val JSON Format.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7215) ScanHbase : Get row key when set to col-qual-and-val in json format

2020-03-04 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7215:
-
Affects Version/s: (was: 1.11.3)

> ScanHbase : Get row key when set to col-qual-and-val in json format 
> 
>
> Key: NIFI-7215
> URL: https://issues.apache.org/jira/browse/NIFI-7215
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: CHANDAN KUMAR
>Priority: Major
>  Labels: triage
>
> ScanHBase processor does not return the row key, when configured to use 
> col-qual-and-val JSON Format.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] turcsanyip opened a new pull request #4110: NIFI-7226: Add Connection Factory configuration properties to Publish…

2020-03-04 Thread GitBox
turcsanyip opened a new pull request #4110: NIFI-7226: Add Connection Factory 
configuration properties to Publish…
URL: https://github.com/apache/nifi/pull/4110
 
 
   …JMS and ConsumeJMS processors
   
   Some JMS client libraries may not work with the existing controller services 
due to incompatible
   classloader handling between the 3rd party library and NiFi.
   By configuring the Connection Factory on the processor itself, only the 
processor's and its
   children's classloaders will be used, which eliminates the mentioned 
incompatibility.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Matt Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051490#comment-17051490
 ] 

Matt Rodriguez commented on NIFI-7222:
--

Same issue as Harald (reported under NIFI-7216), I'd bet this bug is related.

> FetchSFTP appears to not advise the remote system it is done with a given 
> resource resulting in too many open files
> ---
>
> Key: NIFI-7222
> URL: https://issues.apache.org/jira/browse/NIFI-7222
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>
> Hi guys,
>  
> We have an issue with the FetchSFTP processor and the max open file 
> descriptors. In short, it seems that the FetchSFTP keeps the file open 
> “forever” on our Synology NAS, so we always reach the default max open 
> files limit of 1024 on our Synology NAS if we try to fetch 500’000 small 1MB 
> files (so in fact it’s not possible to read the files as everything is 
> blocked after 1024 files).
>  
> We found no option to raise the limit of max open files on the Synology NAS 
> (but that’s not NiFi’s fault 😉). We also have other Linux machines with 
> CentOS, but the behavior there isn’t always exactly the same. Sometimes the 
> file descriptors get closed, but sometimes they are not.
>  
> Synology has no lsof command, but this is how I’ve checked it:
> user@nas-01:~$ sudo ls -l /proc//fd | wc -l
> 1024
>  
> Any comments how we can troubleshoot the issue?
>  
> Cheers Josef
> Oh sorry, missed one of the most important parts: we are using an 8-node 
> cluster with NiFi 1.11.3 – so perfectly up to date.
>  
> Cheers Josef
> Hi Joe
>  
> Ok, to our setup, we just bought a new powerful Synology NAS to use it as 
> SFTP server mainly for NiFi to replace our current SFTP linux machine. So the 
> NAS is empty and just configured for this single use case (read/write SFTP 
> from NiFi). Nothing else is running there at the moment. Important limit is 
> per SSH/user session ulimit -a 1024 open files max.:
>  
> root@nas-01:~# ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 62025
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 62025
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>  
>  
> On NiFi side we are using an 8 node cluster, but it doesn’t matter whether 
> I’m using the whole cluster or just one single (primary) node. It’s clearly 
> visible that it’s related to the number of FetchSFTP processors running. So 
> if I’m distributing the load to 8 nodes I’m seeing 8 SFTP sessions on the NAS 
> and we can fetch 8x1024 files. I’m also seeing on the NAS the file descriptors of 
> each file (per FetchSFTP processor = PID) which has been fetched 
> by NiFi. In my understanding these files should be fetched and the file 
> descriptor should be closed after the transfer, but this doesn’t seem to be 
> the case most of the time.
>  
> As soon as I’m stopping the “FetchSFTP” processor, the SFTP session seems to 
> be closed and all FDs are gone. So after stop/start I can fetch again 1024 
> files.
>  
> So I tried to troubleshoot a bit further and here is what I’ve done in NiFi 
> and on the NAS:
>  
> [screenshot omitted]
>  
> So I’ve done a ListSFTP and got 2880 flowfiles, they will be loadbalanced to 
> one single node (to simplify to test and only get 1 SFTP session on the NAS). 
> In the ControlRate I’m transferring every 10 seconds 10 flowfiles to the 
> FetchSFTP, which correlates directly with the open file descriptors on my NAS, 
> as you can see below. Sometimes, and I don’t know when or why, the SFTP 
> session will be closed and everything starts from scratch (not happened here) 
> without any notice on NiFi side.  As you see, the FDs are growing with +10 
> every 10sec and if I’m checking the path/filename of the open FDs I see that 
> this are the one which I’ve fetched.
>  
> root@nas-01:~# ps aux | grep sftp
> root  1740  0.5  0.0 240848  8584 ?Ss   15:01   0:00 sshd: 
> ldr@internal-sftp
> root  1753  0.0  0.0  23144  2360 pts/2S+   15:01   0:00 grep 
> --color=auto sftp
> root 1552

[jira] [Created] (NIFI-7226) Add Connection Factory configuration properties to PublishJMS and ConsumeJMS processors

2020-03-04 Thread Peter Turcsanyi (Jira)
Peter Turcsanyi created NIFI-7226:
-

 Summary: Add Connection Factory configuration properties to 
PublishJMS and ConsumeJMS processors
 Key: NIFI-7226
 URL: https://issues.apache.org/jira/browse/NIFI-7226
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Peter Turcsanyi
Assignee: Peter Turcsanyi


Connection factories can be configured via JndiJmsConnectionFactoryProvider or 
JMSConnectionFactoryProvider controller services for PublishJMS / ConsumeJMS.

However, some JMS client libraries may not work with the controller services. 
For example WebLogic JMS client throws the following exception when receiving a 
certain type of messages:
{code:java}
java.lang.ClassCastException: 
weblogic.diagnostics.context.DiagnosticContextImpl cannot be cast to 
weblogic.workarea.WorkContext
{code}
This is due to incompatible Java ClassLoader handling between the WebLogic JMS 
client library and NiFi. NiFi applies classloader isolation between its 
components. Apparently there is also classloader manipulation within the 
WebLogic client. These incompatible classloader switches lead to the situation 
that some JMS client classes are loaded by the controller service's 
classloader, while others by the WebLogic's custom classloader (which is a 
child of the processor's classloader, so it is on a different branch in the 
classloader hierarchy than the controller service).

The issue can be eliminated by using the processor's classloader to load the 
JMS client classes instead of the controller service's. This can be achieved 
via configuring the connection factory on the processor itself.
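
A hedged, simplified illustration of that classloader mismatch (plain Java, no WebLogic or NiFi classes; it assumes the demo class sits on an ordinary classpath directory or jar):

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

public class SiblingLoaderDemo {
    public static void main(String[] args) throws Exception {
        final URL here = SiblingLoaderDemo.class.getProtectionDomain().getCodeSource().getLocation();
        // parent == null: neither loader delegates to the application classpath,
        // so each one defines its own copy of the class, like two sibling branches
        try (URLClassLoader serviceBranch = new URLClassLoader(new URL[]{here}, null);
             URLClassLoader processorBranch = new URLClassLoader(new URL[]{here}, null)) {
            final Class<?> a = serviceBranch.loadClass("SiblingLoaderDemo");
            final Class<?> b = processorBranch.loadClass("SiblingLoaderDemo");
            System.out.println(a == b);                  // false -- same bytes, different runtime types
            final Object instance = a.getDeclaredConstructor().newInstance();
            System.out.println(b.isInstance(instance));  // false -- a cast across branches throws ClassCastException
        }
    }
}
{code}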



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7219) Authentication fails if nifi.security.keyPasswd is empty

2020-03-04 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto resolved NIFI-7219.
-
Resolution: Duplicate

> Authentication fails if nifi.security.keyPasswd is empty
> 
>
> Key: NIFI-7219
> URL: https://issues.apache.org/jira/browse/NIFI-7219
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.11.3
>Reporter: Gergely Novák
>Assignee: Nathan Gough
>Priority: Major
>
> nifi.properties:
> {code}
> nifi.security.keyPasswd=
> {code}
> The 
> [documentation|https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security_configuration]
>  says that "if not set, the value of nifi.security.keystorePasswd will be 
> used." This is true until 1.11.2, but in 1.11.3 with the above setup the 
> authentication fails.
> When using the API, every (replicated) call, e.g. {{/flow/current-user}}, 
> fails with
> {code}
> Unknown user with identity 'anonymous'. Contact the system administrator.
> {code}
> When using the UI, it always redirects to {{/nifi/login}} and then says "You 
> are already logged in."
> Both of the following solve the issue:
> * downgrading to 1.11.2
> * removing {{nifi.security.keyPasswd}} from {{nifi.properties}} completely
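
A minimal sketch of the documented fallback; the property names come from the ticket, while the class and method here are illustrative:

{code:java}
import java.util.Properties;

final class KeyPasswordFallback {
    static String effectiveKeyPassword(final Properties nifiProperties) {
        final String keystorePasswd = nifiProperties.getProperty("nifi.security.keystorePasswd", "");
        final String keyPasswd = nifiProperties.getProperty("nifi.security.keyPasswd", "");
        // an empty or missing key password should fall back to the keystore password
        return keyPasswd.isEmpty() ? keystorePasswd : keyPasswd;
    }
}
{code}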



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7219) Authentication fails if nifi.security.keyPasswd is empty

2020-03-04 Thread Andy LoPresto (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051468#comment-17051468
 ] 

Andy LoPresto commented on NIFI-7219:
-

Gergely, thank you for reporting this. I am closing it as a duplicate of 
NIFI-7223. It is actively being worked and I believe the last set of testing is 
being run today. Please continue to track status on that ticket. 

> Authentication fails if nifi.security.keyPasswd is empty
> 
>
> Key: NIFI-7219
> URL: https://issues.apache.org/jira/browse/NIFI-7219
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.11.3
>Reporter: Gergely Novák
>Assignee: Nathan Gough
>Priority: Major
>
> nifi.properties:
> {code}
> nifi.security.keyPasswd=
> {code}
> The 
> [documentation|https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security_configuration]
>  says that "if not set, the value of nifi.security.keystorePasswd will be 
> used." This is true until 1.11.2, but in 1.11.3 with the above setup the 
> authentication fails.
> When using the API, every (replicated) call, e.g. {{/flow/current-user}}, 
> fails with
> {code}
> Unknown user with identity 'anonymous'. Contact the system administrator.
> {code}
> When using the UI, it always redirects to {{/nifi/login}} and then says "You 
> are already logged in."
> Both of the following solve the issue:
> * downgrading to 1.11.2
> * removing {{nifi.security.keyPasswd}} from {{nifi.properties}} completely



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7225) FetchSFTP processor: "routing to not.found" error given when Private Key Path property is invalid

2020-03-04 Thread Nissim Shiman (Jira)
Nissim Shiman created NIFI-7225:
---

 Summary: FetchSFTP processor: "routing to not.found" error given 
when Private Key Path property is invalid
 Key: NIFI-7225
 URL: https://issues.apache.org/jira/browse/NIFI-7225
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.11.3, 1.10.0
Reporter: Nissim Shiman


In Apache NiFi 1.8.0, for the FetchSFTP processor, if the "Private Key Path" 
property was a directory, this would be flagged immediately as a configuration 
issue.

In Apache NiFi 1.10.0 (and later) a directory is an acceptable value here, but then at 
runtime the "not.found" relationship is hit, even though the remote file 
exists.
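
A hedged sketch of the up-front check the ticket asks for (framework-free; the class and method names are illustrative):

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

final class PrivateKeyPathCheck {
    // reject a directory (or unreadable path) as a Private Key Path instead of
    // failing later at runtime with the "not.found" relationship
    static boolean isUsablePrivateKey(final String configuredPath) {
        final Path path = Paths.get(configuredPath);
        return Files.isRegularFile(path) && Files.isReadable(path);
    }
}
{code}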



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7219) Authentication fails if nifi.security.keyPasswd is empty

2020-03-04 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough reassigned NIFI-7219:
--

Assignee: Nathan Gough

> Authentication fails if nifi.security.keyPasswd is empty
> 
>
> Key: NIFI-7219
> URL: https://issues.apache.org/jira/browse/NIFI-7219
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.11.3
>Reporter: Gergely Novák
>Assignee: Nathan Gough
>Priority: Major
>
> nifi.properties:
> {code}
> nifi.security.keyPasswd=
> {code}
> The 
> [documentation|https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security_configuration]
>  says that "if not set, the value of nifi.security.keystorePasswd will be 
> used." This is true until 1.11.2, but in 1.11.3 with the above setup the 
> authentication fails.
> When using the API, every (replicated) call, e.g. {{/flow/current-user}}, 
> fails with
> {code}
> Unknown user with identity 'anonymous'. Contact the system administrator.
> {code}
> When using the UI, it always redirects to {{/nifi/login}} and then says "You 
> are already logged in."
> Both of the following solve the issue:
> * downgrading to 1.11.2
> * removing {{nifi.security.keyPasswd}} from {{nifi.properties}} completely



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on issue #731: MINIFICPP-1096 fix BackTrace, OOB indexing, tests, appveyor reporting

2020-03-04 Thread GitBox
szaszm commented on issue #731: MINIFICPP-1096 fix BackTrace, OOB indexing, 
tests, appveyor reporting
URL: https://github.com/apache/nifi-minifi-cpp/pull/731#issuecomment-594662529
 
 
   WIP: debugging C2VerifyHeartbeatAndStop on windows (flicker)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Harald Dobbernack (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051423#comment-17051423
 ] 

Harald Dobbernack commented on NIFI-7222:
-

I guess we are experiencing the same thing: we want the fetched files deleted 
from the SFTP server - but the processor does not delete them. Only once we stop the 
processor does the file get deleted...   (NiFi 1.11.1, Debian 10.2)

> FetchSFTP appears to not advise the remote system it is done with a given 
> resource resulting in too many open files
> ---
>
> Key: NIFI-7222
> URL: https://issues.apache.org/jira/browse/NIFI-7222
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>
> Hi guys,
>  
> We have an issue with the FetchSFTP processor and the max open file 
> descriptors. In short, it seems that the FetchSFTP keeps the file open 
> “forever” on our Synology NAS, so we always reach the default max open 
> files limit of 1024 on our Synology NAS if we try to fetch 500’000 small 1MB 
> files (so in fact it’s not possible to read the files as everything is 
> blocked after 1024 files).
>  
> We found no option to raise the limit of max open files on the Synology NAS 
> (but that’s not NiFi’s fault 😉). We also have other Linux machines with 
> CentOS, but the behavior there isn’t always exactly the same. Sometimes the 
> file descriptors get closed, but sometimes they are not.
>  
> Synology has no lsof command, but this is how I’ve checked it:
> user@nas-01:~$ sudo ls -l /proc//fd | wc -l
> 1024
>  
> Any comments how we can troubleshoot the issue?
>  
> Cheers Josef
> Oh sorry, missed one of the most important parts: we are using an 8-node 
> cluster with NiFi 1.11.3 – so perfectly up to date.
>  
> Cheers Josef
> Hi Joe
>  
> Ok, to our setup, we just bought a new powerful Synology NAS to use it as 
> SFTP server mainly for NiFi to replace our current SFTP linux machine. So the 
> NAS is empty and just configured for this single use case (read/write SFTP 
> from NiFi). Nothing else is running there at the moment. Important limit is 
> per SSH/user session ulimit -a 1024 open files max.:
>  
> root@nas-01:~# ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 62025
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 62025
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>  
>  
> On NiFi side we are using an 8 node cluster, but it doesn’t matter whether 
> I’m using the whole cluster or just one single (primary) node. It’s clearly 
> visible that it’s related to the number of FetchSFTP processors running. So 
> if I’m distributing the load to 8 nodes I’m seeing 8 SFTP sessions on the NAS 
> and we can fetch 8x1024 files. I’m also seeing the file descriptors on the 
> NAS for each file (per FetchSFTP processor = PID) that has been fetched 
> by NiFi. In my understanding these files should be fetched and the file 
> descriptors should be closed after the transfer, but this doesn’t seem to be 
> the case most of the time.
>  
> As soon as I’m stopping the “FetchSFTP” processor, the SFTP session seems to 
> be closed and all FDs are gone. So after stop/start I can fetch again 1024 
> files.
>  
> So I tried to troubleshoot a bit further and here is what I’ve done in NiFi 
> and on the NAS:
>  
> [inline screenshot]
>  
> So I’ve done a ListSFTP and got 2880 flowfiles; they will be load-balanced to 
> one single node (to simplify the test and only get 1 SFTP session on the NAS). 
> In the ControlRate I’m transferring 10 flowfiles every 10 seconds to the 
> FetchSFTP, which correlates directly with the open file descriptors on my NAS, 
> as you can see below. Sometimes, and I don’t know when or why, the SFTP 
> session will be closed and everything starts from scratch (not happened here) 
> without any notice on the NiFi side. As you see, the FDs are growing by +10 
> every 10sec, and if I check the path/filename of the open FDs I see that 
> these are the ones which I’ve fetched.
>  
> root@nas-01:~# ps aux | grep sftp
> root  1740  0.5  0.0 240848  8584 ?Ss  

[jira] [Commented] (NIFI-3303) escapeJson in ReplaceText

2020-03-04 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051413#comment-17051413
 ] 

Otto Fowler commented on NIFI-3303:
---

Ok, I think I know what is going on:

We evaluate the JSON escapes correctly in the evaluation script, but we then 
call appendReplacement to put the result into the end buffer.
Matcher.appendReplacement still treats $ and \ as special, so it is messing 
with the output.

I have a fix I'll try
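
For anyone following along, a minimal standalone sketch of the kind of fix 
described above (illustrative only, not the actual NiFi patch): wrapping the 
evaluated replacement in Matcher.quoteReplacement so that appendReplacement no 
longer re-interprets '\' and '$'.

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuoteReplacementSketch {
    public static void main(String[] args) {
        // A replacement value that already contains JSON-style escapes
        String evaluatedReplacement = "He didn't say, \\\"Stop!\\\"";

        Matcher matcher = Pattern.compile("value").matcher("value");
        StringBuffer out = new StringBuffer();
        while (matcher.find()) {
            // quoteReplacement keeps '\' and '$' literal inside appendReplacement
            matcher.appendReplacement(out, Matcher.quoteReplacement(evaluatedReplacement));
        }
        matcher.appendTail(out);
        System.out.println(out); // the backslashes survive intact
    }
}
{code}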

> escapeJson in ReplaceText
> -
>
> Key: NIFI-3303
> URL: https://issues.apache.org/jira/browse/NIFI-3303
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: tianzk
>Priority: Major
> Attachments: ReplaceText_Bug.xml, config.png, dataflow.png
>
>
> I have some problems while using escapeJson and unescapeJson in the ReplaceText 
> processor.
> When I give the string: He didn’t say, “Stop”!  to ReplaceText as input, and 
> configure ReplaceText as in the attachment config.png,
> the output of ReplaceText is the same as the input: He didn’t say, “Stop!”, 
> nothing changed.
> As described in the NiFi documentation the output should be: He didn’t say, 
> \"Stop!\”. Did I miss something?
> There are also problems with unescapeJson. If the input is: He didn’t say, 
> \”Sto\\\"p!\”, the return string will be: He didn’t say, ”Sto"p!”.
> My dataflow: (GetFile just reads a file with a string as content.)
> dataflow.png
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7224) Unable to import a "Download flow" JSON file into Registry

2020-03-04 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim updated NIFI-7224:

Description: 
Selecting "Download flow" for a process group which generated the file:

{{simple_download_flow.json}}

{{Tried to import this into Registry:}}
./cli.sh demo quick-import -i 
/Users/andrew.lim/Downloads/simple_download_flow.json

But got this error:

{{ERROR: Error executing command 'quick-import' : null}}

Added -verbose and see this stack trace:
org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
'quick-import' : null
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
at 
org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)
Caused by: java.lang.NullPointerException
at 
org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80)
... 5 more

  was:
Selecting "Download flow" for a process group which generated the file:

{{simple_download_flow.json}}

{{Tried to import this into Registry:}}
{{ ./cli.sh demo quick-import -i 
/Users/andrew.lim/Downloads/simple_download_flow.json}}

But got this error:

{{ERROR: Error executing command 'quick-import' : null}}

Added -verbose and see this stack trace:


{{ org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
'quick-import' : null}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)}}
{{ at org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)}}
{{ at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)}}
{{ Caused by: java.lang.NullPointerException}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80)}}
{{ ... 5 more}}


> Unable to import a "Download flow" JSON file into Registry
> --
>
> Key: NIFI-7224
> URL: https://issues.apache.org/jira/browse/NIFI-7224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andrew M. Lim
>Priority: Major
>
> Selecting "Download flow" for a process group which generated the file:
> {{simple_download_flow.json}}
> {{Tried to import this into Registry:}}
> ./cli.sh demo quick-import -i 
> /Users/andrew.lim/Downloads/simple_download_flow.json
> But got this error:
> {{ERROR: Error executing command 'quick-import' : null}}
> Added -verbose and see this stack trace:
> org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
> 'quick-import' : null
>   at 
> org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)
>   at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
>   at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
>   at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
>   at 
> org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
>   

[jira] [Updated] (NIFI-7224) Unable to import a "Download flow" JSON file into Registry

2020-03-04 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim updated NIFI-7224:

Description: 
Selecting "Download flow" for a process group which generated the file:

{{simple_download_flow.json}}

{{Tried to import this into Registry:}}
{{ ./cli.sh demo quick-import -i 
/Users/andrew.lim/Downloads/simple_download_flow.json}}

But got this error:

{{ERROR: Error executing command 'quick-import' : null}}

Added -verbose and see this stack trace:


{{ org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
'quick-import' : null}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)}}
{{ at org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)}}
{{ at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)}}
{{ Caused by: java.lang.NullPointerException}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48)}}
{{ at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80)}}
{{ ... 5 more}}

  was:
Selecting "Download flow" for a process group which generated the file:

simple_download_flow.json

Tried to import this into Registry and got the following:
./cli.sh demo quick-import -i 
/Users/andrew.lim/Downloads/simple_download_flow.jsonERROR: Error executing 
command 'quick-import' : null
Added -verbose and see this stack trace:
org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
'quick-import' : null
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
at 
org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)
Caused by: java.lang.NullPointerException
at 
org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80)
... 5 more


> Unable to import a "Download flow" JSON file into Registry
> --
>
> Key: NIFI-7224
> URL: https://issues.apache.org/jira/browse/NIFI-7224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andrew M. Lim
>Priority: Major
>
> Selecting "Download flow" for a process group which generated the file:
> {{simple_download_flow.json}}
> {{Tried to import this into Registry:}}
> {{ ./cli.sh demo quick-import -i 
> /Users/andrew.lim/Downloads/simple_download_flow.json}}
> But got this error:
> {{ERROR: Error executing command 'quick-import' : null}}
> Added -verbose and see this stack trace:
> {{ org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
> 'quick-import' : null}}
> {{ at 
> org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)}}
> {{ at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)}}
> {{ at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)}}
> {{ at 
> org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)}}
> {{ at org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)}}
> {{ at org.apa

[jira] [Commented] (NIFI-7208) PutSQL doesn't handle nanoseconds

2020-03-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051406#comment-17051406
 ] 

ASF subversion and git services commented on NIFI-7208:
---

Commit 74b1b2fc596f43389b9a7629e4c8544e9e008997 in nifi's branch 
refs/heads/master from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74b1b2f ]

NIFI-7208: Fixed PutSQL/JdbcCommon handling of timestamps (nanoseconds, e.g.)


> PutSQL doesn't handle nanoseconds
> -
>
> Key: NIFI-7208
> URL: https://issues.apache.org/jira/browse/NIFI-7208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> According to the documentation PutSQL should be able to manage nanoseconds:
> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSQL/
> ".]sql.args.N.format [...] as specified according to 
> java.time.format.DateTimeFormatter"
> DateTimeFormatter should be able to manage nanoseconds.
> The issue seems to be happening in JdbcCommon.java
> Line 840-843:
> final DateTimeFormatter dtFormatter = getDateTimeFormatter(valueFormat);
> TemporalAccessor accessor = dtFormatter.parse(parameterValue);
> java.util.Date parsedDate = java.util.Date.from(Instant.from(accessor));
> lTimestamp = parsedDate.getTime();
> It seems to be truncated on line 842
> java.util.Date parsedDate = java.util.Date.from(Instant.from(accessor));
> as java.util.Date doesn't handle nanoseconds. A Java time construct that can 
> handle nanoseconds should be used instead of Date.
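
A hedged illustration of the fix direction described above (not the committed 
patch itself): parse with DateTimeFormatter and build a java.sql.Timestamp from 
the resulting Instant, which keeps the nanosecond component that java.util.Date 
drops. The parameter value and format are hypothetical examples.

{code}
import java.sql.Timestamp;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class NanosTimestampSketch {
    public static void main(String[] args) {
        // Hypothetical sql.args.N.value / sql.args.N.format pair
        String parameterValue = "2020-03-04 12:34:56.123456789";
        DateTimeFormatter dtFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSSSS");

        LocalDateTime ldt = LocalDateTime.parse(parameterValue, dtFormatter);
        Instant instant = ldt.atZone(ZoneId.systemDefault()).toInstant();

        // Timestamp.from preserves nanoseconds; java.util.Date.from would truncate to millis
        Timestamp ts = Timestamp.from(instant);
        System.out.println(ts.getNanos()); // 123456789
    }
}
{code}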



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7208) PutSQL doesn't handle nanoseconds

2020-03-04 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-7208:
-
Fix Version/s: 1.12.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutSQL doesn't handle nanoseconds
> -
>
> Key: NIFI-7208
> URL: https://issues.apache.org/jira/browse/NIFI-7208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> According to the documentation PutSQL should be able to manage nanoseconds:
> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSQL/
> ".]sql.args.N.format [...] as specified according to 
> java.time.format.DateTimeFormatter"
> DateTimeFormatter should be able to manage nanoseconds.
> The issue seems to be happening in JdbcCommon.java
> Line 840-843:
> final DateTimeFormatter dtFormatter = getDateTimeFormatter(valueFormat);
> TemporalAccessor accessor = dtFormatter.parse(parameterValue);
> java.util.Date parsedDate = java.util.Date.from(Instant.from(accessor));
> lTimestamp = parsedDate.getTime();
> It seems to be truncated on line 842
> java.util.Date parsedDate = java.util.Date.from(Instant.from(accessor));
> as java.util.Date doesn't handle nanoseconds. A Java time construct that can 
> handle nanoseconds should be used instead of Date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 merged pull request #4094: NIFI-7208: Fixed PutSQL/JdbcCommon handling of timestamps (nanoseconds, e.g.)

2020-03-04 Thread GitBox
markap14 merged pull request #4094: NIFI-7208: Fixed PutSQL/JdbcCommon handling 
of timestamps (nanoseconds, e.g.)
URL: https://github.com/apache/nifi/pull/4094
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on issue #4094: NIFI-7208: Fixed PutSQL/JdbcCommon handling of timestamps (nanoseconds, e.g.)

2020-03-04 Thread GitBox
markap14 commented on issue #4094: NIFI-7208: Fixed PutSQL/JdbcCommon handling 
of timestamps (nanoseconds, e.g.)
URL: https://github.com/apache/nifi/pull/4094#issuecomment-594654057
 
 
   Thanks @mattyb149 that should do the trick! :) +1 merged to master


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-7224) Unable to import a "Download flow" JSON file into Registry

2020-03-04 Thread Andrew M. Lim (Jira)
Andrew M. Lim created NIFI-7224:
---

 Summary: Unable to import a "Download flow" JSON file into Registry
 Key: NIFI-7224
 URL: https://issues.apache.org/jira/browse/NIFI-7224
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Andrew M. Lim


Selecting "Download flow" for a process group which generated the file:

simple_download_flow.json

Tried to import this into Registry and got the following:
./cli.sh demo quick-import -i 
/Users/andrew.lim/Downloads/simple_download_flow.jsonERROR: Error executing 
command 'quick-import' : null
Added -verbose and see this stack trace:
org.apache.nifi.toolkit.cli.api.CommandException: Error executing command 
'quick-import' : null
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
at 
org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
at 
org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)
Caused by: java.lang.NullPointerException
at 
org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48)
at 
org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80)
... 5 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-registry] jrittenh commented on issue #264: Bugfix: Add prop_replace command to update nifi.registry.security.keyPasswd

2020-03-04 Thread GitBox
jrittenh commented on issue #264: Bugfix: Add prop_replace command to update 
nifi.registry.security.keyPasswd
URL: https://github.com/apache/nifi-registry/pull/264#issuecomment-594647546
 
 
   Fixes NIFIREG-367


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFIREG-367) Registry Docker secure.sh doesn't update the key password property

2020-03-04 Thread Justin Rittenhouse (Jira)
Justin Rittenhouse created NIFIREG-367:
--

 Summary: Registry Docker secure.sh doesn't update the key password 
property
 Key: NIFIREG-367
 URL: https://issues.apache.org/jira/browse/NIFIREG-367
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Justin Rittenhouse


In nifi-registry.properties, there is a line for the key password 
(nifi.registry.security.keyPasswd).  secure.sh does not update this property.

 

[https://github.com/apache/nifi-registry/pull/264]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-registry] jrittenh opened a new pull request #264: Add prop_replace command to update nifi.registry.security.keyPasswd

2020-03-04 Thread GitBox
jrittenh opened a new pull request #264: Add prop_replace command to update 
nifi.registry.security.keyPasswd
URL: https://github.com/apache/nifi-registry/pull/264
 
 
   Add prop_replace command to update nifi.registry.security.keyPasswd using 
either the KEY_PASSWORD or KEYSTORE_PASSWORD environment variable


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-7025) Add kerberos password property to NiFi Hive components

2020-03-04 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7025:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add kerberos password property to NiFi Hive components
> --
>
> Key: NIFI-7025
> URL: https://issues.apache.org/jira/browse/NIFI-7025
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized services from NiFi Hive components, a password field 
> should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change:
>  * Hive3ConnectionPool
>  * Hive_1_1ConnectionPool
>  * HiveConnectionPool
>  * PutHive3Streaming
>  * PutHiveStreaming



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7025) Add kerberos password property to NiFi Hive components

2020-03-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051379#comment-17051379
 ] 

ASF subversion and git services commented on NIFI-7025:
---

Commit 4b6de8d164a2fe52d03fe06e751e2ece4ce7c680 in nifi's branch 
refs/heads/master from jstorck
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4b6de8d ]

NIFI-7025: Wrap Hive 3 calls with UGI.doAs
Updated PutHive3Streaming to wrap calls to Hive in UGI.doAs methods
Fixed misleading logging message after the principal has been authenticated 
with the KDC
When connecting to unsecured Hive 3, a UGI with "simple" auth will be used

Signed-off-by: Matthew Burgess 

This closes #4108
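
For readers unfamiliar with the UGI.doAs pattern mentioned in the commit 
message, a generic hedged sketch (the class and the call inside the action are 
illustrative, not the NiFi/Hive code):

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiDoAsSketch {
    // Runs the supplied action with the Kerberos identity held by 'ugi'
    public static String callAsUser(UserGroupInformation ugi) throws Exception {
        return ugi.doAs((PrivilegedExceptionAction<String>) () -> {
            // placeholder for the real call, e.g. opening a Hive connection
            return "result";
        });
    }
}
{code}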


> Add kerberos password property to NiFi Hive components
> --
>
> Key: NIFI-7025
> URL: https://issues.apache.org/jira/browse/NIFI-7025
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized services from NiFi Hive components, a password field 
> should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change:
>  * Hive3ConnectionPool
>  * Hive_1_1ConnectionPool
>  * HiveConnectionPool
>  * PutHive3Streaming
>  * PutHiveStreaming



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4108: NIFI-7025: Wrap Hive 3 calls with UGI.doAs

2020-03-04 Thread GitBox
asfgit closed pull request #4108: NIFI-7025: Wrap Hive 3 calls with UGI.doAs
URL: https://github.com/apache/nifi/pull/4108
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-7223) NiFi fails to start correctly when keystorePasswd is filled and keyPasswd is blank

2020-03-04 Thread Nathan Gough (Jira)
Nathan Gough created NIFI-7223:
--

 Summary: NiFi fails to start correctly when keystorePasswd is 
filled and keyPasswd is blank
 Key: NIFI-7223
 URL: https://issues.apache.org/jira/browse/NIFI-7223
 Project: Apache NiFi
  Issue Type: Task
Reporter: Nathan Gough
Assignee: Nathan Gough


A secure NiFi will start up but will throw exceptions on attempting to access 
the UI. This is due to a change in -NIFI-6927-

For a keystore and key with matching passwords, when the keystorePasswd 
property in nifi.properties is set, and the keyPasswd is not, the 
OkHttpReplicationClient would receive "" for the keyPasswd instead of null, 
causing the OkHttpReplicationClient and NiFi to still initialize but not run 
correctly. Historical usage suggests the OkHttpReplicationClient should use the 
keystorePasswd value.
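
A minimal sketch of the documented fallback behaviour (a hypothetical helper, 
not the actual NiFiProperties/OkHttpReplicationClient code): a blank key 
password should fall back to the keystore password rather than being passed 
along as an empty string.

{code}
public class KeyPasswordFallbackSketch {
    // Hypothetical helper mirroring "if not set, the value of
    // nifi.security.keystorePasswd will be used"
    static String resolveKeyPassword(String keyPasswd, String keystorePasswd) {
        if (keyPasswd == null || keyPasswd.trim().isEmpty()) {
            return keystorePasswd; // fall back instead of using ""
        }
        return keyPasswd;
    }

    public static void main(String[] args) {
        System.out.println(resolveKeyPassword("", "keystore-secret"));            // keystore-secret
        System.out.println(resolveKeyPassword("key-secret", "keystore-secret"));  // key-secret
    }
}
{code}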



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7222) FetchSFTP appears to not advise the remote system it is done with a given resource resulting in too many open files

2020-03-04 Thread Joe Witt (Jira)
Joe Witt created NIFI-7222:
--

 Summary: FetchSFTP appears to not advise the remote system it is 
done with a given resource resulting in too many open files
 Key: NIFI-7222
 URL: https://issues.apache.org/jira/browse/NIFI-7222
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Joe Witt
Assignee: Joe Witt


Hi guys,

 

We have an issue with the FetchSFTP processor and the max open file 
descriptors. In short, it seems that FetchSFTP keeps the files open 
“forever” on our Synology NAS, so we always reach the default max open 
files limit of 1024 on our Synology NAS if we try to fetch 500’000 small 1MB 
files (so in fact it’s not possible to read the files, as everything is blocked 
after 1024 files).

 

We found no option to raise the limit of max open files on the Synology NAS (but 
that’s not NiFi’s fault 😉). We also have other Linux machines with CentOS, but 
the behavior there isn’t exactly always the same. Sometimes the file 
descriptors get closed, but sometimes they are not.

 

Synology has no lsof command, but this is how I’ve checked it:

user@nas-01:~$ sudo ls -l /proc//fd | wc -l

1024

 

Any comments how we can troubleshoot the issue?

 

Cheers Josef

Oh sorry, I missed one of the most important parts: we are using an 8-node 
cluster with NiFi 1.11.3 – so perfectly up to date.

 

Cheers Josef

Hi Joe

 

Ok, to our setup, we just bought a new powerful Synology NAS to use it as SFTP 
server mainly for NiFi to replace our current SFTP linux machine. So the NAS is 
empty and just configured for this single use case (read/write SFTP from NiFi). 
Nothing else is running there at the moment. Important limit is per SSH/user 
session ulimit -a 1024 open files max.:

 

root@nas-01:~# ulimit -a

core file size  (blocks, -c) unlimited

data seg size   (kbytes, -d) unlimited

scheduling priority (-e) 0

file size   (blocks, -f) unlimited

pending signals (-i) 62025

max locked memory   (kbytes, -l) 64

max memory size (kbytes, -m) unlimited

open files  (-n) 1024

pipe size(512 bytes, -p) 8

POSIX message queues (bytes, -q) 819200

real-time priority  (-r) 0

stack size  (kbytes, -s) 8192

cpu time   (seconds, -t) unlimited

max user processes  (-u) 62025

virtual memory  (kbytes, -v) unlimited

file locks  (-x) unlimited

 

 

On NiFi side we are using an 8 node cluster, but it doesn’t matter whether I’m 
using the whole cluster or just one single (primary) node. It’s clearly visible 
that it’s related to the number of FetchSFTP processors running. So if I’m 
distributing the load to 8 nodes I’m seeing 8 SFTP sessions on the NAS and we 
can fetch 8x1024 files. I’m also seeing the file descriptors on the NAS for 
each file (per FetchSFTP processor = PID) that has been fetched by NiFi. In 
my understanding these files should be fetched and the file descriptors should be 
closed after the transfer, but this doesn’t seem to be the case most of the 
time.

 

As soon as I’m stopping the “FetchSFTP” processor, the SFTP session seems to be 
closed and all FDs are gone. So after stop/start I can fetch again 1024 files.

 

So I tried to troubleshoot a bit further and here is what I’ve done in NiFi and 
on the NAS:

 

[inline screenshot]

 

So I’ve done a ListSFTP and got 2880 flowfiles; they will be load-balanced to 
one single node (to simplify the test and only get 1 SFTP session on the NAS). 
In the ControlRate I’m transferring 10 flowfiles every 10 seconds to the 
FetchSFTP, which correlates directly with the open file descriptors on my NAS, as 
you can see below. Sometimes, and I don’t know when or why, the SFTP session 
will be closed and everything starts from scratch (not happened here) without 
any notice on the NiFi side. As you see, the FDs are growing by +10 every 10sec, 
and if I check the path/filename of the open FDs I see that these are the 
ones which I’ve fetched.
 

root@nas-01:~# ps aux | grep sftp

root  1740  0.5  0.0 240848  8584 ?Ss   15:01   0:00 sshd: 
ldr@internal-sftp

root  1753  0.0  0.0  23144  2360 pts/2S+   15:01   0:00 grep 
--color=auto sftp

root 15520  0.0  0.0 241088  9252 ?Ss   13:38   0:02 sshd: 
ldr@internal-sftp

root@nas-01:~#

root@nas-01:~# ls -l /proc/1740/fd | wc -l

24

root@nas-01:~# ls -l /proc/1740/fd | wc -l

34

root@nas-01:~# ls -l /proc/1740/fd | wc -l

44

root@nas-01:~# ls -l /proc/1740/fd | wc -l

54

root@nas-01:~# ls -l /proc/1740/fd | wc -l

64

 

root@p-li-nas-01:~# ls -l /proc/1740/fd | head

total 0

lr-x--  1 root root 64 Mar  4 15:01 0 -> pipe:[1086218]

l-wx--  1 root root 64 Mar  4 15:01 1 -> pipe:[1086219]

lr-x--+ 1 ro

[GitHub] [nifi] markap14 commented on a change in pull request #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
markap14 commented on a change in pull request #4088: NIFI-7197 - In-place 
replacement in LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#discussion_r387731844
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java
 ##
 @@ -144,6 +144,18 @@
 .required(true)
 .build();
 
+static final PropertyDescriptor IN_PLACE_REPLACEMENT = new 
PropertyDescriptor.Builder()
 
 Review comment:
   I do think this change makes a lot of sense. However, I am a bit hesitant to 
add a boolean property value to control this. I think it's a bit unclear how 
exactly the behavior of the processor changes, as a "true" or "false" doesn't 
convey well that the entire behavior of the processor is really changed.
   
   Rather, I would recommend a "Strategy" property. Similar to how ReplaceText 
and work, and even this Processor already has a "Routing Strategy" property. 
For example, a property maybe named "Record Update Strategy" with allowable 
values of "Use  Property" and "Replace Existing Values". I 
feel this makes it more clear to the user that the behavior is significantly 
changed by this property. It also allows each of these Allowable Values to have 
a description that explains more clearly exactly how the Processor will behave, 
rather than having a single property whose description attempts to describe 
both behaviors.
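   
   As a hedged sketch of what such a property could look like (names and 
descriptions are illustrative, not the final NiFi implementation):
   
{code}
import org.apache.nifi.components.AllowableValue;
import org.apache.nifi.components.PropertyDescriptor;

public class RecordUpdateStrategySketch {
    static final AllowableValue USE_RESULT_PATH = new AllowableValue(
            "use-result-path", "Use Result RecordPath Property",
            "The lookup result is written to the RecordPath configured in a separate result property.");

    static final AllowableValue REPLACE_EXISTING = new AllowableValue(
            "replace-existing-values", "Replace Existing Values",
            "Each configured RecordPath is overwritten in place with the value returned by the Lookup Service.");

    static final PropertyDescriptor RECORD_UPDATE_STRATEGY = new PropertyDescriptor.Builder()
            .name("record-update-strategy")
            .displayName("Record Update Strategy")
            .description("Specifies how the results of the lookup are written back into the record.")
            .allowableValues(USE_RESULT_PATH, REPLACE_EXISTING)
            .defaultValue(USE_RESULT_PATH.getValue())
            .required(true)
            .build();
}
{code}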


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
markap14 commented on a change in pull request #4088: NIFI-7197 - In-place 
replacement in LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#discussion_r387735460
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java
 ##
 @@ -263,6 +288,70 @@ public void onPropertyModified(final PropertyDescriptor 
descriptor, final String
 protected Set route(final Record record, final RecordSchema 
writeSchema, final FlowFile flowFile, final ProcessContext context,
 final Tuple, RecordPath> flowFileContext) {
 
+final boolean isInPlaceReplacement = 
context.getProperty(IN_PLACE_REPLACEMENT).asBoolean();
+
+if(isInPlaceReplacement) {
+return doInPlaceReplacement(record, writeSchema, flowFile, 
context, flowFileContext);
+} else {
+return doResultPathReplacement(record, writeSchema, flowFile, 
context, flowFileContext);
+}
+
+}
+
+private Set doInPlaceReplacement(Record record, RecordSchema 
writeSchema, FlowFile flowFile,
 
 Review comment:
   `writeSchema` is not used. Should remove it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
markap14 commented on a change in pull request #4088: NIFI-7197 - In-place 
replacement in LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#discussion_r387737802
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java
 ##
 @@ -263,6 +288,70 @@ public void onPropertyModified(final PropertyDescriptor 
descriptor, final String
 protected Set route(final Record record, final RecordSchema 
writeSchema, final FlowFile flowFile, final ProcessContext context,
 final Tuple, RecordPath> flowFileContext) {
 
+final boolean isInPlaceReplacement = 
context.getProperty(IN_PLACE_REPLACEMENT).asBoolean();
+
+if(isInPlaceReplacement) {
+return doInPlaceReplacement(record, writeSchema, flowFile, 
context, flowFileContext);
+} else {
+return doResultPathReplacement(record, writeSchema, flowFile, 
context, flowFileContext);
+}
+
+}
+
+private Set doInPlaceReplacement(Record record, RecordSchema 
writeSchema, FlowFile flowFile,
+ProcessContext context, Tuple, RecordPath> 
flowFileContext) {
+
+final String lookupKey = (String) 
context.getProperty(LOOKUP_SERVICE).asControllerService(LookupService.class).getRequiredKeys().iterator().next();
+
+final Map recordPaths = flowFileContext.getKey();
+final Map lookupCoordinates = new 
HashMap<>(recordPaths.size());
+
+for (final Map.Entry entry : 
recordPaths.entrySet()) {
+final String coordinateKey = entry.getKey();
+final RecordPath recordPath = entry.getValue();
+
+final RecordPathResult pathResult = recordPath.evaluate(record);
+final List lookupFieldValues = 
pathResult.getSelectedFields()
+.filter(fieldVal -> fieldVal.getValue() != null)
+.collect(Collectors.toList());
+
+if (lookupFieldValues.isEmpty()) {
+final Set rels = routeToMatchedUnmatched ? 
UNMATCHED_COLLECTION : SUCCESS_COLLECTION;
+getLogger().debug("RecordPath for property '{}' did not match 
any fields in a record for {}; routing record to {}", new Object[] 
{coordinateKey, flowFile, rels});
+return rels;
+}
+
+for (FieldValue fieldValue : lookupFieldValues) {
+final Object coordinateValue = (fieldValue.getValue() 
instanceof Number || fieldValue.getValue() instanceof Boolean)
+? fieldValue.getValue() : 
DataTypeUtils.toString(fieldValue.getValue(), (String) null);
+lookupCoordinates.put(lookupKey, coordinateValue);
+
+final Optional lookupValueOption;
+try {
+lookupValueOption = 
lookupService.lookup(lookupCoordinates, flowFile.getAttributes());
+} catch (final Exception e) {
+throw new ProcessException("Failed to lookup coordinates " 
+ lookupCoordinates + " in Lookup Service", e);
+}
+
+if (!lookupValueOption.isPresent()) {
+final Set rels = routeToMatchedUnmatched ? 
UNMATCHED_COLLECTION : SUCCESS_COLLECTION;
+return rels;
+}
+
+final Object lookupValue = lookupValueOption.get();
+
+final DataType inferredDataType = 
DataTypeUtils.inferDataType(lookupValue, RecordFieldType.STRING.getDataType());
+fieldValue.updateValue(lookupValue, inferredDataType);
+
+}
+}
+
+final Set rels = routeToMatchedUnmatched ? 
MATCHED_COLLECTION : SUCCESS_COLLECTION;
+return rels;
+}
+
+private Set doResultPathReplacement(Record record, 
RecordSchema writeSchema, FlowFile flowFile,
 
 Review comment:
   `writeSchema` is not used. Can remove it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
markap14 commented on a change in pull request #4088: NIFI-7197 - In-place 
replacement in LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#discussion_r38776
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java
 ##
 @@ -214,24 +227,36 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String
 }
 
 final Set requiredKeys = 
validationContext.getProperty(LOOKUP_SERVICE).asControllerService(LookupService.class).getRequiredKeys();
-final Set missingKeys = requiredKeys.stream()
-.filter(key -> !dynamicPropNames.contains(key))
-.collect(Collectors.toSet());
 
-if (!missingKeys.isEmpty()) {
-final List validationResults = new ArrayList<>();
-for (final String missingKey : missingKeys) {
-final ValidationResult result = new ValidationResult.Builder()
-.subject(missingKey)
-.valid(false)
-.explanation("The configured Lookup Services requires that 
a key be provided with the name '" + missingKey
-+ "'. Please add a new property to this Processor with 
a name '" + missingKey
-+ "' and provide a RecordPath that can be used to 
retrieve the appropriate value.")
-.build();
-validationResults.add(result);
+if(validationContext.getProperty(IN_PLACE_REPLACEMENT).asBoolean()) {
+// it must be a single key lookup service
+if(requiredKeys.size() != 1) {
+return Collections.singleton(new ValidationResult.Builder()
+.subject(LOOKUP_SERVICE.getDisplayName())
+.valid(false)
+.explanation("The configured Lookup Services should 
only require one key when in-place replacement is set to true.")
 
 Review comment:
   Would also recommend updating this explanation if updating the "In-Place 
Replacement" property to be a strategy. Also would even recommend being a bit 
more explicit in telling the user that the configured Lookup Service is not 
compatible with the selected strategy. As written, it sounds like the service 
is doing something wrong because the service "should only require one key".


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #4088: NIFI-7197 - In-place replacement in LookupRecord processor

2020-03-04 Thread GitBox
markap14 commented on a change in pull request #4088: NIFI-7197 - In-place 
replacement in LookupRecord processor
URL: https://github.com/apache/nifi/pull/4088#discussion_r387732307
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java
 ##
 @@ -214,24 +227,36 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String
 }
 
 final Set requiredKeys = 
validationContext.getProperty(LOOKUP_SERVICE).asControllerService(LookupService.class).getRequiredKeys();
-final Set missingKeys = requiredKeys.stream()
-.filter(key -> !dynamicPropNames.contains(key))
-.collect(Collectors.toSet());
 
-if (!missingKeys.isEmpty()) {
-final List validationResults = new ArrayList<>();
-for (final String missingKey : missingKeys) {
-final ValidationResult result = new ValidationResult.Builder()
-.subject(missingKey)
-.valid(false)
-.explanation("The configured Lookup Services requires that 
a key be provided with the name '" + missingKey
-+ "'. Please add a new property to this Processor with 
a name '" + missingKey
-+ "' and provide a RecordPath that can be used to 
retrieve the appropriate value.")
-.build();
-validationResults.add(result);
+if(validationContext.getProperty(IN_PLACE_REPLACEMENT).asBoolean()) {
+// it must be a single key lookup service
+if(requiredKeys.size() != 1) {
+return Collections.singleton(new ValidationResult.Builder()
+.subject(LOOKUP_SERVICE.getDisplayName())
+.valid(false)
+.explanation("The configured Lookup Services should 
only require one key when in-place replacement is set to true.")
 
 Review comment:
   I think this is a typo. Should read "The configured Lookup Service" (i.e., 
Service should not be plural).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mcgilman commented on issue #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on issue #4099: NIFI-7170: Add option to disable anonymous 
authentication
URL: https://github.com/apache/nifi/pull/4099#issuecomment-594581175
 
 
   There is no code that prevents a user with the identity of `anonymous`. From 
an authorization perspective, the authorizer should be checking whether the 
user is anonymous [1] which is ultimately driven by [2]. The fact that a real 
user could have an identity of `anonymous` should still be ok. Let me know if 
I'm missing something.
   
   Thanks for the review!
   
   [1] 
https://github.com/apache/nifi/blob/master/nifi-framework-api/src/main/java/org/apache/nifi/authorization/AuthorizationRequest.java#L129
   [2] 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-authorization/src/main/java/org/apache/nifi/authorization/user/StandardNiFiUser.java#L74


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-7221) Add support for protocol v2 and v3 with Hortonworks Schema Registry

2020-03-04 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-7221:


 Summary: Add support for protocol v2 and v3 with Hortonworks 
Schema Registry
 Key: NIFI-7221
 URL: https://issues.apache.org/jira/browse/NIFI-7221
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


According to [https://registry-project.readthedocs.io/en/latest/serdes.html#]

Support should be added for protocol v2 and v3, which rely on a "schema 
version ID" corresponding to the unique identifier of the schema version as 
stored in the backend database.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to 
disable anonymous authentication
URL: https://github.com/apache/nifi/pull/4099#discussion_r387719324
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/test/java/org/apache/nifi/web/security/anonymous/NiFiAnonymousAuthenticationProviderTest.java
 ##
 @@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.security.anonymous;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUserDetails;
+import org.apache.nifi.util.NiFiProperties;
+import org.apache.nifi.util.StringUtils;
+import org.apache.nifi.web.security.InvalidAuthenticationException;
+import org.apache.nifi.web.security.token.NiFiAuthenticationToken;
+import org.junit.Test;
+import org.mockito.Mockito;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class NiFiAnonymousAuthenticationProviderTest {
+
+private static final Logger logger = 
LoggerFactory.getLogger(NiFiAnonymousAuthenticationProviderTest.class);
+
+@Test
+public void testAnonymousDisabledNotSecure() throws Exception {
 
 Review comment:
   This scenario is testing that the new setting is only applicable when 
running securely. We only reject the anonymous authentication when running 
securely and the instance has not been configured to allow this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to 
disable anonymous authentication
URL: https://github.com/apache/nifi/pull/4099#discussion_r387717265
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/anonymous/NiFiAnonymousAuthenticationRequestToken.java
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.security.anonymous;
+
+import org.apache.nifi.web.security.NiFiAuthenticationRequestToken;
+
+import static 
org.apache.nifi.authorization.user.StandardNiFiUser.ANONYMOUS_IDENTITY;
+
+/**
+ * This is an authentication request for an anonymous user.
+ */
+public class NiFiAnonymousAuthenticationRequestToken extends 
NiFiAuthenticationRequestToken {
+
+final boolean secureRequest;
 
 Review comment:
   No. All users of unsecured instances will be anonymous. The existing filter 
chain follows a pattern where the filter extracts the authentication attempt 
from the incoming request. If the request does not contain an attempt to 
authenticate then it returns null and the next authentication filter is 
checked. 
   
   The authentication provider contains the logic to actually validate the 
authentication attempt. For instance, with JWT the filter extracts the bearer 
token and provides it to the authentication provider to verify the token. If 
there is no bearer token in the request, the filter returns null.
   
   For anonymous authentication, every request is an attempt to authenticate as 
anonymous. The only request specific detail we need for the authentication 
provider to do its job is whether the request was secure or not.
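   
   A simplified, hedged sketch of the filter/provider split described above 
(class and method names are hypothetical, not the actual NiFi security classes):
   
{code}
import javax.servlet.http.HttpServletRequest;

public class AnonymousAuthenticationPatternSketch {

    static final class AnonymousRequestToken {
        final boolean secureRequest;
        AnonymousRequestToken(boolean secureRequest) { this.secureRequest = secureRequest; }
    }

    // "Filter": every request is an attempt to authenticate as anonymous, so the
    // only request-specific detail extracted is whether the request was secure.
    static AnonymousRequestToken extractAttempt(HttpServletRequest request) {
        return new AnonymousRequestToken(request.isSecure());
    }

    // "Provider": reject anonymous access on secure instances unless explicitly allowed.
    static void authenticate(AnonymousRequestToken token, boolean anonymousAllowed) {
        if (token.secureRequest && !anonymousAllowed) {
            throw new IllegalStateException("Anonymous authentication has not been configured.");
        }
    }
}
{code}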


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to 
disable anonymous authentication
URL: https://github.com/apache/nifi/pull/4099#discussion_r387709478
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/anonymous/NiFiAnonymousAuthenticationProvider.java
 ##
 @@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.security.anonymous;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUserDetails;
+import org.apache.nifi.authorization.user.StandardNiFiUser;
+import org.apache.nifi.util.NiFiProperties;
+import org.apache.nifi.web.security.InvalidAuthenticationException;
+import org.apache.nifi.web.security.NiFiAuthenticationProvider;
+import org.apache.nifi.web.security.token.NiFiAuthenticationToken;
+import org.springframework.security.core.Authentication;
+import org.springframework.security.core.AuthenticationException;
+
+/**
+ *
+ */
+public class NiFiAnonymousAuthenticationProvider extends 
NiFiAuthenticationProvider {
+
+final NiFiProperties properties;
+
+public NiFiAnonymousAuthenticationProvider(NiFiProperties nifiProperties, 
Authorizer authorizer) {
+super(nifiProperties, authorizer);
+this.properties = nifiProperties;
+}
+
+@Override
+public Authentication authenticate(Authentication authentication) throws 
AuthenticationException {
+final NiFiAnonymousAuthenticationRequestToken request = 
(NiFiAnonymousAuthenticationRequestToken) authentication;
+
+if (request.isSecureRequest() && 
!properties.isAnonymousAuthenticationAllowed()) {
+throw new InvalidAuthenticationException("Anonymous authentication 
is not been configured.");
 
 Review comment:
   Will update.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to 
disable anonymous authentication
URL: https://github.com/apache/nifi/pull/4099#discussion_r387708816
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/administration-guide.adoc
 ##
 @@ -3269,6 +3271,7 @@ These properties pertain to various security features in 
NiFi. Many of these pro
 |`nifi.security.truststoreType`|The truststore type. It is blank by default.
 |`nifi.security.truststorePasswd`|The truststore password. It is blank by 
default.
 |`nifi.security.user.authorizer`|Specifies which of the configured Authorizers 
in the _authorizers.xml_ file to use.  By default, it is set to `file-provider`.
+|`nifi.security.allow.anonymous.authentication`|Whether anonymous 
authentication is allowed when running over HTTPS. If set to true, this setting 
will also ensure that one way SSL is enabled.
 
 Review comment:
   Will update.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to disable anonymous authentication

2020-03-04 Thread GitBox
mcgilman commented on a change in pull request #4099: NIFI-7170: Add option to 
disable anonymous authentication
URL: https://github.com/apache/nifi/pull/4099#discussion_r387708612
 
 

 ##
 File path: nifi-commons/nifi-properties/src/main/java/org/apache/nifi/util/NiFiProperties.java
 ##
 @@ -904,10 +905,19 @@ public boolean isLoginIdentityProviderEnabled() {
     return !StringUtils.isBlank(getProperty(NiFiProperties.SECURITY_USER_LOGIN_IDENTITY_PROVIDER));
 }
 
+    /**
+     * @return True if property value is 'true'; False otherwise.
+     */
+    public Boolean isAnonymousAuthenticationAllowed() {
+        final String anonymousAuthenticationAllowed = getProperty(SECURITY_ANONYMOUS_AUTHENTICATION, "false");
+
+        return "true".equalsIgnoreCase(anonymousAuthenticationAllowed);
 
 Review comment:
   I think this is a good idea. However [1] is already filed to address this. 
Do you want me to update how we treat this property specifically for this PR 
even if we provide a more general solution later?
   
   [1] https://issues.apache.org/jira/browse/NIFI-7172




[jira] [Commented] (NIFI-3303) escapeJson in ReplaceText

2020-03-04 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051276#comment-17051276
 ] 

Otto Fowler commented on NIFI-3303:
---

I don't think so...
The issue, in my mind, is that the mix of Expression Language and regexes is very difficult to handle in a lot of cases.

I will try to see what is going on; I just need to find some time.

> escapeJson in ReplaceText
> -
>
> Key: NIFI-3303
> URL: https://issues.apache.org/jira/browse/NIFI-3303
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: tianzk
>Priority: Major
> Attachments: ReplaceText_Bug.xml, config.png, dataflow.png
>
>
> I have some problems while using escapeJson and unescapeJson in the ReplaceText
> processor.
> When I give the string He didn’t say, “Stop”! to ReplaceText as input and
> configure ReplaceText as shown in the attached config.png, the output of
> ReplaceText is the same as the input: He didn’t say, “Stop!”; nothing changed.
> As described in the NiFi documentation, the output should be: He didn’t say,
> \"Stop!\”. Did I miss something?
> There are also problems with unescapeJson. If the input is He didn’t say,
> \”Sto\\\"p!\”, the returned string is He didn’t say, ”Sto"p!”.
> My dataflow (GetFile just reads a file with the string as content) is shown in
> dataflow.png.
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387669675
 
 

 ##
 File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
 ##
 @@ -623,7 +624,7 @@ int ConsumeWindowsEventLog::processQueue(const std::shared_ptr
-logger_->log_debug("processQueue processed %d Events in %llu ms",
+logger_->log_debug("processQueue processed %d Events in %" PRIu64 " ms",
 
 Review comment:
   fixed in c38d098




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387669675
 
 

 ##
 File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
 ##
 @@ -623,7 +624,7 @@ int ConsumeWindowsEventLog::processQueue(const std::shared_ptr
-logger_->log_debug("processQueue processed %d Events in %llu ms",
+logger_->log_debug("processQueue processed %d Events in %" PRIu64 " ms",
 
 Review comment:
   fixed in 61107c8b




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387662411
 
 

 ##
 File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
 ##
 @@ -623,7 +624,7 @@ int ConsumeWindowsEventLog::processQueue(const std::shared_ptr
-logger_->log_debug("processQueue processed %d Events in %llu ms",
+logger_->log_debug("processQueue processed %d Events in %" PRIu64 " ms",
 
 Review comment:
   There was a similar issue in #741: 
https://github.com/apache/nifi-minifi-cpp/pull/741#discussion_r386539946




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387660534
 
 

 ##
 File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
 ##
 @@ -623,7 +624,7 @@ int ConsumeWindowsEventLog::processQueue(const std::shared_ptr
-logger_->log_debug("processQueue processed %d Events in %llu ms",
+logger_->log_debug("processQueue processed %d Events in %" PRIu64 " ms",
 
 Review comment:
   Chrono durations are sadly not a good fit for printf-style format strings. The standard doesn't specify the return type of `std::chrono::milliseconds::count`, only that it's a signed integer type of at least 45 bits.
   
   I recommend wrapping the return value in `int64_t{ ... }` and using `PRId64` as the format specifier. With list-initialization, we get a compiler error if a narrowing conversion would be necessary to convert the return value to `int64_t`; until then, we can assume that we're safe and don't lose any data.
   
   This applies to both occurrences of printing durations.
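   
   For illustration only, a minimal standalone sketch of the suggested pattern (not MiNiFi code; the logger call is replaced with plain std::printf and the names are made up):
   
   #include <chrono>
   #include <cinttypes>
   #include <cstdio>
   #include <thread>
   
   int main() {
     const auto start = std::chrono::steady_clock::now();
     std::this_thread::sleep_for(std::chrono::milliseconds(5));  // stand-in for the real work
     const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
         std::chrono::steady_clock::now() - start);
     // int64_t{ ... } uses list-initialization, so compilation fails if converting
     // milliseconds::rep to int64_t would be a narrowing conversion.
     std::printf("processed queue in %" PRId64 " ms\n", int64_t{elapsed.count()});
     return 0;
   }
   
   If narrowing ever became possible on some platform, the failure would then show up at compile time rather than as truncated values in the logs.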
   




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387654905
 
 

 ##
 File path: libminifi/include/FlowControlProtocol.h
 ##
 @@ -168,12 +168,12 @@ class FlowControlProtocol {
   logger_->log_info("NiFi Server Name %s", _serverName);
 }
 if (configure->get(Configure::nifi_server_port, value) && core::Property::StringToInt(value, _serverPort)) {
-  logger_->log_info("NiFi Server Port: [%ll]", _serverPort);
+  logger_->log_info("NiFi Server Port: [%hu]", _serverPort);
 }
 if (configure->get(Configure::nifi_server_report_interval, value)) {
   core::TimeUnit unit;
   if (core::Property::StringToTime(value, _reportInterval, unit) && core::Property::ConvertTimeUnitToMS(_reportInterval, unit, _reportInterval)) {
-logger_->log_info("NiFi server report interval: [%ll] ms", _reportInterval);
+logger_->log_info("NiFi server report interval: [%lld] ms", _reportInterval);
 
 Review comment:
   fixed by da22085 
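
   For reference, a minimal standalone sketch of the corrected specifiers (not MiNiFi code; plain std::printf stands in for the logger and the variable names are hypothetical):
   
   #include <cinttypes>
   #include <cstdint>
   #include <cstdio>
   
   int main() {
     uint16_t server_port = 9443;           // printed with "%hu" (promoted to int, converted back by printf)
     long long report_interval_ms = 15000;  // printed with "%lld"
     uint64_t elapsed_ms = 42;              // fixed-width type, printed via the PRIu64 macro
   
     // A bare "%ll" is a length modifier without a conversion specifier, which is
     // undefined behavior, so every argument gets a complete, matching specifier.
     std::printf("NiFi Server Port: [%hu]\n", server_port);
     std::printf("NiFi server report interval: [%lld] ms\n", report_interval_ms);
     std::printf("processed %d Events in %" PRIu64 " ms\n", 3, elapsed_ms);
     return 0;
   }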




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387654075
 
 

 ##
 File path: libminifi/src/core/yaml/YamlConfiguration.cpp
 ##
 @@ -195,20 +195,20 @@ void YamlConfiguration::parseProcessorNodeYaml(YAML::Node processorsNode, core::
 
 if (procCfg.schedulingStrategy == "TIMER_DRIVEN" || procCfg.schedulingStrategy == "EVENT_DRIVEN") {
   if (core::Property::StringToTime(procCfg.schedulingPeriod, schedulingPeriod, unit) && core::Property::ConvertTimeUnitToNS(schedulingPeriod, unit, schedulingPeriod)) {
-logger_->log_debug("convert: parseProcessorNode: schedulingPeriod => [%ll] ns", schedulingPeriod);
+logger_->log_debug("convert: parseProcessorNode: schedulingPeriod => [%lld] ns", schedulingPeriod);
 
 Review comment:
   fixed by da22085 




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387652866
 
 

 ##
 File path: libminifi/src/core/ProcessSession.cpp
 ##
 @@ -184,7 +184,7 @@ std::shared_ptr ProcessSession::clone(const std::shared_ptr
   if (parent->getResourceClaim()) {
   if ((uint64_t) (offset + size) > parent->getSize()) {
 // Set offset and size
-logger_->log_error("clone offset %ll and size %ll exceed parent size %llu", offset, size, parent->getSize());
+logger_->log_error("clone offset %lld and size %lld exceed parent size %llu", offset, size, parent->getSize());
 
 Review comment:
   fixed by da22085 




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387652316
 
 

 ##
 File path: libminifi/src/RemoteProcessorGroupPort.cpp
 ##
 @@ -360,7 +360,7 @@ std::pair RemoteProcessorGroupPort::refreshRemoteSite2SiteInfo
   return std::make_pair(host, siteTosite_port_);
 }
   } else {
-logger_->log_error("Cannot output body to content for ProcessGroup::refreshRemoteSite2SiteInfo: received HTTP code %ll from %s", client->getResponseCode(), fullUrl.str());
+logger_->log_error("Cannot output body to content for ProcessGroup::refreshRemoteSite2SiteInfo: received HTTP code %lld from %s", client->getResponseCode(), fullUrl.str());
 
 Review comment:
   fixed by da22085 




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387652483
 
 

 ##
 File path: libminifi/src/ThreadedSchedulingAgent.cpp
 ##
 @@ -47,15 +47,15 @@ void ThreadedSchedulingAgent::schedule(std::shared_ptr processo
   if (configure_->get(Configure::nifi_administrative_yield_duration, yieldValue)) {
 core::TimeUnit unit;
 if (core::Property::StringToTime(yieldValue, admin_yield_duration_, unit) && core::Property::ConvertTimeUnitToMS(admin_yield_duration_, unit, admin_yield_duration_)) {
-  logger_->log_debug("nifi_administrative_yield_duration: [%ll] ms", admin_yield_duration_);
+  logger_->log_debug("nifi_administrative_yield_duration: [%lld] ms", admin_yield_duration_);
 
 Review comment:
   fixed by da22085 




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" format strings in log lines

2020-03-04 Thread GitBox
szaszm commented on a change in pull request #744: MINIFICPP-1171 - Fix "%ll" 
format strings in log lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/744#discussion_r387650415
 
 

 ##
 File path: libminifi/src/FlowControlProtocol.cpp
 ##
 @@ -205,9 +205,9 @@ int FlowControlProtocol::sendRegisterReq() {
 return -1;
   }
   logger_->log_debug("Flow Control Protocol receive MsgType %s", FlowControlMsgTypeToStr((FlowControlMsgType) hdr.msgType));
-  logger_->log_debug("Flow Control Protocol receive Seq Num %ll", hdr.seqNumber);
+  logger_->log_debug("Flow Control Protocol receive Seq Num %zu", hdr.seqNumber);
 
 Review comment:
   fixed by da22085 




  1   2   >