[jira] [Commented] (NIFI-5764) Allow ListSftp connection parameter

2018-11-06 Thread Alfredo De Luca (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677794#comment-16677794
 ] 

Alfredo De Luca commented on NIFI-5764:
---

Hi [~ijokarumawak]. 

> Do you see the ListSFTP processor instance work fine for some time, but 
> encounter the auth failure issue at other times against the same SFTP 
> server, without changing any ListSFTP configuration?

That's right. It goes OK, and then randomly we get the errors listed above.

> Do you have access to the SFTP server to see any error happened?

Yes, I manage the SFTP server, and we get the following error:

Nov  7 08:30:09 sftp sshd[23487]: pam_sss(sshd:account): Access denied for user nifi_sftp: 4 (System error)

We use FreeIPA for authentication, and NiFi runs on 3 nodes as Kubernetes pods.

> Did you confirm 'controlmaster' is the only difference between NiFi and the 
> command you used? Did you perform the ssh command without the controlmaster 
> option, and if so, did it fail? I mean:

With a manual ssh there is no problem at all. It's when we have multiple 
connections that we randomly hit that problem.
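A quick way to check this outside NiFi is to open several SFTP sessions concurrently with JSch, the same client library that appears in the ticket's stack trace. This is only a sketch; the host, user, and password below are placeholders, not values from this thread:

{code:java}
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Hypothetical reproduction sketch: open several concurrent SFTP sessions and
// report which ones fail to authenticate. Host, user and password are placeholders.
public class ConcurrentSftpAuthCheck {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    final Session session = new JSch().getSession("nifi_sftp", "sftp.example.com", 22);
                    session.setPassword("secret");
                    session.setConfig("StrictHostKeyChecking", "no");
                    session.connect(10_000);
                    final ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                    sftp.connect(10_000);
                    System.out.println("session " + id + " listed " + sftp.ls(".").size() + " entries");
                    sftp.disconnect();
                    session.disconnect();
                } catch (Exception e) {
                    // A failing session surfaces here, e.g. JSchException: Auth fail
                    System.out.println("session " + id + " failed: " + e);
                }
            }).start();
        }
    }
}
{code}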

 

 
> Allow ListSftp connection parameter
> ---
>
> Key: NIFI-5764
> URL: https://issues.apache.org/jira/browse/NIFI-5764
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: dav
>Priority: Critical
>  Labels: SFTP, customization, sftp
> Attachments: dumpone
>
>
> ListSftp and other Sftp processors should be able to add parameters
> (like [-B buffer_size] [-b batchfile] [-c cipher]
>  [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
>  [-o ssh_option] [-P port] [-R num_requests] [-S program]
>  [-s subsystem | sftp_server] host
>  sftp [user@]host[:file ...]
>  sftp [user@]host[:dir[/]]
>  sftp -b batchfile [user@]host) 
> in order to edit the type of connection on Sftp Server.
> For instance, I have this error on nifi:
> 2018-10-29 11:06:09,462 ERROR [Timer-Driven Process Thread-5] 
> SimpleProcessLogger.java:254 
> ListSFTP[id=766ac418-27ce-335a-5b13-52abe3495592] Failed to perform listing 
> on remote host due to java.io.IOException: Failed to obtain connection to 
> remote host due to com.jcraft.jsch.JSchException: Auth fail: {}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: Auth fail
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:468)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:192)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:156)
>  at 
> org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:105)
>  at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:401)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1147)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175)
>  at 
> org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:140)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: com.jcraft.jsch.JSchException: Auth fail
>  at com.jcraft.jsch.Session.connect(Session.java:519)
>  at com.jcraft.jsch.Session.connect(Session.java:183)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:448)
>  ... 15 common frames omitted
> This can be avoided by connecting to the Sftp server with this string:
> *sftp -o "controlmaster auto" username@sftp_server*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3137: NIFI-5797 : EscapedJava for FlattenJson

2018-11-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/3137
  
Can you please add some explanation for why this change is submitted, 
documentation, and at least one unit test? Thank you. 


---


[jira] [Commented] (NIFI-5797) FlattenJson processor converts special characters to hex

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677785#comment-16677785
 ] 

ASF GitHub Bot commented on NIFI-5797:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/3137
  
Can you please add some explanation for why this change is submitted, 
documentation, and at least one unit test? Thank you. 


> FlattenJson processor converts special characters to hex
> 
>
> Key: NIFI-5797
> URL: https://issues.apache.org/jira/browse/NIFI-5797
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Ravi Bhardwaj
>Priority: Major
>
> FlattenJson will convert the special characters in the json to hex format
>  
> Example Input:
> {"name": "http://localhost:8080/nifi","full": \{   "name": "José Muñoz"}
> Output:
> {"name":"http://localhost:8080/nifi","full.name":"Jos\u00E9 Mu\u00F1oz"}
> Expected Output:
> {"name":"http://localhost:8080/nifi","full.name":"José Muñoz"}
>  
> Possibly regression from NIFI-4962



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5797) FlattenJson processor converts special characters to hex

2018-11-06 Thread Ravi Bhardwaj (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhardwaj updated NIFI-5797:

External issue URL: https://github.com/apache/nifi/pull/3137

> FlattenJson processor converts special characters to hex
> 
>
> Key: NIFI-5797
> URL: https://issues.apache.org/jira/browse/NIFI-5797
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Ravi Bhardwaj
>Priority: Major
>
> FlattenJson will convert the special characters in the json to hex format
>  
> Example Input:
> {"name": "http://localhost:8080/nifi","full": \{   "name": "José Muñoz"}
> Output:
> {"name":"http://localhost:8080/nifi","full.name":"Jos\u00E9 Mu\u00F1oz"}
> Expected Output:
> {"name":"http://localhost:8080/nifi","full.name":"José Muñoz"}
>  
> Possibly regression from NIFI-4962



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3137: NIFI-5797 : EscapedJava for FlattenJson

2018-11-06 Thread ravib777
GitHub user ravib777 opened a pull request:

https://github.com/apache/nifi/pull/3137

NIFI-5797 : EscapedJava for FlattenJson

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ravib777/nifi 
NIFI-5797-FlattenJson-Special-Chars-Doesnt-work

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3137


commit c1c1ef41e541601d913fcda68212a4cff12818cd
Author: Ravi Bhardwaj 
Date:   2018-11-07T07:24:12Z

NIFI-5797 : EscapedJava for FlattenJson




---


[jira] [Commented] (NIFI-5797) FlattenJson processor converts special characters to hex

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677784#comment-16677784
 ] 

ASF GitHub Bot commented on NIFI-5797:
--

GitHub user ravib777 opened a pull request:

https://github.com/apache/nifi/pull/3137

NIFI-5797 : EscapedJava for FlattenJson

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ravib777/nifi 
NIFI-5797-FlattenJson-Special-Chars-Doesnt-work

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3137


commit c1c1ef41e541601d913fcda68212a4cff12818cd
Author: Ravi Bhardwaj 
Date:   2018-11-07T07:24:12Z

NIFI-5797 : EscapedJava for FlattenJson




> FlattenJson processor converts special characters to hex
> 
>
> Key: NIFI-5797
> URL: https://issues.apache.org/jira/browse/NIFI-5797
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Ravi Bhardwaj
>Priority: Major
>
> FlattenJson will convert the special characters in the json to hex format
>  
> Example Input:
> {"name": "http://localhost:8080/nifi","full": \{   "name": "José Muñoz"}
> Output:
> {"name":"http://localhost:8080/nifi","full.name":"Jos\u00E9 Mu\u00F1oz"}
> Expected Output:
> {"name":"http://localhost:8080/nifi","full.name":"José Muñoz"}
>  
> Possibly regression from NIFI-4962



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5797) FlattenJson processor converts special characters to hex

2018-11-06 Thread Ravi Bhardwaj (JIRA)
Ravi Bhardwaj created NIFI-5797:
---

 Summary: FlattenJson processor converts special characters to hex
 Key: NIFI-5797
 URL: https://issues.apache.org/jira/browse/NIFI-5797
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.9.0
Reporter: Ravi Bhardwaj


FlattenJson will convert the special characters in the json to hex format

 

Example Input:

{"name": "http://localhost:8080/nifi","full": \{   "name": "José Muñoz"}

Output:

{"name":"http://localhost:8080/nifi","full.name":"Jos\u00E9 Mu\u00F1oz"}

Expected Output:

{"name":"http://localhost:8080/nifi","full.name":"José Muñoz"}

 

Possibly regression from NIFI-4962
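For illustration only: the \u00E9 / \u00F1 output above is exactly what Java-style string escaping produces for non-ASCII characters. Whether FlattenJson applies this particular utility is an assumption; the snippet just shows where the hex form comes from.

{code:java}
import org.apache.commons.text.StringEscapeUtils;

// Requires org.apache.commons:commons-text. Shows that Java-style escaping
// turns non-ASCII characters into \uXXXX sequences, the form seen in the output.
public class EscapeDemo {
    public static void main(String[] args) {
        System.out.println(StringEscapeUtils.escapeJava("José Muñoz"));
        // prints: Jos\u00E9 Mu\u00F1oz
    }
}
{code}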



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5752) Load balancing fails with wildcard certs

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677766#comment-16677766
 ] 

ASF GitHub Bot commented on NIFI-5752:
--

Github user kotarot commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3110#discussion_r231397141
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/ClusterLoadBalanceAuthorizer.java
 ---
@@ -33,14 +42,27 @@
 
 private final ClusterCoordinator clusterCoordinator;
 private final EventReporter eventReporter;
+private final HostnameVerifier hostnameVerifier;
 
 public ClusterLoadBalanceAuthorizer(final ClusterCoordinator 
clusterCoordinator, final EventReporter eventReporter) {
 this.clusterCoordinator = clusterCoordinator;
 this.eventReporter = eventReporter;
+this.hostnameVerifier = new DefaultHostnameVerifier();
 }
 
 @Override
-public String authorize(final Collection<String> clientIdentities) 
throws NotAuthorizedException {
+public String authorize(SSLSocket sslSocket) throws 
NotAuthorizedException, IOException {
+final SSLSession sslSession = sslSocket.getSession();
+
+final Set<String> clientIdentities;
+try {
+clientIdentities = getCertificateIdentities(sslSession);
+} catch (final CertificateException e) {
+throw new IOException("Failed to extract Client Certificate", 
e);
+}
+
+logger.debug("Will perform authorization against Client Identities 
'{}'", clientIdentities);
+
 if (clientIdentities == null) {
--- End diff --

@ijokarumawak OK, I get it now. Thanks for kindly telling me that. I pushed 
a new commit. Please check it. Thanks!


> Load balancing fails with wildcard certs
> 
>
> Key: NIFI-5752
> URL: https://issues.apache.org/jira/browse/NIFI-5752
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Kotaro Terada
>Assignee: Kotaro Terada
>Priority: Major
>
> Load balancing fails when we construct a secure cluster with wildcard certs.
> For example, assume that we have a valid wildcard cert for {{*.example.com}} 
> and a cluster consists of {{nf1.example.com}}, {{nf2.example.com}}, and 
> {{nf3.example.com}} . We cannot transfer a FlowFile between nodes for load 
> balancing because of the following authorization error:
> {noformat}
> 2018-10-25 19:05:13,520 WARN [Load Balance Server Thread-2] 
> o.a.n.c.q.c.s.ClusterLoadBalanceAuthorizer Authorization failed for Client 
> ID's [*.example.com] to Load Balance data because none of the ID's are known 
> Cluster Node Identifiers
> 2018-10-25 19:05:13,521 ERROR [Load Balance Server Thread-2] 
> o.a.n.c.q.c.s.ConnectionLoadBalanceServer Failed to communicate with Peer 
> /xxx.xxx.xxx.xxx:x
> org.apache.nifi.controller.queue.clustered.server.NotAuthorizedException: 
> Client ID's [*.example.com] are not authorized to Load Balance data
>   at 
> org.apache.nifi.controller.queue.clustered.server.ClusterLoadBalanceAuthorizer.authorize(ClusterLoadBalanceAuthorizer.java:65)
>   at 
> org.apache.nifi.controller.queue.clustered.server.StandardLoadBalanceProtocol.receiveFlowFiles(StandardLoadBalanceProtocol.java:142)
>   at 
> org.apache.nifi.controller.queue.clustered.server.ConnectionLoadBalanceServer$CommunicateAction.run(ConnectionLoadBalanceServer.java:176)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> This problem occurs because in {{authorize}} method in 
> {{ClusterLoadBalanceAuthorizer}} class, authorization is tried by just 
> matching strings.
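A minimal sketch of the alternative the PR's diff above points at (hostname verification instead of string equality); everything beyond the DefaultHostnameVerifier shown in that diff is illustrative:

{code:java}
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSocket;
import org.apache.http.conn.ssl.DefaultHostnameVerifier;

// Sketch only: authorize a load-balance peer by verifying its certificate
// against the expected node hostname. With this check, a "*.example.com"
// certificate matches "nf1.example.com" even though the strings differ.
class WildcardPeerCheck {
    private final HostnameVerifier verifier = new DefaultHostnameVerifier();

    boolean isAuthorized(final SSLSocket socket, final String expectedNodeHostname) {
        return verifier.verify(expectedNodeHostname, socket.getSession());
    }
}
{code}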



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3110: NIFI-5752: Load balancing fails with wildcard certs

2018-11-06 Thread kotarot
Github user kotarot commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3110#discussion_r231397141
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/ClusterLoadBalanceAuthorizer.java
 ---
@@ -33,14 +42,27 @@
 
 private final ClusterCoordinator clusterCoordinator;
 private final EventReporter eventReporter;
+private final HostnameVerifier hostnameVerifier;
 
 public ClusterLoadBalanceAuthorizer(final ClusterCoordinator 
clusterCoordinator, final EventReporter eventReporter) {
 this.clusterCoordinator = clusterCoordinator;
 this.eventReporter = eventReporter;
+this.hostnameVerifier = new DefaultHostnameVerifier();
 }
 
 @Override
-public String authorize(final Collection<String> clientIdentities) 
throws NotAuthorizedException {
+public String authorize(SSLSocket sslSocket) throws 
NotAuthorizedException, IOException {
+final SSLSession sslSession = sslSocket.getSession();
+
+final Set<String> clientIdentities;
+try {
+clientIdentities = getCertificateIdentities(sslSession);
+} catch (final CertificateException e) {
+throw new IOException("Failed to extract Client Certificate", 
e);
+}
+
+logger.debug("Will perform authorization against Client Identities 
'{}'", clientIdentities);
+
 if (clientIdentities == null) {
--- End diff --

@ijokarumawak OK, I get it now. Thanks for kindly telling me that. I pushed 
a new commit. Please check it. Thanks!


---


[jira] [Comment Edited] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-06 Thread Jeff Storck (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677696#comment-16677696
 ] 

Jeff Storck edited comment on NIFI-5748 at 11/7/18 5:26 AM:


The proxy-nifi-docker repo can be used to test the PR.  It creates several 
containers:
- NiFi
- Traefik
- Knox
- LDAP

nginx will be added soon.


was (Author: jtstorck):
The proxy-nifi-docker repo can be used to test this PR.  It creates several 
containers:
- NiFi
- Traefik
- Knox
- LDAP

nginx will be added soon.

> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to re-write URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = 
> "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = 
> "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, 
> FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, 
> FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, 
> FORWARDED_PORT_HTTP_HEADER);
> {code}
> Based on this, it looks like if both {{X-Forwarded-Server}} and 
> {{X-Forwarded-Host}} are set, that {{-Host}} will contain the hostname the 
> user/client requested, and {{-Server}} will contain the hostname of the proxy 
> server (ie, what the proxy server is able to determine from inspecting the 
> hostname of the instance it is on). See this for more details:
> https://stackoverflow.com/questions/43689625/x-forwarded-host-vs-x-forwarded-server
> Here is a demo based on docker containers and a reverse-proxy called Traefik 
> that shows where the current NiFi logic can break:
> https://gist.github.com/kevdoran/2892004ccbfbb856115c8a756d9d4538
> To use this gist, you can run the following:
> {noformat}
> wget -qO- 
> https://gist.githubusercontent.com/kevdoran/2892004ccbfbb856115c8a756d9d4538/raw/fb72151900d4d8fdcf4919fe5c8a94805fbb8401/docker-compose.yml
>  | docker-compose -f - up
> {noformat}
> Once the environment is up, go to http://nifi.docker.localhost/nifi in Chrome 
> and try adding/configuring/moving a processor. This will reproduce the issue.
> For this task, the following improvement is recommended:
> Change the Header (string literal) for FORWARDED_HOST_HTTP_HEADER from 
> "X-Forwarded-Server" to "X-Forwarded-Host"
> Additionally, some proxies use a different header for the context path 
> prefix. Traefik uses {{X-Forwarded-Prefix}}. There does not appear to be a 
> universal standard. In the future, we could make this header configurable, 
> but for now, perhaps we should add {{X-Forwarded-Prefix}} to the headers 
> checked by NiFi so that Traefik is supported out-of-the-box.
> *Important:* After making this change, verify that proxying NiFi via Knox 
> still works, i.e., Knox should be sending the X-Forwarded-Host header that 
> matches what the user requested in the browser.
> This change applies to NiFi Registry as well.
> /cc [~jtstorck]
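A short sketch of the header precedence the ticket recommends, assuming servlet-style request handling; the header names come from the quoted ApplicationResource excerpt plus the proposed X-Forwarded-Host, and the class itself is illustrative:

{code:java}
import javax.servlet.http.HttpServletRequest;

// Illustrative only: prefer NiFi's own X-ProxyHost header, then the de-facto
// standard X-Forwarded-Host proposed by this ticket, then fall back to the
// request's own server name.
class ForwardedHostResolver {
    static String resolveHost(final HttpServletRequest request) {
        final String proxyHost = request.getHeader("X-ProxyHost");
        if (proxyHost != null && !proxyHost.isEmpty()) {
            return proxyHost;
        }
        final String forwardedHost = request.getHeader("X-Forwarded-Host");
        if (forwardedHost != null && !forwardedHost.isEmpty()) {
            return forwardedHost;
        }
        return request.getServerName();
    }
}
{code}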



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-06 Thread Jeff Storck (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677696#comment-16677696
 ] 

Jeff Storck commented on NIFI-5748:
---

The proxy-nifi-docker repo can be used to test this PR.  It creates several 
containers:
- NiFi
- Traefik
- Knox
- LDAP

nginx will be added soon.

> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to re-write URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = 
> "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = 
> "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, 
> FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, 
> FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, 
> FORWARDED_PORT_HTTP_HEADER);
> {code}
> Based on this, it looks like if both {{X-Forwarded-Server}} and 
> {{X-Forwarded-Host}} are set, that {{-Host}} will contain the hostname the 
> user/client requested, and {{-Server}} will contain the hostname of the proxy 
> server (ie, what the proxy server is able to determine from inspecting the 
> hostname of the instance it is on). See this for more details:
> https://stackoverflow.com/questions/43689625/x-forwarded-host-vs-x-forwarded-server
> Here is a demo based on docker containers and a reverse-proxy called Traefik 
> that shows where the current NiFi logic can break:
> https://gist.github.com/kevdoran/2892004ccbfbb856115c8a756d9d4538
> To use this gist, you can run the following:
> {noformat}
> wget -qO- 
> https://gist.githubusercontent.com/kevdoran/2892004ccbfbb856115c8a756d9d4538/raw/fb72151900d4d8fdcf4919fe5c8a94805fbb8401/docker-compose.yml
>  | docker-compose -f - up
> {noformat}
> Once the environment is up, go to http://nifi.docker.localhost/nifi in Chrome 
> and try adding/configuring/moving a processor. This will reproduce the issue.
> For this task, the following improvement is recommended:
> Change the Header (string literal) for FORWARDED_HOST_HTTP_HEADER from 
> "X-Forwarded-Server" to "X-Forwarded-Host"
> Additionally, some proxies use a different header for the context path 
> prefix. Traefik uses {{X-Forwarded-Prefix}}. There does not appear to be a 
> universal standard. In the future, we could make this header configurable, 
> but for now, perhaps we should add {{X-Forwarded-Prefix}} to the headers 
> checked by NiFi so that Traefik is supported out-of-the-box.
> *Important:* After making this change, verify that proxying NiFi via Knox 
> still works, i.e., Knox should be sending the X-Forwarded-Host header that 
> matches what the user requested in the browser.
> This change applies to NiFi Registry as well.
> /cc [~jtstorck]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3129: [WIP] NIFI-5748 Fixed proxy header support to use X-Forwar...

2018-11-06 Thread jtstorck
Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/3129
  
https://github.com/jtstorck/proxy-nifi-docker can be used to test this PR.

There's an issue in NiFi with the handling of X-Forwarded-Host when Knox is 
proxying NiFi, which doesn't currently account for the port being present in 
that header.  I'll update the code to handle this case, and update the PR.
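As a sketch of the case described here (hypothetical helper, not the PR's code): X-Forwarded-Host may arrive as either "host" or "host:port", so the value needs to be split before the URI is rebuilt.

{code:java}
// Hypothetical helper: split an X-Forwarded-Host value into host and optional
// port. IPv6 literals are not handled; this only illustrates the point above.
class ForwardedHostParser {
    static String[] splitHostAndPort(final String forwardedHost) {
        final int colon = forwardedHost.lastIndexOf(':');
        if (colon < 0) {
            return new String[] { forwardedHost, null };      // no port present
        }
        return new String[] { forwardedHost.substring(0, colon), forwardedHost.substring(colon + 1) };
    }
}
{code}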


---


[jira] [Commented] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677691#comment-16677691
 ] 

ASF GitHub Bot commented on NIFI-5748:
--

Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/3129
  
https://github.com/jtstorck/proxy-nifi-docker can be used to test this PR.

There's an issue in NiFi with the handling of X-Forwarded-Host when Knox is 
proxying NiFi, which doesn't currently account for the port being present in 
that header.  I'll update the code to handle this case, and update the PR.


> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to re-write URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = 
> "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = 
> "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, 
> FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, 
> FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, 
> FORWARDED_PORT_HTTP_HEADER);
> {code}
> Based on this, it looks like if both {{X-Forwarded-Server}} and 
> {{X-Forwarded-Host}} are set, that {{-Host}} will contain the hostname the 
> user/client requested, and {{-Server}} will contain the hostname of the proxy 
> server (ie, what the proxy server is able to determine from inspecting the 
> hostname of the instance it is on). See this for more details:
> https://stackoverflow.com/questions/43689625/x-forwarded-host-vs-x-forwarded-server
> Here is a demo based on docker containers and a reverse-proxy called Traefik 
> that shows where the current NiFi logic can break:
> https://gist.github.com/kevdoran/2892004ccbfbb856115c8a756d9d4538
> To use this gist, you can run the following:
> {noformat}
> wget -qO- 
> https://gist.githubusercontent.com/kevdoran/2892004ccbfbb856115c8a756d9d4538/raw/fb72151900d4d8fdcf4919fe5c8a94805fbb8401/docker-compose.yml
>  | docker-compose -f - up
> {noformat}
> Once the environment is up, go to http://nifi.docker.localhost/nifi in Chrome 
> and try adding/configuring/moving a processor. This will reproduce the issue.
> For this task, the following improvement is recommended:
> Change the Header (string literal) for FORWARDED_HOST_HTTP_HEADER from 
> "X-Forwarded-Server" to "X-Forwarded-Host"
> Additionally, some proxies use a different header for the context path 
> prefix. Traefik uses {{X-Forwarded-Prefix}}. There does not appear to be a 
> universal standard. In the future, we could make this header configurable, 
> but for now, perhaps we should add {{X-Forwarded-Prefix}} to the headers 
> checked by NiFi so that Traefik is supported out-of-the-box.
> *Important:* After making this change, verify that proxying NiFi via Knox 
> still works, i.e., Knox should be sending the X-Forwarded-Host header that 
> matches what the user requested in the browser.
> This change applies to NiFi Registry as well.
> /cc [~jtstorck]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP

2018-11-06 Thread Kislay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677670#comment-16677670
 ] 

Kislay Kumar commented on NIFI-4621:


Sure. Can you please assign this task to me? It would be easy for me to keep 
track.

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy), or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case for allowing ListSFTP to leverage the NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.
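What enabling expression language on such a property typically looks like in a NiFi processor is sketched below; the property name mirrors the ticket, everything else is illustrative and not an actual patch:

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.expression.ExpressionLanguageScope;
import org.apache.nifi.processor.util.StandardValidators;

// Illustrative property definition: let Hostname (and, by the same pattern,
// Password and Remote Path) be driven by incoming flowfile attributes via EL.
public class ListSftpPropertiesSketch {
    public static final PropertyDescriptor HOSTNAME = new PropertyDescriptor.Builder()
            .name("Hostname")
            .description("SFTP server hostname, e.g. ${sftp.host} from an incoming flowfile")
            .required(true)
            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}
{code}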



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5787) Update ReadMe doc to start Nifi on windows

2018-11-06 Thread Brandon Jiang (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677661#comment-16677661
 ] 

Brandon Jiang commented on NIFI-5787:
-

Thanks [~ijokarumawak] !

> Update ReadMe doc to start Nifi on windows
> --
>
> Key: NIFI-5787
> URL: https://issues.apache.org/jira/browse/NIFI-5787
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.8.0
>Reporter: Brandon Jiang
>Assignee: Brandon Jiang
>Priority: Minor
> Fix For: 1.9.0
>
>
> Update the 1.8 assembly ReadMe doc for starting NiFi on Windows. Change it to 
> run-nifi.bat.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP

2018-11-06 Thread Peter Wicks (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677659#comment-16677659
 ] 

Peter Wicks commented on NIFI-4621:
---

Sounds great. Let me know when it's PR'd and I can review.

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy), or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case for allowing ListSFTP to leverage the NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4621) Allow inputs to ListSFTP

2018-11-06 Thread Kislay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677638#comment-16677638
 ] 

Kislay Kumar edited comment on NIFI-4621 at 11/7/18 3:46 AM:
-

[~patricker]: I would like to pick this task if no one is working on it. 


was (Author: kislayom):
I would like to pick this task if no one is working. 

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy), or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case for allowing ListSFTP to leverage the NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP

2018-11-06 Thread Kislay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677638#comment-16677638
 ] 

Kislay Kumar commented on NIFI-4621:


I would like to pick this task if no one is working on it. 

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy), or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case for allowing ListSFTP to leverage the NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5764) Allow ListSftp connection parameter

2018-11-06 Thread Koji Kawamura (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677574#comment-16677574
 ] 

Koji Kawamura commented on NIFI-5764:
-

[~dav] Thanks for sharing the dump. But I couldn't find any SFTP related thread 
in it.

> It is something annoying because some SFTPList processors works fine other 
> not, but in random way.

Do you see the ListSFTP processor instance work fine for some time, but 
encounter the auth failure issue at other times against the same SFTP server, 
without changing any ListSFTP configuration?

Do you have access to the SFTP server to see any error happened?

Did you confirm 'controlmaster' is the only difference between NiFi and the 
command you used? Did you perform the ssh command without the controlmaster 
option, and if so, did it fail? I mean:
{code}
# While NiFi ListSFTP is failing, the following command works:
sftp -o "controlmaster auto" username@sftp_server
# Then, without the controlmaster option, does the command fail?
sftp username@sftp_server
{code}
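For comparison with the commands above: NiFi connects through JSch rather than the OpenSSH client, so "-o controlmaster auto" (OpenSSH connection multiplexing) has no direct equivalent there. Per-session options that JSch does understand can be set as sketched below; the keys are examples only, not a proposal from this thread.

{code:java}
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Sketch: setting per-session SSH options on a JSch session. The caller still
// supplies credentials and calls connect().
class JschOptionsSketch {
    static Session openSession(final String user, final String host) throws Exception {
        final Session session = new JSch().getSession(user, host, 22);
        session.setConfig("PreferredAuthentications", "publickey,password");
        session.setConfig("StrictHostKeyChecking", "no");
        return session;
    }
}
{code}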




> Allow ListSftp connection parameter
> ---
>
> Key: NIFI-5764
> URL: https://issues.apache.org/jira/browse/NIFI-5764
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: dav
>Priority: Critical
>  Labels: SFTP, customization, sftp
> Attachments: dumpone
>
>
> ListSftp and other Sftp processors should be able to add parameters
> (like [-B buffer_size] [-b batchfile] [-c cipher]
>  [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
>  [-o ssh_option] [-P port] [-R num_requests] [-S program]
>  [-s subsystem | sftp_server] host
>  sftp [user@]host[:file ...]
>  sftp [user@]host[:dir[/]]
>  sftp -b batchfile [user@]host) 
> in order to edit the type of connection on Sftp Server.
> For instance, I have this error on nifi:
> 2018-10-29 11:06:09,462 ERROR [Timer-Driven Process Thread-5] 
> SimpleProcessLogger.java:254 
> ListSFTP[id=766ac418-27ce-335a-5b13-52abe3495592] Failed to perform listing 
> on remote host due to java.io.IOException: Failed to obtain connection to 
> remote host due to com.jcraft.jsch.JSchException: Auth fail: {}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: Auth fail
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:468)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:192)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:156)
>  at 
> org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:105)
>  at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:401)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1147)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175)
>  at 
> org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:140)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: com.jcraft.jsch.JSchException: Auth fail
>  at com.jcraft.jsch.Session.connect(Session.java:519)
>  at com.jcraft.jsch.Session.connect(Session.java:183)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:448)
>  ... 15 common frames omitted
> This can be avoided by connecting to the Sftp server with this string:
> *sftp -o "controlmaster auto" username@sftp_server*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4625) Add External Versioning to PutElasticSearch5 Processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677494#comment-16677494
 ] 

ASF GitHub Bot commented on NIFI-4625:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2287
  
There has been some recent interest in this feature, do you think you'll be 
able to continue with this? Seems like a good feature to add.


> Add External Versioning to PutElasticSearch5 Processor
> --
>
> Key: NIFI-4625
> URL: https://issues.apache.org/jira/browse/NIFI-4625
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Pedro Gomes
>Assignee: Pedro Gomes
>Priority: Major
>  Labels: elasticsearch, processor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently the PutElasticSearch5 processor does not support external 
> versioning.
> The idea would be to add a property that follows the same logic as the Id 
> property and allows indexing documents with an externally controlled version.
> I've changed the code already and added some tests. Right now the changes 
> proposed are:
> - Add a new property Version in the processor block.
> - Change the Index operation to support the version number and versioning 
> type = external
> - Check if versioning is used with other operation types, and fail if so.
> (The idea behind this is that the bulk API does not support external versioning 
> for any operation except Index.)
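As background for what "external" versioning means at the Elasticsearch 5.x Java API level (this is not the processor change itself; index, type, id, and version are placeholders):

{code:java}
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.index.VersionType;

// The caller supplies the version number; Elasticsearch accepts the write only
// if it is higher than the version already stored for the document.
class ExternalVersionSketch {
    static IndexRequest buildRequest(final String json, final long externalVersion) {
        return new IndexRequest("my-index", "my-type", "doc-1")
                .source(json)
                .version(externalVersion)
                .versionType(VersionType.EXTERNAL);
    }
}
{code}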



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2287: NIFI-4625 - Added External Version to the PutElastic5 Proc...

2018-11-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2287
  
There has been some recent interest in this feature, do you think you'll be 
able to continue with this? Seems like a good feature to add.


---


[jira] [Commented] (NIFIREG-205) NiFi Registry DB gets out of sync with git repository, no apparent remediation

2018-11-06 Thread Dye357 (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677397#comment-16677397
 ] 

Dye357 commented on NIFIREG-205:


Thanks for fixing this!!

> NiFi Registry DB gets out of sync with git repository, no apparent remediation
> -
>
> Key: NIFIREG-205
> URL: https://issues.apache.org/jira/browse/NIFIREG-205
> Project: NiFi Registry
>  Issue Type: Bug
> Environment: Centos 7.5
>Reporter: Dye357
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 0.4.0
>
>
> I've observed a couple of issues with the GitFlowPersistenceAdapter:
>  # When adding a new process group to NIFIREG, if for any reason the git 
> repository is in a "dirty" (untracked file) state, adding the process 
> group fails. However, an entry is still created in the DB with a version of 0. 
> Once in this state you cannot delete the flow from NIFIREG and you cannot 
> restart version control from NiFi with the same name. I assume the only way 
> to fix this is to manually go into the DB and delete the record.
>  # When using Remote To Push, if the push fails, the same behavior as in #1 is 
> exhibited. It's not reasonable to expect that a push will always succeed; the 
> remote git repository could be offline for maintenance, etc.
> Steps to reproduce:
>  # Start NiFi Registry with an empty DB and a clean git repo.
>  # Add an untracked file to the git repo but do not commit it.
>  # Start a process group under version control.
>  # Expect a failure in the NiFi UI.
>  # Expect an exception in the log about untracked files in the git repo.
>  # Delete the flow from nifi-registry using Actions -> Delete.
>  # Expect the failure case: receive an "error deleting flow" message.
>  # Refresh the nifi-registry UI - the flow is still present, and the version is 0.
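A sketch of the pre-condition the report implies, assuming JGit (which the registry's git persistence is built on); the helper itself is hypothetical:

{code:java}
import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.Status;

// Hypothetical check: refuse to persist a new flow version while the working
// tree has untracked or modified files, instead of failing after the DB write.
class DirtyRepoCheck {
    static void assertClean(final File repoDir) throws Exception {
        try (Git git = Git.open(repoDir)) {
            final Status status = git.status().call();
            if (!status.isClean()) {
                throw new IllegalStateException("Git repo is dirty: untracked="
                        + status.getUntracked() + ", modified=" + status.getModified());
            }
        }
    }
}
{code}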



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677385#comment-16677385
 ] 

ASF GitHub Bot commented on NIFI-5796:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3136


> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread Bryan Bende (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-5796:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3136: NIFI-5796: Addressed issue that caused Counters' va...

2018-11-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3136


---


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677383#comment-16677383
 ] 

ASF subversion and git services commented on NIFI-5796:
---

Commit da1f9eaf6a82f7cb0b10cf94c708e3b800071972 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=da1f9ea ]

NIFI-5796 Addressed bug in subtract() method for keeping running total of 
counters for status history

This closes #3136.

Signed-off-by: Bryan Bende 
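A hypothetical illustration of the arithmetic the commit message refers to (not the committed code): status-history counters are cumulative totals, so the value shown for each snapshot should be the difference between consecutive totals.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-snapshot counter value = current total - previous total.
class CounterDeltaSketch {
    static Map<String, Long> subtract(final Map<String, Long> current, final Map<String, Long> previous) {
        final Map<String, Long> delta = new HashMap<>();
        for (final Map.Entry<String, Long> entry : current.entrySet()) {
            delta.put(entry.getKey(), entry.getValue() - previous.getOrDefault(entry.getKey(), 0L));
        }
        return delta;
    }
}
{code}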


> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677382#comment-16677382
 ] 

ASF subversion and git services commented on NIFI-5796:
---

Commit 4069d755505444b8730fe7f3f4a1a702a281e78c in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4069d75 ]

NIFI-5796: Addressed issue that caused Counters' values to show the wrong value 
in Status History

Signed-off-by: Bryan Bende 


> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677381#comment-16677381
 ] 

ASF GitHub Bot commented on NIFI-5796:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3136
  
Looks good, going to merge


> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3136: NIFI-5796: Addressed issue that caused Counters' values to...

2018-11-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3136
  
Looks good, going to merge


---


[GitHub] nifi issue #3136: NIFI-5796: Addressed issue that caused Counters' values to...

2018-11-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3136
  
Reviewing...


---


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677336#comment-16677336
 ] 

ASF GitHub Bot commented on NIFI-5796:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3136
  
Reviewing...


> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5796:
-
Fix Version/s: 1.9.0
   Status: Patch Available  (was: Open)

> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677305#comment-16677305
 ] 

ASF GitHub Bot commented on NIFI-5796:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3136

NIFI-5796: Addressed issue that caused Counters' values to show the w…

…rong value in Status History

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5796

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3136


commit ce7633cd010d3db36806a2311096c9fb7b46223a
Author: Mark Payne 
Date:   2018-11-06T21:20:57Z

NIFI-5796: Addressed issue that caused Counters' values to show the wrong 
value in Status History




> Processors' Counter values in Status History not showing correct value
> --
>
> Key: NIFI-5796
> URL: https://issues.apache.org/jira/browse/NIFI-5796
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3136: NIFI-5796: Addressed issue that caused Counters' va...

2018-11-06 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3136

NIFI-5796: Addressed issue that caused Counters' values to show the wrong value in Status History

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5796

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3136


commit ce7633cd010d3db36806a2311096c9fb7b46223a
Author: Mark Payne 
Date:   2018-11-06T21:20:57Z

NIFI-5796: Addressed issue that caused Counters' values to show the wrong 
value in Status History




---


[jira] [Created] (NIFI-5796) Processors' Counter values in Status History not showing correct value

2018-11-06 Thread Mark Payne (JIRA)
Mark Payne created NIFI-5796:


 Summary: Processors' Counter values in Status History not showing 
correct value
 Key: NIFI-5796
 URL: https://issues.apache.org/jira/browse/NIFI-5796
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mark Payne
Assignee: Mark Payne






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-06 Thread ottobackwards
Github user ottobackwards commented on the issue:

https://github.com/apache/nifi/pull/3130
  
Isn't this like the Jolt capability? 


---


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677296#comment-16677296
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

Github user ottobackwards commented on the issue:

https://github.com/apache/nifi/pull/3130
  
Isn't this like the Jolt capability? 


> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-627) Remove unnecessary ternary operators, variable shadowing

2018-11-06 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-627.
---
   Resolution: Fixed
Fix Version/s: 0.6.0

> Remove unnecessary ternary operators, variable shadowing
> 
>
> Key: MINIFICPP-627
> URL: https://issues.apache.org/jira/browse/MINIFICPP-627
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Trivial
> Fix For: 0.6.0
>
>
> There are some "? true : false" operations in the code, which are unnecessary. 
> Variable shadowing is to be removed to reduce the possibility of errors when 
> touching the code. 
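
For illustration, here is the kind of cleanup this ticket describes. The MiNiFi codebase is C++, but both patterns are language-agnostic, so this sketch uses Java; the class and variable names are made up for the example.

{code:java}
public class CleanupExamples {
    // Redundant ternary: the comparison already yields a boolean,
    // so "? true : false" adds nothing.
    static boolean isPositiveVerbose(int x) {
        return (x > 0) ? true : false;
    }

    static boolean isPositive(int x) {
        return x > 0;
    }

    private int total = 0;

    // Variable shadowing: the local "total" hides the field and is easy to misread.
    void addShadowed(int amount) {
        int total = this.total + amount;
        this.total = total;
    }

    // Renaming the local removes the shadowing and the ambiguity.
    void add(int amount) {
        int newTotal = this.total + amount;
        this.total = newTotal;
    }
}
{code}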



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-603) Fill gaps in C2 responses for Windows

2018-11-06 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-603:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fill gaps in C2 responses for Windows
> -
>
> Key: MINIFICPP-603
> URL: https://issues.apache.org/jira/browse/MINIFICPP-603
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> C2 responses aren't functionally complete on Windows. This ticket is meant to 
> fill those gaps. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-649) Fix some compiler warnings

2018-11-06 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-649.
---
Resolution: Fixed

> Fix some compiler warnings
> --
>
> Key: MINIFICPP-649
> URL: https://issues.apache.org/jira/browse/MINIFICPP-649
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Minor
> Fix For: 0.6.0
>
>
> Some warnings might be related to errors; these are worth checking and fixing.
> For example:
> _/Users/aboda/work/shadow/minifi/nifi-minifi-cpp/LibExample/monitor_directory.c:43:3:
>  warning: implicit declaration of function 'pthread_mutex_lock' is invalid in 
> C99 [-Wimplicit-function-declaration]_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-653) Log message will segfault client if no content produced

2018-11-06 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-653.
---
   Resolution: Fixed
Fix Version/s: 0.6.0

> Log message will segfault client if no content produced
> ---
>
> Key: MINIFICPP-653
> URL: https://issues.apache.org/jira/browse/MINIFICPP-653
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
> Fix For: 0.6.0
>
>
> Log message will segfault client if no content produced



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFIREG-205) NiFi Registry DB gets out of sync with git repository, no apparent remediation

2018-11-06 Thread Bryan Bende (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFIREG-205.
-
   Resolution: Fixed
Fix Version/s: 0.4.0

> NiFi Registry DB gets out of sync with git repository, no apparent remediation
> -
>
> Key: NIFIREG-205
> URL: https://issues.apache.org/jira/browse/NIFIREG-205
> Project: NiFi Registry
>  Issue Type: Bug
> Environment: Centos 7.5
>Reporter: Dye357
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 0.4.0
>
>
> I've observed a couple of issues with the GitFlowPersistenceAdapter:
>  # When adding a new process group to NiFi Registry, if for any reason the git 
> repository is in a "dirty" (untracked file) state, adding the process group 
> fails. However, an entry is still created in the DB with a version of 0. 
> Once in this state you cannot delete the flow from NiFi Registry and you cannot 
> restart version control from NiFi with the same name. I assume the only way 
> to fix this is to manually go into the DB and delete the record.
>  # When using Remote To Push, if the push fails, the same behavior as in #1 is 
> exhibited. It's not reasonable to expect that a push will always succeed; the 
> remote git repository could be offline for maintenance, etc.
> Steps to reproduce:
>  # Start NiFi Registry with an empty DB and a clean git repo.
>  # Add an untracked file to the git repo but do not commit it.
>  # Start a process group under version control.
>  # Expect a failure in the NiFi UI.
>  # Expect an exception in the log about untracked files in the git repo.
>  # Delete the flow from NiFi Registry using Actions -> Delete.
>  # Expect the failure case: receive an "error deleting flow" message.
>  # Refresh the NiFi Registry UI - the flow is still present, version is 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-205) NiFi Registry DB gets out of sync with git repository, no apparent remediation

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677238#comment-16677238
 ] 

ASF GitHub Bot commented on NIFIREG-205:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/146


> NiFi Registry DB gets out of sync with git repository, no apparent remediation
> -
>
> Key: NIFIREG-205
> URL: https://issues.apache.org/jira/browse/NIFIREG-205
> Project: NiFi Registry
>  Issue Type: Bug
> Environment: Centos 7.5
>Reporter: Dye357
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 0.4.0
>
>
> I've observed a couple of issues with the GitFlowPersistenceAdapter:
>  # When adding a new process group to NiFi Registry, if for any reason the git 
> repository is in a "dirty" (untracked file) state, adding the process group 
> fails. However, an entry is still created in the DB with a version of 0. 
> Once in this state you cannot delete the flow from NiFi Registry and you cannot 
> restart version control from NiFi with the same name. I assume the only way 
> to fix this is to manually go into the DB and delete the record.
>  # When using Remote To Push, if the push fails, the same behavior as in #1 is 
> exhibited. It's not reasonable to expect that a push will always succeed; the 
> remote git repository could be offline for maintenance, etc.
> Steps to reproduce:
>  # Start NiFi Registry with an empty DB and a clean git repo.
>  # Add an untracked file to the git repo but do not commit it.
>  # Start a process group under version control.
>  # Expect a failure in the NiFi UI.
>  # Expect an exception in the log about untracked files in the git repo.
>  # Delete the flow from NiFi Registry using Actions -> Delete.
>  # Expect the failure case: receive an "error deleting flow" message.
>  # Refresh the NiFi Registry UI - the flow is still present, version is 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #146: NIFIREG-205: Allow Git repo to delete a flo...

2018-11-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/146


---


[jira] [Commented] (NIFIREG-205) NiFi Registry DB gets out of sync with git repository, no apparent remediation

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677236#comment-16677236
 ] 

ASF GitHub Bot commented on NIFIREG-205:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/146
  
Looks good, going to merge, thanks!


> NiFi Registry DB gets out of sync with git repository, no apparent remediation
> -
>
> Key: NIFIREG-205
> URL: https://issues.apache.org/jira/browse/NIFIREG-205
> Project: NiFi Registry
>  Issue Type: Bug
> Environment: Centos 7.5
>Reporter: Dye357
>Assignee: Koji Kawamura
>Priority: Major
>
> I've observed a couple of issues with the GitFlowPersistenceAdapter:
>  # When adding a new process group to NiFi Registry, if for any reason the git 
> repository is in a "dirty" (untracked file) state, adding the process group 
> fails. However, an entry is still created in the DB with a version of 0. 
> Once in this state you cannot delete the flow from NiFi Registry and you cannot 
> restart version control from NiFi with the same name. I assume the only way 
> to fix this is to manually go into the DB and delete the record.
>  # When using Remote To Push, if the push fails, the same behavior as in #1 is 
> exhibited. It's not reasonable to expect that a push will always succeed; the 
> remote git repository could be offline for maintenance, etc.
> Steps to reproduce:
>  # Start NiFi Registry with an empty DB and a clean git repo.
>  # Add an untracked file to the git repo but do not commit it.
>  # Start a process group under version control.
>  # Expect a failure in the NiFi UI.
>  # Expect an exception in the log about untracked files in the git repo.
>  # Delete the flow from NiFi Registry using Actions -> Delete.
>  # Expect the failure case: receive an "error deleting flow" message.
>  # Refresh the NiFi Registry UI - the flow is still present, version is 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #146: NIFIREG-205: Allow Git repo to delete a flow with ...

2018-11-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/146
  
Looks good, going to merge, thanks!


---


[jira] [Commented] (NIFI-5795) RedisDistributedMapCacheClientService put missing option

2018-11-06 Thread Alex (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677187#comment-16677187
 ] 

Alex commented on NIFI-5795:


Added pull request: https://github.com/apache/nifi/pull/3135

> RedisDistributedMapCacheClientService put missing option
> 
>
> Key: NIFI-5795
> URL: https://issues.apache.org/jira/browse/NIFI-5795
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Alex
>Priority: Major
>
> When you select CACHE_UPDATE_STRATEGY = CACHE_UPDATE_REPLACE on 
> *PutDistributedMapCache*, we execute "cache.put(cacheKey, cacheValue, 
> keySerializer, valueSerializer);" 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDistributedMapCache.java#L202]
> If you use Redis as the backend service, this jumps to 
> RedisDistributedMapCacheClientService.java -> 
> redisConnection.set(kv.getKey(), kv.getValue(), Expiration.seconds(ttl), 
> null); 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java#L191]
> This calls into the spring-data-redis library, but passing null as the Option 
> parameter is a bug: it causes an "option cannot be null" error, because 
> according to the library, "{{option}} - must not be null." [Library 
> Link|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.html#set-byte:A-byte:A-org.springframework.data.redis.core.types.Expiration-org.springframework.data.redis.connection.RedisStringCommands.SetOption-]
> For the replace strategy we should use 
> [{{RedisStringCommands.SetOption.upsert()}}|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.SetOption.html#upsert--]
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5795) RedisDistributedMapCacheClientService put missing option

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677186#comment-16677186
 ] 

ASF GitHub Bot commented on NIFI-5795:
--

GitHub user luup2k opened a pull request:

https://github.com/apache/nifi/pull/3135

[NIFI-5795] RedisDistributedMapCacheClientService put missing option

https://issues.apache.org/jira/browse/NIFI-5795

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/luup2k/nifi patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3135


commit be39b647a0ad02c3c0c338fe11488d47497d702e
Author: luup2k <3969130+luup2k@...>
Date:   2018-11-06T19:05:23Z

NIFI-5795 RedisDistributedMapCacheClientService put missing option

https://issues.apache.org/jira/browse/NIFI-5795




> RedisDistributedMapCacheClientService put missing option
> 
>
> Key: NIFI-5795
> URL: https://issues.apache.org/jira/browse/NIFI-5795
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Alex
>Priority: Major
>
> When you select CACHE_UPDATE_STRATEGY = CACHE_UPDATE_REPLACE on 
> *PutDistributedMapCache*, we execute "cache.put(cacheKey, cacheValue, 
> keySerializer, valueSerializer);" 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDistributedMapCache.java#L202]
> If you use Redis as the backend service, this jumps to 
> RedisDistributedMapCacheClientService.java -> 
> redisConnection.set(kv.getKey(), kv.getValue(), Expiration.seconds(ttl), 
> null); 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java#L191]
> This calls into the spring-data-redis library, but passing null as the Option 
> parameter is a bug: it causes an "option cannot be null" error, because 
> according to the library, "{{option}} - must not be null." [Library 
> Link|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.html#set-byte:A-byte:A-org.springframework.data.redis.core.types.Expiration-org.springframework.data.redis.connection.RedisStringCommands.SetOption-]
> For the replace strategy we should use 
> [{{RedisStringCommands.SetOption.upsert()}}|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.SetOption.html#upsert--]
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3135: [NIFI-5795] RedisDistributedMapCacheClientService p...

2018-11-06 Thread luup2k
GitHub user luup2k opened a pull request:

https://github.com/apache/nifi/pull/3135

[NIFI-5795] RedisDistributedMapCacheClientService put missing option

https://issues.apache.org/jira/browse/NIFI-5795

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/luup2k/nifi patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3135


commit be39b647a0ad02c3c0c338fe11488d47497d702e
Author: luup2k <3969130+luup2k@...>
Date:   2018-11-06T19:05:23Z

NIFI-5795 RedisDistributedMapCacheClientService put missing option

https://issues.apache.org/jira/browse/NIFI-5795




---


[jira] [Updated] (NIFI-5795) RedisDistributedMapCacheClientService put missing option

2018-11-06 Thread Alex (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex updated NIFI-5795:
---
Description: 
When you select CACHE_UPDATE_STRATEGY = CACHE_UPDATE_REPLACE on 
*PutDistributedMapCache*, we execute "cache.put(cacheKey, cacheValue, 
keySerializer, valueSerializer);" 
[LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDistributedMapCache.java#L202]

If you use Redis as the backend service, this jumps to 
RedisDistributedMapCacheClientService.java -> redisConnection.set(kv.getKey(), 
kv.getValue(), Expiration.seconds(ttl), null); 
[LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java#L191]

This calls into the spring-data-redis library, but passing null as the Option 
parameter is a bug: it causes an "option cannot be null" error, because 
according to the library, "{{option}} - must not be null." [Library 
Link|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.html#set-byte:A-byte:A-org.springframework.data.redis.core.types.Expiration-org.springframework.data.redis.connection.RedisStringCommands.SetOption-]

For the replace strategy we should use 
[{{RedisStringCommands.SetOption.upsert()}}|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.SetOption.html#upsert--]
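
For reference, a minimal sketch of the fix this points to, assuming the spring-data-redis types linked above (illustrative only, not the actual patch in the pull request):

{code:java}
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisStringCommands.SetOption;
import org.springframework.data.redis.core.types.Expiration;

public class RedisReplaceSketch {
    // Pass SetOption.upsert() instead of null so the SET always writes the key,
    // matching the "replace" cache-update strategy described above.
    static void putReplace(RedisConnection redisConnection, byte[] key, byte[] value, long ttlSeconds) {
        redisConnection.set(key, value, Expiration.seconds(ttlSeconds), SetOption.upsert());
    }
}
{code}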

 

 

 

 

  was:
When you select on PutD

 

 


> RedisDistributedMapCacheClientService put missing option
> 
>
> Key: NIFI-5795
> URL: https://issues.apache.org/jira/browse/NIFI-5795
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Alex
>Priority: Major
>
> When you select CACHE_UPDATE_STRATEGY = CACHE_UPDATE_REPLACE on 
> *PutDistributedMapCache*, we execute "cache.put(cacheKey, cacheValue, 
> keySerializer, valueSerializer);" 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDistributedMapCache.java#L202]
> If you use Redis as the backend service, this jumps to 
> RedisDistributedMapCacheClientService.java -> 
> redisConnection.set(kv.getKey(), kv.getValue(), Expiration.seconds(ttl), 
> null); 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java#L191]
> This calls into the spring-data-redis library, but passing null as the Option 
> parameter is a bug: it causes an "option cannot be null" error, because 
> according to the library, "{{option}} - must not be null." [Library 
> Link|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.html#set-byte:A-byte:A-org.springframework.data.redis.core.types.Expiration-org.springframework.data.redis.connection.RedisStringCommands.SetOption-]
> For the replace strategy we should use 
> [{{RedisStringCommands.SetOption.upsert()}}|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.SetOption.html#upsert--]
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5795) RedisDistributedMapCacheClientService put missing option

2018-11-06 Thread Alex (JIRA)
Alex created NIFI-5795:
--

 Summary: RedisDistributedMapCacheClientService put missing option
 Key: NIFI-5795
 URL: https://issues.apache.org/jira/browse/NIFI-5795
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Alex


When you select on PutD

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677137#comment-16677137
 ] 

ASF GitHub Bot commented on MINIFICPP-664:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/434


> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
> Fix For: 0.6.0
>
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-664.
---
   Resolution: Fixed
Fix Version/s: 0.6.0

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
> Fix For: 0.6.0
>
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #434: MINIFICPP-664: Require C2 agent class to ...

2018-11-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/434


---


[GitHub] nifi-minifi-cpp issue #434: MINIFICPP-664: Require C2 agent class to be defi...

2018-11-06 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/434
  
code changes look good.  verified build, tests and expected functionality.  
will merge


---


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677136#comment-16677136
 ] 

ASF GitHub Bot commented on MINIFICPP-664:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/434
  
code changes look good.  verified build, tests and expected functionality.  
will merge


> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677003#comment-16677003
 ] 

ASF GitHub Bot commented on MINIFICPP-664:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/434
  
reviewing


> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #434: MINIFICPP-664: Require C2 agent class to be defi...

2018-11-06 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/434
  
reviewing


---


[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676986#comment-16676986
 ] 

ASF GitHub Bot commented on MINIFICPP-648:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r231202899
  
--- Diff: nanofi/src/cxx/Plan.cpp ---
@@ -162,6 +153,21 @@ bool 
ExecutionPlan::runNextProcessor(std::function current_session = 
std::make_shared(context);
   process_sessions_.push_back(current_session);
+  if (input_ff != nullptr) {
+auto content_repo = 
static_cast*>(input_ff->crp);
+std::shared_ptr claim = 
std::make_shared(input_ff->contentLocation, 
*content_repo);
+auto stream = (*content_repo)->read(claim);
--- End diff --

This copies the content of the incoming flow file. 
Naturally this leaves room for improvement: if the content repos of 
the current flow and the incoming flow file are the same, the copy is needless. 
Given the size of the change, I would prefer to do that later. 


> add processor and add processor with linkage nomenclature is confusing
> --
>
> Key: MINIFICPP-648
> URL: https://issues.apache.org/jira/browse/MINIFICPP-648
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI
> Fix For: 0.6.0
>
>
> add_processor should be changed to always add a processor with linkage, since 
> there is no compelling documentation for why the current distinction exists. As 
> a result we will need to add a create_processor function to create one without 
> adding it to the flow (certain use cases don't need a flow, such as invokehttp 
> or listenhttp). This can be moved to 0.7.0 if we tag before recent commits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-06 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r231202899
  
--- Diff: nanofi/src/cxx/Plan.cpp ---
@@ -162,6 +153,21 @@ bool 
ExecutionPlan::runNextProcessor(std::function current_session = 
std::make_shared(context);
   process_sessions_.push_back(current_session);
+  if (input_ff != nullptr) {
+auto content_repo = 
static_cast*>(input_ff->crp);
+std::shared_ptr claim = 
std::make_shared(input_ff->contentLocation, 
*content_repo);
+auto stream = (*content_repo)->read(claim);
--- End diff --

This copies the content of the incoming flow file. 
Naturally this leaves room for improvement: if the content repos of 
the current flow and the incoming flow file are the same, the copy is needless. 
Given the size of the change, I would prefer to do that later. 


---


[jira] [Commented] (NIFI-5769) FlowController should prefer composition over inheritance

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676962#comment-16676962
 ] 

ASF GitHub Bot commented on NIFI-5769:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3132


> FlowController should prefer composition over inheritance
> -
>
> Key: NIFI-5769
> URL: https://issues.apache.org/jira/browse/NIFI-5769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>
> Currently, FlowController implements many different interfaces. At this time, 
> the class is several thousand lines of code, which makes rendering take quite 
> a while in IDEs and makes it more difficult to edit and maintain. Many of 
> these interfaces are unrelated and FlowController has become a bit of a 
> hodgepodge of functionality. We should refactor FlowController to externalize 
> a lot of this logic and let FlowController use composition rather than 
> inheritance.
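
As a rough sketch of what "composition over inheritance" means here (the class and method names below are hypothetical, not NiFi's actual API), the interface's logic moves into its own class and FlowController holds and delegates to it instead of implementing the interface itself:

{code:java}
// Hypothetical sketch: the behavior lives in a dedicated class...
interface FlowManager {
    void registerRootGroupId(String id);
}

class StandardFlowManager implements FlowManager {
    @Override
    public void registerRootGroupId(String id) {
        // ...bookkeeping for the root process group id...
    }
}

// ...and FlowController delegates to it rather than inheriting/implementing it.
class FlowController {
    private final FlowManager flowManager = new StandardFlowManager();

    public FlowManager getFlowManager() {
        return flowManager;
    }

    public void setRootGroup(String rootGroupId) {
        // e.g. ensure the root group id is registered with the delegate
        flowManager.registerRootGroupId(rootGroupId);
    }
}
{code}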



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5774) Refresh available component types on the front-end

2018-11-06 Thread Bryan Bende (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-5774:
--
Fix Version/s: 1.9.0

> Refresh available component types on the front-end
> --
>
> Key: NIFI-5774
> URL: https://issues.apache.org/jira/browse/NIFI-5774
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Priority: Minor
> Fix For: 1.9.0
>
>
> Currently the application retrieves the available processors, controller 
> services, and reporting tasks during initial page load and caches them on the 
> client. This was fine because the types could never change without restarting 
> the application, but now NIFI-5673 introduces the ability to dynamically load 
> new NARs without restarting, so we'll need a way to reload the types on the 
> front end.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5769) FlowController should prefer composition over inheritance

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676959#comment-16676959
 ] 

ASF subversion and git services commented on NIFI-5769:
---

Commit 931bb0bc3b1c0205b260261ce9730af87204e115 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=931bb0b ]

NIFI-5769: Refactored FlowController to use Composition over Inheritance
- Ensure that when root group is set, that we register its ID in FlowManager

This closes #3132.

Signed-off-by: Bryan Bende 


> FlowController should prefer composition over inheritance
> -
>
> Key: NIFI-5769
> URL: https://issues.apache.org/jira/browse/NIFI-5769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>
> Currently, FlowController implements many different interfaces. At this time, 
> the class is several thousand lines of code, which makes rendering take quite 
> a while in IDEs and makes it more difficult to edit and maintain. Many of 
> these interfaces are unrelated and FlowController has become a bit of a 
> hodgepodge of functionality. We should refactor FlowController to externalize 
> a lot of this logic and let FlowController use composition rather than 
> inheritance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5769) FlowController should prefer composition over inheritance

2018-11-06 Thread Bryan Bende (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-5769.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

> FlowController should prefer composition over inheritance
> -
>
> Key: NIFI-5769
> URL: https://issues.apache.org/jira/browse/NIFI-5769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>
> Currently, FlowController implements many different interfaces. At this time, 
> the class is several thousand lines of code, which makes rendering take quite 
> a while in IDEs and makes it more difficult to edit and maintain. Many of 
> these interfaces are unrelated and FlowController has become a bit of a 
> hodgepodge of functionality. We should refactor FlowController to externalize 
> a lot of this logic and let FlowController use composition rather than 
> inheritance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3132: NIFI-5769: Refactored FlowController to use Composi...

2018-11-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3132


---


[jira] [Commented] (NIFI-5769) FlowController should prefer composition over inheritance

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676956#comment-16676956
 ] 

ASF GitHub Bot commented on NIFI-5769:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3132
  
Code looks good and have been running this branch this morning and 
everything seems to work as expected, so going to merge shortly, thanks!


> FlowController should prefer composition over inheritance
> -
>
> Key: NIFI-5769
> URL: https://issues.apache.org/jira/browse/NIFI-5769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Currently, FlowController implements many different interfaces. At this time, 
> the class is several thousand lines of code, which makes rendering take quite 
> a while in IDEs and makes it more difficult to edit and maintain. Many of 
> these interfaces are unrelated and FlowController has become a bit of a 
> hodgepodge of functionality. We should refactor FlowController to externalize 
> a lot of this logic and let FlowController use composition rather than 
> inheritance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3132: NIFI-5769: Refactored FlowController to use Composition ov...

2018-11-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3132
  
Code looks good and have been running this branch this morning and 
everything seems to work as expected, so going to merge shortly, thanks!


---


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676951#comment-16676951
 ] 

ASF GitHub Bot commented on MINIFICPP-664:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/434

MINIFICPP-664: Require C2 agent class to be defined

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/434.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #434


commit c5e088d736a0f33f2ea73df75c39dc969b1b27f3
Author: Marc Parisi 
Date:   2018-11-06T16:17:31Z

MINIFICPP-664: Require C2 agent class to be defined




> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #434: MINIFICPP-664: Require C2 agent class to ...

2018-11-06 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/434

MINIFICPP-664: Require C2 agent class to be defined

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/434.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #434


commit c5e088d736a0f33f2ea73df75c39dc969b1b27f3
Author: Marc Parisi 
Date:   2018-11-06T16:17:31Z

MINIFICPP-664: Require C2 agent class to be defined




---


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Aldrin Piri (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676933#comment-16676933
 ] 

Aldrin Piri commented on MINIFICPP-664:
---

Totally onboard with that

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676932#comment-16676932
 ] 

Mr TheSegfault commented on MINIFICPP-664:
--

Of course this is coupled with a change in the documentation...

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676931#comment-16676931
 ] 

Mr TheSegfault commented on MINIFICPP-664:
--

Sorry I thought I typed, "Agent classes are required to be defined when C2 is 
enabled" 

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Aldrin Piri (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676929#comment-16676929
 ] 

Aldrin Piri commented on MINIFICPP-664:
---

What do you view as being the scope of this?  In general, no, I don't think it 
should preclude startup as it is very much a C2 construct.  Now, if C2 is 
enabled, I think there is a compelling case to be made for this to be a 
required property.
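
For context, the setting being discussed is the agent class entry in minifi.properties. The key names below are an assumption based on the MiNiFi C++ C2 documentation, and the value is made up; the proposal is that the second property becomes mandatory whenever the first is set to true.

{code}
# Assumed minifi.properties keys (per the MiNiFi C++ C2 docs); illustrative values.
nifi.c2.enable=true
nifi.c2.agent.class=EdgeSensorAgents
{code}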

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-664:
-
Description: 
Agent classes are required to be defined. Prevent startup from occurring when a 
class is not defined. 

[~aldrin] do you disagree ? 

 

  was:Agent classes are required to be defined. Prevent startup from occurring 
when a class is not defined. 


> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 
> [~aldrin] do you disagree ? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-664:
-
Description: Agent classes are required to be defined. Prevent startup from 
occurring when a class is not defined.   (was: Agent classes are required to be 
defined. )

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. Prevent startup from occurring when 
> a class is not defined. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-664:
-
Labels: DevOps runtime  (was: )

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: DevOps, runtime
>
> Agent classes are required to be defined. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5794) ConsumeKafka and PublishKafka should allow empty string demarcator

2018-11-06 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-5794:


Assignee: Pierre Villard

> ConsumeKafka and PublishKafka should allow empty string demarcator
> --
>
> Key: NIFI-5794
> URL: https://issues.apache.org/jira/browse/NIFI-5794
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> ConsumeKafka(_*) and PublishKafka(_*) processors should allow "empty string" 
> as a message demarcator. This would allow consuming Avro data without the 
> serialization/de-serialization cost while still allowing the use of Record 
> processors once the data is in NiFi.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-664:
-
Priority: Blocker  (was: Major)

> Agent classes are required to be defined. 
> --
>
> Key: MINIFICPP-664
> URL: https://issues.apache.org/jira/browse/MINIFICPP-664
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>
> Agent classes are required to be defined. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-664) Agent classes are required to be defined.

2018-11-06 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-664:


 Summary: Agent classes are required to be defined. 
 Key: MINIFICPP-664
 URL: https://issues.apache.org/jira/browse/MINIFICPP-664
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


Agent classes are required to be defined. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5794) ConsumeKafka and PublishKafka should allow empty string demarcator

2018-11-06 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-5794:


 Summary: ConsumeKafka and PublishKafka should allow empty string 
demarcator
 Key: NIFI-5794
 URL: https://issues.apache.org/jira/browse/NIFI-5794
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard


ConsumeKafka(_*) and PublishKafka(_*) processors should allow "empty string" as 
a message demarcator. This would allow consuming Avro data without the 
serialization/de-serialization cost while still allowing the use of Record 
processors once the data is in NiFi.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5793) Remove CLI README for NiFi 1.10

2018-11-06 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-5793:


 Summary: Remove CLI README for NiFi 1.10
 Key: NIFI-5793
 URL: https://issues.apache.org/jira/browse/NIFI-5793
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Affects Versions: 1.9.0
Reporter: Pierre Villard


With NIFI-5767 we added a dedicated documentation page for the toolkit. We 
don't need the CLI README anymore, and it should be removed in NiFi 1.10 (once 
NiFi 1.9.0 is released).

https://github.com/apache/nifi/blob/master/nifi-toolkit/nifi-toolkit-cli/README.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676856#comment-16676856
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231153599
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

True, I missed that override before, but I see it now. So it's definitely less 
valuable; the only thing it would provide is troubleshooting guidance, 
"your bad data is roughly in this part of the file". Probably not worth it. 
Thanks!


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *max_batch_size* which defines the maximum batch size in INSERT/UPDATE 
> statement; the default value zero (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  
> [EDIT] Changed batch_size to max_batch_size. The default value would be zero 
> (INFINITY) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-06 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231153599
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

True, I missed that override before, but I see it now. So it's definitely less 
valuable; the only thing it would provide is troubleshooting guidance, 
"your bad data is roughly in this part of the file". Probably not worth it. 
Thanks!


---


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676854#comment-16676854
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r231152202
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
+.build();
+
+public static final PropertyDescriptor MAX_IDLE = new 
PropertyDescriptor.Builder()
+.name("Max Idle Connections")
+.description("The maximum number of connections that can 
remain idle in the pool, without extra ones being " +
+"released, or negative for no limit.")
+.defaultValue("8")
--- End diff --

@mattyb149 If you have a second, I'd appreciate your thoughts on this.


> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should both be set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.
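
For reference, a small sketch of the underlying Commons DBCP2 {{BasicDataSource}} settings 
being discussed (the values are examples chosen for the "release idle connections quickly" 
use case above; the driver class and JDBC URL are placeholders, not NiFi code):

{code:java}
import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpIdleConfigExample {
    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.h2.Driver");       // placeholder driver
        ds.setUrl("jdbc:h2:mem:example");             // placeholder URL
        ds.setUsername("sa");
        ds.setPassword("");

        // The BasicDataSource properties the ticket asks DBCPConnectionPool to expose:
        ds.setMinIdle(0);                     // keep no idle connections around
        ds.setMaxIdle(0);                     // release idle connections immediately
        ds.setMaxConnLifetimeMillis(60_000);  // or: cap a connection's lifetime at 1 minute

        // Eviction settings that usually accompany the above:
        ds.setTimeBetweenEvictionRunsMillis(30_000);
        ds.setMinEvictableIdleTimeMillis(10_000);
    }
}
{code}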



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-06 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r231152202
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
+.build();
+
+public static final PropertyDescriptor MAX_IDLE = new 
PropertyDescriptor.Builder()
+.name("Max Idle Connections")
+.description("The maximum number of connections that can 
remain idle in the pool, without extra ones being " +
+"released, or negative for no limit.")
+.defaultValue("8")
--- End diff --

@mattyb149 If you have a second, I'd appreciate your thoughts on this.


---


[jira] [Commented] (NIFI-5767) Documentation of the NiFi Toolkit

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676811#comment-16676811
 ] 

ASF GitHub Bot commented on NIFI-5767:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3124
  
I'm good with removing the README once 1.9.0 is released, or changing it to 
a link to the new toolkit docs.


> Documentation of the NiFi Toolkit
> -
>
> Key: NIFI-5767
> URL: https://issues.apache.org/jira/browse/NIFI-5767
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Andrew Lim
>Priority: Major
> Fix For: 1.9.0
>
>
> The NiFi toolkit should have its own documentation in a dedicated page, 
> probably just under "Admin guide".
> The documentation should have a paragraph about each tool:
>  * CLI - 
> https://github.com/apache/nifi/blob/master/nifi-toolkit/nifi-toolkit-cli/README.md
>  * Configuration encryption - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#encrypt-config_tool
>  * File manager - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#file-manager
>  * Flow analyzer
>  * Node manager - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#node-manager
>  * Notify
>  * S2S
>  * TLS Toolkit - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#tls_generation_toolkit
>  * ZooKeeper migrator - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zookeeper_migrator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3124: NIFI-5767 Added NiFi Toolkit Guide to docs

2018-11-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/3124
  
I'm good with removing the README once 1.9.0 is released, or changing it to 
a link to the new toolkit docs.


---


[jira] [Commented] (NIFIREG-205) NiFi Registry DB gets out of sync with git repository, no apparent remediation

2018-11-06 Thread Kevin Doran (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676795#comment-16676795
 ] 

Kevin Doran commented on NIFIREG-205:
-

Thanks [~ijokarumawak], that proposal sounds good to me.

> NiFi Registry DB gets out of sync with git repository, no apparent remediation
> -
>
> Key: NIFIREG-205
> URL: https://issues.apache.org/jira/browse/NIFIREG-205
> Project: NiFi Registry
>  Issue Type: Bug
> Environment: Centos 7.5
>Reporter: Dye357
>Assignee: Koji Kawamura
>Priority: Major
>
> I've observed a couple of issues with the GitFlowPersistenceAdapter:
>  # When adding a new process group to NiFi Registry, if for any reason the git 
> repository is in a "dirty" (untracked file) state, adding the process 
> group fails. However, an entry is still created in the DB with a version of 0. 
> Once in this state you cannot delete the flow from NiFi Registry and you cannot 
> restart version control from NiFi with the same name. I assume the only way 
> to fix this is to manually go into the DB and delete the record.
>  # When using Remote To Push, if the push fails, the same behavior as in #1 is 
> exhibited. It's not reasonable to expect that a push will always succeed; the 
> remote git repository could be offline for maintenance, etc.
> Steps to reproduce:
>  # Start NiFi Registry with an empty DB and a clean git repo.
>  # Add an untracked file to the git repo but do not commit it.
>  # Start a process group under version control.
>  # Expect a failure in the NiFi UI.
>  # Expect an exception in the log about untracked files in the git repo.
>  # Delete the flow from NiFi Registry using Actions -> Delete.
>  # Expect the failure case: receive an "error deleting flow" message.
>  # Refresh the NiFi Registry UI - the flow is still present, version is 0.
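
For anyone digging into this, a small hypothetical JGit sketch (not the NiFi Registry code; 
the repository path is a placeholder) of how a persistence provider could detect the "dirty" 
state from step 2 before committing a new flow version and writing to the metadata DB:

{code:java}
import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.Status;

public class DirtyRepoCheck {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/flow_storage"))) {   // placeholder path
            Status status = git.status().call();
            if (!status.isClean()) {
                // Untracked or modified files are present: committing a new flow version
                // would fail, so fail fast (or clean up) before touching the metadata DB.
                System.out.println("Repository is dirty, untracked files: " + status.getUntracked());
            }
        }
    }
}
{code}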



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5735) Record-oriented processors/services do not properly support Avro Unions

2018-11-06 Thread Alex Savitsky (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675735#comment-16675735
 ] 

Alex Savitsky edited comment on NIFI-5735 at 11/6/18 1:36 PM:
--

Attached is a patch against the master NiFi branch that fixes the issue.

General idea: convertToAvroObject now returns a pair of the original conversion 
result and the number of fields that failed the conversion for the underlying 
record type, if any (0 otherwise).

The only place where the second pair element is used is in the lambda passed 
to convertUnionFieldValue.

Instead of simply returning the converted Avro object, the lambda now inspects 
the number of failed fields, throwing an exception if this number is not zero.

This signals the schema conversion error to the caller, allowing 
convertUnionFieldValue to continue iterating union schemas, until one is found 
that has all the fields recognized.

[^NIFI-5735.patch]
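
A rough, hypothetical illustration of that strategy (tryConvert, resolveUnion, and the pair 
type below are stand-ins, not the actual AvroTypeUtil methods): try each sub-schema of the 
union and accept the first conversion that reports zero failed fields.

{code:java}
import org.apache.avro.Schema;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.function.BiFunction;

// Hypothetical sketch of the union-resolution strategy described above.
public class UnionResolutionSketch {

    // Stand-in for the patched conversion: returns the converted value plus the number
    // of record fields that could not be mapped to the candidate schema.
    static Map.Entry<Object, Integer> tryConvert(Object rawValue, Schema candidate) {
        // ... real conversion logic lives in AvroTypeUtil; this is a placeholder ...
        return new SimpleEntry<>(rawValue, 0);
    }

    // Stand-in for convertUnionFieldValue: iterate the union's sub-schemas and keep
    // the first conversion whose failed-field count is zero.
    static Object resolveUnion(Object rawValue, Schema unionSchema,
                               BiFunction<Object, Schema, Map.Entry<Object, Integer>> converter) {
        for (Schema candidate : unionSchema.getTypes()) {
            Map.Entry<Object, Integer> result = converter.apply(rawValue, candidate);
            if (result.getValue() == 0) {
                return result.getKey();   // all fields matched this named record type
            }
            // non-zero failures: this candidate (e.g. "left" vs "right") does not fit; try the next
        }
        throw new IllegalArgumentException("Value does not match any schema in the union");
    }
}
{code}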


was (Author: alex_savitsky):
Attached is a patch against the master NiFi branch that fixes the issue. 
General idea: convertToAvroObject now returns a pair of the original conversion 
result and the number of fields that failed the conversion for the underlying 
record type, if any (0 otherwise). The only place where the second pair element 
is used, is in the lambda passed to convertUnionFieldValue. Instead of simply 
returning the converted Avro object, the lambda now inspects the number of 
failed fields, throwing an exception if this number is not zero. This signals 
the schema conversion error to the caller, allowing convertUnionFieldValue to 
continue iterating union schemas, until one is found that has all the fields 
recognized.

[^NIFI-5735.patch]

> Record-oriented processors/services do not properly support Avro Unions
> ---
>
> Key: NIFI-5735
> URL: https://issues.apache.org/jira/browse/NIFI-5735
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 1.7.1
>Reporter: Daniel Solow
>Priority: Major
>  Labels: AVRO, avro
> Attachments: 
> 0001-NIFI-5735-added-preliminary-support-for-union-resolu.patch, 
> NIFI-5735.patch
>
>
> The [Avro spec|https://avro.apache.org/docs/1.8.2/spec.html#Unions] states:
> {quote}Unions may not contain more than one schema with the same type, 
> *except for the named types* record, fixed and enum. For example, unions 
> containing two array types or two map types are not permitted, but two types 
> with different names are permitted. (Names permit efficient resolution when 
> reading and writing unions.)
> {quote}
> However record oriented processors/services in Nifi do not support multiple 
> named types per union. This is a problem, for example, with the following 
> schema:
> {code:javascript}
> {
> "type": "record",
> "name": "root",
> "fields": [
> {
> "name": "children",
> "type": {
> "type": "array",
> "items": [
> {
> "type": "record",
> "name": "left",
> "fields": [
> {
> "name": "f1",
> "type": "string"
> }
> ]
> },
> {
> "type": "record",
> "name": "right",
> "fields": [
> {
> "name": "f2",
> "type": "int"
> }
> ]
> }
> ]
> }
> }
> ]
> }
> {code}
>  This schema contains a field named "children", which is an array of a union type. 
> The union type contains two possible record types. Currently the NiFi Avro 
> utilities will fail to process records of this schema with "children" arrays 
> that contain both "left" and "right" record types.
> I've traced this bug to the [AvroTypeUtils 
> class|https://github.com/apache/nifi/blob/rel/nifi-1.7.1/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java].
> Specifically there are bugs in the convertUnionFieldValue method and in the 
> buildAvroSchema method. Both of these methods make the assumption that an 
> Avro union can only contain one child type of each type. As stated in the 
> spec, this is true for primitive types and non-named complex types but not 
> for named types.
>  There may be related bugs elsewhere, but I haven't been able to locate them 
> yet.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-06 Thread Sudeep Kumar Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676748#comment-16676748
 ] 

Sudeep Kumar Garg commented on NIFI-4362:
-

Hi [~dseifert], I am able to get more metrics after starting the pushgateway agent. 
But in the Grafana dashboards I am still not getting any value for the metrics below:
"process_group_amount_flowfiles_total" 

"process_group_amount_bytes_total"

"process_group_size_content_total". Can you please help me with that, or let me 
know if you need more information.

Thanks,
Sudeep

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
> Attachments: image-2018-11-06-15-25-42-486.png, 
> nifi-prometheus-nar-1.7.1.nar
>
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by Nifi via a reporting task.  We are building a Prometheus 
> reporting task that will report similar metrics as Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to Nifi so please correct me if I'm doing 
> something incorrectly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676746#comment-16676746
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
I'm not too familiar with the Reader/Writer idiom. From what I can tell, a 
Reader converts data to a Record, and a Writer converts those Records back to 
the data format? Is that accurate? My one concern is that although 
DFDL/Daffodil can handle record-oriented data, oftentimes the data is much more 
complex. For example, the [daffodil 
examples](https://daffodil.apache.org/examples/) page shows two examples of how 
DFDL can convert data to XML. The first example is CSV data and is clearly 
record oriented. But the second example is PCAP (used in the above template), 
which could be seen as records, but it is a complex nesting and there's a global 
header that isn't really a record. Most data formats we've seen DFDL used for are 
more like the latter. 

The SchemaRegistry concept that looks to be used by Readers/Writers seems 
like a really nice way to provide a DFDL schema (if possible without using the 
Records?), but I'm not sure how well the Record concept fits in. Does it make 
sense to keep the XML/JSON output as in the PR, and then if a particular 
infoset does map well to a Record the user could add a JSON/XMLReader to 
convert to a record?


> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-06 Thread stevedlawrence
Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
I'm not too familiar with the Reader/Writer idiom. From what I can tell, a 
Reader converts data to a Record, and a Writer converts those Records back to 
the data format? Is that accurate? My one concern is that although 
DFDL/Daffodil can handle record-oriented data, oftentimes the data is much more 
complex. For example, the [daffodil 
examples](https://daffodil.apache.org/examples/) page shows two examples of how 
DFDL can convert data to XML. The first example is CSV data and is clearly 
record oriented. But the second example is PCAP (used in the above template), 
which could be seen as records, but it is a complex nesting and there's a global 
header that isn't really a record. Most data formats we've seen DFDL used for are 
more like the latter. 

The SchemaRegistry concept that looks to be used by Readers/Writers seems 
like a really nice way to provide a DFDL schema (if possible without using the 
Records?), but I'm not sure how well the Record concept fits in. Does it make 
sense to keep the XML/JSON output as in the PR, and then if a particular 
infoset does map well to a Record the user could add a JSON/XMLReader to 
convert to a record?


---


[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-06 Thread Vadim (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim updated NIFI-5788:

Description: 
Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
prepared SQL statements. Specifically, the Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver]) would fail 
the SQL statement when the batch overflows its internal limits.

Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but generally, this solution is not 
perfect because the SQL statements would be executed in different transaction 
contexts and data integrity would not be preserved.

The solution suggests the following:
 * introduce a new optional parameter in *PutDatabaseRecord* processor, 
*max_batch_size* which defines the maximum batch size in INSERT/UPDATE 
statement; the default value zero (INFINITY) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch()  for each batch

Pull request: [https://github.com/apache/nifi/pull/3128]

 

[EDIT] Changed batch_size to max_batch_size. The default value would be zero 
(INFINITY) 

  was:
Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
prepared SQL statements. Specifically, Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would fail 
SQL statement when the batch overflows the internal limits.

Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but generally, this solution is not 
perfect because the SQL statements would be executed in different transaction 
contexts and data integrity would not be preserved.

The solution suggests the following:
 * introduce a new optional parameter in *PutDatabaseRecord* processor, 
*batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
statement; its default value is -1 (INFINITY) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch()  for each batch

Pull request: [https://github.com/apache/nifi/pull/3128]

 


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *max_batch_size* which defines the maximum batch size in INSERT/UPDATE 
> statement; the default value zero (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  
> [EDIT] Changed batch_size to max_batch_size. The default value would be zero 
> (INFINITY) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676672#comment-16676672
 ] 

ASF GitHub Bot commented on MINIFICPP-648:
--

Github user arpadboda commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/432
  
> @arpadboda is this good? I'm good with this otherwise.

Good, but not complete, I will amend some stuff soon. (today)


> add processor and add processor with linkage nomenclature is confusing
> --
>
> Key: MINIFICPP-648
> URL: https://issues.apache.org/jira/browse/MINIFICPP-648
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI
> Fix For: 0.6.0
>
>
> add_processor should be changed to always add a processor with linkage, 
> absent compelling documentation as to why the current distinction exists. As a 
> result we will need to add a create_processor function to create one without 
> adding it to the flow (certain use cases, such as invokehttp or listenhttp, 
> don't need a flow). This can be moved to 0.7.0 if we tag before recent commits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #432: MINIFICPP-648 - add processor and add processor ...

2018-11-06 Thread arpadboda
Github user arpadboda commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/432
  
> @arpadboda is this good? I'm good with this otherwise.

Good, but not complete, I will amend some stuff soon. (today)


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676660#comment-16676660
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user vadimar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231089816
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
--- End diff --

Oh. I see it now. The display label is "Bulk Size". I'll fix it to be 
"Maximum Batch Size". Thanks


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
> statement; its default value is -1 (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-06 Thread vadimar
Github user vadimar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231089816
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
--- End diff --

Oh. I see it now. The display label is "Bulk Size". I'll fix it to be 
"Maximum Batch Size". Thanks


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676649#comment-16676649
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user vadimar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231088684
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

I'm not sure this would be beneficial. PutDatabaseRecord works without 
autoCommit. It's all or nothing.
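
For context, a generic JDBC sketch (not the PutDatabaseRecord code; the table, column, and 
helper names are placeholders) of what a maximum batch size means when auto-commit is off: 
the batch is flushed with executeBatch() every N rows, but everything still commits or rolls 
back as a single transaction, so the all-or-nothing semantics are preserved.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchedInsertSketch {

    // Flush the JDBC batch every maxBatchSize rows; 0 means "no limit" (a single batch).
    static void insertAll(Connection conn, List<String> values, int maxBatchSize) throws SQLException {
        conn.setAutoCommit(false);  // one transaction for the whole input, as PutDatabaseRecord does
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO example_table (col) VALUES (?)")) {
            int currentBatchSize = 0;
            for (String value : values) {
                ps.setString(1, value);
                ps.addBatch();
                if (maxBatchSize > 0 && ++currentBatchSize == maxBatchSize) {
                    ps.executeBatch();   // flush to the driver, but do not commit yet
                    currentBatchSize = 0;
                }
            }
            ps.executeBatch();           // flush any remainder
            conn.commit();               // all-or-nothing: commit only after every batch succeeded
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
{code}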


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
> statement; its default value is -1 (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-06 Thread vadimar
Github user vadimar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231088684
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

I'm not sure this would be beneficial. PutDatabaseRecord works without 
autoCommit. It's all or nothing.


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-06 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676639#comment-16676639
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user vadimar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r231087439
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("put-db-record-batch-size")
+.displayName("Bulk Size")
+.description("Specifies batch size for INSERT and UPDATE 
statements. This parameter has no effect for other statements specified in 
'Statement Type'."
++ " Non-positive value has the effect of infinite bulk 
size.")
+.defaultValue("-1")
--- End diff --

I'll change the default to be zero and the validator to 
NONNEGATIVE_INTEGER_VALIDATOR
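
A sketch of roughly what the revised descriptor could look like after those changes (based 
on this thread, not the merged code; the property name string is a guess):

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Sketch only: approximate shape of the revised property, per the review discussion.
class MaxBatchSizeProperty {
    static final PropertyDescriptor MAX_BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("put-db-record-max-batch-size")   // hypothetical property name
            .displayName("Maximum Batch Size")
            .description("Specifies the maximum batch size for INSERT and UPDATE statements."
                    + " This parameter has no effect for other statements specified in 'Statement Type'."
                    + " Zero means the batch size is not limited.")
            .defaultValue("0")
            .required(false)
            .addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
            .build();
}
{code}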


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
> statement; its default value is -1 (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

