[jira] [Updated] (NIFI-3561) JettyWebSocketServer uses original requested port, instead of the forwarded port

2017-03-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-3561:

Description: 
Situation:
We have port forwarding set up such that port 25000 is forwarded to port 15000. 
The ListenWebSocket processor listens on port 15000.


Requests that hit the JettyWebSocketServer are treated as attempting to connect 
to the original port (25000) rather than the forwarded port (15000). The same 
behavior was reproduced with different ports and with different forwarding 
layers (e.g., NAT and iptables). I've included a stack trace below.


From Koji:
-I think it's more of a Jetty-side issue: after it upgrades the HTTP connection 
to TCP, it still uses the originally requested port (which is a port-forwarding 
request to the real port) to find a request handler assigned to that port.-
(updated) Excuse me, but I was wrong about this diagnosis. It's a problem in the 
NiFi JettyWebSocketServer I wrote. The ControllerService looks up the server 
instance by the port number that the CS listens on. When a request is made 
through port forwarding, the port passed by ServletUpgradeRequest differs from 
the one the CS is bound to, so the lookup of the server instance fails. I will 
check whether the original port can be retrieved from the request; if not, I 
will add a configuration property to the CS for specifying forwarded port 
numbers.
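Koji's diagnosis can be illustrated with a minimal, self-contained sketch. This 
is not the actual NiFi source; the class, map, and method names below 
(PortLookupSketch, portToService, lookup) are hypothetical stand-ins for the 
lookup-by-bound-port logic described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the failing lookup under port forwarding.
// Only the lookup-by-port idea is taken from the issue; all names here
// are hypothetical, not NiFi's actual classes.
public class PortLookupSketch {

    interface WebSocketService {
    }

    // Controller services register under the port they actually bind to.
    static final Map<Integer, WebSocketService> portToService = new ConcurrentHashMap<>();

    static WebSocketService lookup(final int requestedPort) {
        final WebSocketService service = portToService.get(requestedPort);
        if (service == null) {
            // The failure seen in the stack trace: the CS is bound to 15000,
            // but ServletUpgradeRequest reports the client-facing port 25000.
            throw new RuntimeException("No controller service is bound with port: " + requestedPort);
        }
        return service;
    }

    public static void main(final String[] args) {
        portToService.put(15000, new WebSocketService() { });  // CS listens on 15000
        try {
            lookup(25000);  // port seen on a request made through forwarding
        } catch (final RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A fix along the lines Koji proposes would either resolve the bound (local) port 
from the request, or register the CS under additional, explicitly configured 
forwarded port numbers.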


java.lang.RuntimeException: No controller service is bound with port: 25000
at 
org.apache.nifi.websocket.jetty.JettyWebSocketServer$JettyWebSocketServlet.createWebSocket(JettyWebSocketServer.java:134)
 ~[nifi-websocket-services-jetty-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at 
org.eclipse.jetty.websocket.server.WebSocketServerFactory.acceptWebSocket(WebSocketServerFactory.java:187)
 ~[websocket-server-9.3.13.v20161014.jar:9.3.13.v20161014]
at 
org.eclipse.jetty.websocket.server.WebSocketServerFactory.acceptWebSocket(WebSocketServerFactory.java:172)
 ~[websocket-server-9.3.13.v20161014.jar:9.3.13.v20161014]
at 
org.eclipse.jetty.websocket.servlet.WebSocketServlet.service(WebSocketServlet.java:155)
 ~[websocket-servlet-9.3.13.v20161014.jar:9.3.13.v20161014]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
~[javax.servlet-api-3.1.0.jar:3.1.0]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845) 
~[jetty-servlet-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583) 
[jetty-servlet-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1174)
 [jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) 
[jetty-servlet-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1106)
 [jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
[jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 [jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
[jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.server.Server.handle(Server.java:524) 
[jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) 
[jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) 
[jetty-server-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
 [jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) 
[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 [jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 [jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 [jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 [jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]

  was:
Situation:
We have port 

[jira] [Assigned] (NIFI-3561) JettyWebSocketServer uses original requested port, instead of the forwarded port

2017-03-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-3561:
---

Assignee: Koji Kawamura

> JettyWebSocketServer uses original requested port, instead of the forwarded 
> port
> 
>
> Key: NIFI-3561
> URL: https://issues.apache.org/jira/browse/NIFI-3561
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Edgar Orendain
>Assignee: Koji Kawamura
>  Labels: jetty, websocket
>

[jira] [Commented] (NIFI-3163) Flow Fingerprint should include new RPG configurations

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898882#comment-15898882
 ] 

ASF GitHub Bot commented on NIFI-3163:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1332
  
@alopresto It'd be convenient if a node could fix a stale flow.xml.gz 
automatically and join the cluster by removing its flow.xml.gz and fetching the 
latest one from the primary node. I think we need to add a nifi.properties entry 
to enable that logic (disabled by default). As you wrote, that is beyond the 
scope of this PR. It may be worth opening another JIRA to keep the discussion 
going.


> Flow Fingerprint should include new RPG configurations
> --
>
> Key: NIFI-3163
> URL: https://issues.apache.org/jira/browse/NIFI-3163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NiFi calculates the fingerprint of a flow.xml using attributes related to data 
> processing. Since 1.0.0, some RemoteProcessGroup configurations have been 
> added but are not taken into account in the fingerprint. These new 
> configurations should be revisited and added accordingly:
> - Transport protocol
> - Multiple target URIs
> - Proxy settings
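Conceptually, covering the new settings just means feeding them into the hash 
that produces the fingerprint. Below is a minimal sketch; the method name, field 
choices, and the use of SHA-256 are assumptions for illustration, not NiFi's 
actual fingerprint implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Sketch: two flows that differ only in an RPG setting should produce
// different fingerprints once those settings are hashed in.
public class FingerprintSketch {

    static String fingerprint(String transportProtocol, List<String> targetUris, String proxyHost) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            // Each data-processing-relevant attribute contributes to the digest.
            md.update(transportProtocol.getBytes(StandardCharsets.UTF_8));
            for (String uri : targetUris) {
                md.update(uri.getBytes(StandardCharsets.UTF_8));
            }
            md.update(proxyHost.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        String a = fingerprint("RAW", List.of("http://nifi-a:8080/nifi"), "");
        String b = fingerprint("HTTP", List.of("http://nifi-a:8080/nifi"), "");
        System.out.println("differ: " + !a.equals(b));
    }
}
```

Changing only the transport protocol changes the fingerprint, which is exactly 
the property the cluster needs in order to detect divergent RPG configurations.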



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1332: NIFI-3163: Flow Fingerprint should include new RPG configu...

2017-03-06 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1332
  
@alopresto It'd be convenient if a node could fix a stale flow.xml.gz 
automatically and join the cluster by removing its flow.xml.gz and fetching the 
latest one from the primary node. I think we need to add a nifi.properties entry 
to enable that logic (disabled by default). As you wrote, that is beyond the 
scope of this PR. It may be worth opening another JIRA to keep the discussion 
going.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3517) If HandleHttpResponse cannot write response, remove entry from HttpContextMap

2017-03-06 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora updated NIFI-3517:

Affects Version/s: 1.1.1
   Status: Patch Available  (was: Open)

> If HandleHttpResponse cannot write response, remove entry from HttpContextMap
> -
>
> Key: NIFI-3517
> URL: https://issues.apache.org/jira/browse/NIFI-3517
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.1, 0.7.1
>Reporter: Mark Payne
>Assignee: Joe Skora
>
> Currently, if HandleHttpResponse cannot write to the OutputStream, it catches 
> the general Exception and routes to 'failure' without removing the entry from 
> the map. If clients time out often, though, we will fail to write to the 
> OutputStreams and then leave the entries in the map, preventing new connections.
> If a ProcessException is caught when calling ProcessSession.exportTo, we 
> should remove the entry from the map before routing to failure.
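The proposed handling can be sketched as follows. This is an illustrative 
sketch under assumptions, not the actual patch: HttpContextMap is reduced to a 
plain map, sendResponse stands in for the processor's response-writing path, 
and the Runnable stands in for ProcessSession.exportTo.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the proposed fix: if writing the response fails, remove the
// context-map entry before routing to failure so the slot is freed for
// new connections. All names here are hypothetical stand-ins.
public class ResponseFailureSketch {

    static final Map<String, Object> httpContextMap = new ConcurrentHashMap<>();

    /** Returns the relationship name; always releases the entry. */
    static String sendResponse(String contextId, Runnable exportTo) {
        try {
            exportTo.run();                    // stands in for ProcessSession.exportTo(...)
            httpContextMap.remove(contextId);  // normal completion releases the entry
            return "success";
        } catch (RuntimeException e) {         // e.g. ProcessException on client timeout
            httpContextMap.remove(contextId);  // the fix: free the entry before routing to failure
            return "failure";
        }
    }

    public static void main(String[] args) {
        httpContextMap.put("req-1", new Object());
        String route = sendResponse("req-1", () -> {
            throw new RuntimeException("client timed out");
        });
        System.out.println(route + ", entries left: " + httpContextMap.size());
    }
}
```

Without the removal in the catch branch, repeated client timeouts would leave 
stale entries behind until the map's capacity blocks new requests.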





[GitHub] nifi-minifi-cpp issue #62: MINIFI-231: Add Flow Persistent, Using id instead...

2017-03-06 Thread benqiu2016
Github user benqiu2016 commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/62
  
@phrocker Added a unit test; please review and approve. I would like to get it 
merged before your big namespace change. Thanks a lot.




[jira] [Commented] (NIFI-3517) If HandleHttpResponse cannot write response, remove entry from HttpContextMap

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898689#comment-15898689
 ] 

ASF GitHub Bot commented on NIFI-3517:
--

GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1567

NIFI-3517 If HandleHttpResponse cannot write response remove entry from 
HttpContextMap

* Add test for failure not clearing context map.
* Add handler to remove context map entry if ProcessException occurs while 
exporting response.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-3517-1.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1567.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1567


commit 3aabd1eb23efb84d842489adcff55463633885f3
Author: Joe Skora 
Date:   2017-03-07T01:25:06Z

NIFI-3517 If HandleHttpResponse cannot write response remove entry from 
HttpContextMap.
* Add test for failure not clearing context map.
* Add handler to remove context map entry if ProcessException occurs while 
exporting response.




> If HandleHttpResponse cannot write response, remove entry from HttpContextMap
> -
>
> Key: NIFI-3517
> URL: https://issues.apache.org/jira/browse/NIFI-3517
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.7.1
>Reporter: Mark Payne
>Assignee: Joe Skora
>
> Currently, if HandleHttpResponse cannot write to the OutputStream, it catches 
> the general Exception and routes to 'failure' without removing the entry from 
> the map. If clients time out often, though, we will fail to write to the 
> OutputStreams and then leave the entries in the map, preventing new connections.
> If a ProcessException is caught when calling ProcessSession.exportTo, we 
> should remove the entry from the map before routing to failure.





[GitHub] nifi pull request #1567: NIFI-3517 If HandleHttpResponse cannot write respon...

2017-03-06 Thread jskora
GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1567

NIFI-3517 If HandleHttpResponse cannot write response remove entry from 
HttpContextMap

* Add test for failure not clearing context map.
* Add handler to remove context map entry if ProcessException occurs while 
exporting response.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-3517-1.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1567.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1567


commit 3aabd1eb23efb84d842489adcff55463633885f3
Author: Joe Skora 
Date:   2017-03-07T01:25:06Z

NIFI-3517 If HandleHttpResponse cannot write response remove entry from 
HttpContextMap.
* Add test for failure not clearing context map.
* Add handler to remove context map entry if ProcessException occurs while 
exporting response.






[jira] [Updated] (NIFI-3561) JettyWebSocketServer uses original requested port, instead of the forwarded port

2017-03-06 Thread Edgar Orendain (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edgar Orendain updated NIFI-3561:
-
Description: 
Situation:
We have port forwarding set up such that port 25000 is forwarded to port 15000. 
The ListenWebSocket processor listens on port 15000.


Requests that hit the JettyWebSocketServer are treated as attempting to connect 
to the original port (25000) rather than the forwarded port (15000). The same 
behavior was reproduced with different ports and with different forwarding 
layers (e.g., NAT and iptables). I've included a stack trace below.


From Koji:
I think it's more of a Jetty-side issue: after it upgrades the HTTP connection 
to TCP, it still uses the originally requested port (which is a port-forwarding 
request to the real port) to find a request handler assigned to that port.



java.lang.RuntimeException: No controller service is bound with port: 25000

  was:
Situation:
We have port forwarding set up in such a way where port 25000 is forwarded to 
port 15000.  ListenWebSocket processor listens on this 15000 port.

Requests that hit the JettyWebSocketServer are considered as attempting to 
connect to the original port (25000) rather than the forwarded port of 15000.  
The same was reproduced with different ports and on different forwarding layers 
(i.e. NAT and iptables).  I've included a stack trace below.

From Koji:
I think it's more of a Jetty side issue, after it upgrade HTTP connection to 
TCP, 

[jira] [Created] (NIFI-3561) JettyWebSocketServer uses original requested port, instead of the forwarded port

2017-03-06 Thread Edgar Orendain (JIRA)
Edgar Orendain created NIFI-3561:


 Summary: JettyWebSocketServer uses original requested port, 
instead of the forwarded port
 Key: NIFI-3561
 URL: https://issues.apache.org/jira/browse/NIFI-3561
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.1.0
Reporter: Edgar Orendain


Situation:
We have port forwarding set up such that port 25000 is forwarded to port 15000. 
The ListenWebSocket processor listens on port 15000.

Requests that hit the JettyWebSocketServer are treated as attempting to connect 
to the original port (25000) rather than the forwarded port (15000). The same 
behavior was reproduced with different ports and with different forwarding 
layers (e.g., NAT and iptables). I've included a stack trace below.

From Koji:
I think it's more of a Jetty-side issue: after it upgrades the HTTP connection 
to TCP, it still uses the originally requested port (which is a port-forwarding 
request to the real port) to find a request handler assigned to that port.


java.lang.RuntimeException: No controller service is bound with port: 25000





[GitHub] nifi issue #1332: NIFI-3163: Flow Fingerprint should include new RPG configu...

2017-03-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1332
  
I have a question about this process (it may be beyond the scope of this PR, 
though) -- why not allow the DFM to remove the flow.xml.gz from the node with 
the outdated flow via the REST API and UI from any connected node? The issue is 
not one of limited or blocked connectivity between the nodes, so there could be 
logic on the node itself to remove its flow definition when given a properly 
authorized command from a remote node that it knows is connected to the 
cluster. Authentication should not be an issue, as the DFM permissions should 
be the same across the cluster.

I at first considered allowing the node to perform this logic on its own by 
archiving its current flow and then deleting it to attempt to rejoin the 
cluster, but I can understand how admins would be uncomfortable that that level 
of autonomous activity could lead to data loss.

Thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3163) Flow Fingerprint should include new RPG configurations

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898598#comment-15898598
 ] 

ASF GitHub Bot commented on NIFI-3163:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1332
  
I have a question about this process (it may be beyond the scope of this PR 
though) -- why not allow the DFM to remove the flow.xml.gz from the node with 
the outdated flow via the REST API and UI from any connected node? The issue is 
not one of limited or blocked connectivity between the nodes, so there could be 
logic on the node itself to remove its flow definition when given a 
properly-authorized command from a remote node that it knows is connected to 
the cluster. Authentication should not be an issue, as the DFM permissions 
should be the same across the cluster. 

At first I considered allowing the node to perform this logic on its own by 
archiving its current flow and then deleting it to attempt to rejoin the 
cluster, but I can understand how admins would be uncomfortable that this level 
of autonomous activity could lead to data loss. 

Thoughts?


> Flow Fingerprint should include new RPG configurations
> --
>
> Key: NIFI-3163
> URL: https://issues.apache.org/jira/browse/NIFI-3163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NiFi calculates the fingerprint of a flow.xml using attributes related to 
> data processing. Since 1.0.0, some RemoteProcessGroup configurations have 
> been added but are not taken into account in the fingerprint.
> These new configurations should be revisited and added accordingly.
> - Transport protocol
> - Multiple target URIs
> - Proxy settings





[jira] [Updated] (NIFI-3490) TLS Toolkit - define SAN in standalone mode

2017-03-06 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3490:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TLS Toolkit - define SAN in standalone mode
> ---
>
> Key: NIFI-3490
> URL: https://issues.apache.org/jira/browse/NIFI-3490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: tls-toolkit
> Fix For: 1.2.0
>
>
> Following NIFI-3331, it would be useful to have the same option (add Subject 
> Alternative Names in certificates) when using the TLS toolkit in standalone 
> mode.





[GitHub] nifi issue #1530: NIFI-3490 added SAN option for TLS toolkit in standalone m...

2017-03-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1530
  
Reviewed, verified contrib-check and all tests, and ran tool. Merged. 




[jira] [Commented] (NIFI-3490) TLS Toolkit - define SAN in standalone mode

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898486#comment-15898486
 ] 

ASF GitHub Bot commented on NIFI-3490:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1530
  
Reviewed, verified contrib-check and all tests, and ran tool. Merged. 


> TLS Toolkit - define SAN in standalone mode
> ---
>
> Key: NIFI-3490
> URL: https://issues.apache.org/jira/browse/NIFI-3490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: tls-toolkit
> Fix For: 1.2.0
>
>
> Following NIFI-3331, it would be useful to have the same option (add Subject 
> Alternative Names in certificates) when using the TLS toolkit in standalone 
> mode.





[jira] [Commented] (NIFI-3490) TLS Toolkit - define SAN in standalone mode

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898485#comment-15898485
 ] 

ASF GitHub Bot commented on NIFI-3490:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1530


> TLS Toolkit - define SAN in standalone mode
> ---
>
> Key: NIFI-3490
> URL: https://issues.apache.org/jira/browse/NIFI-3490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: tls-toolkit
> Fix For: 1.2.0
>
>
> Following NIFI-3331, it would be useful to have the same option (add Subject 
> Alternative Names in certificates) when using the TLS toolkit in standalone 
> mode.





[jira] [Commented] (NIFI-3490) TLS Toolkit - define SAN in standalone mode

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898482#comment-15898482
 ] 

ASF subversion and git services commented on NIFI-3490:
---

Commit bf112d065434ed536fff10b7aaa5eb3b70bc4b9d in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=bf112d0 ]

NIFI-3490 added SAN option for TLS toolkit in standalone mode

This closes #1530.

Signed-off-by: Andy LoPresto 


> TLS Toolkit - define SAN in standalone mode
> ---
>
> Key: NIFI-3490
> URL: https://issues.apache.org/jira/browse/NIFI-3490
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: tls-toolkit
> Fix For: 1.2.0
>
>
> Following NIFI-3331, it would be useful to have the same option (add Subject 
> Alternative Names in certificates) when using the TLS toolkit in standalone 
> mode.





[GitHub] nifi pull request #1530: NIFI-3490 added SAN option for TLS toolkit in stand...

2017-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1530




[jira] [Commented] (NIFI-3490) TLS Toolkit - define SAN in standalone mode

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898477#comment-15898477
 ] 

ASF GitHub Bot commented on NIFI-3490:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1530
  
@pvillard31 I reviewed this again. I like it; my only concern is that if 
you run with `-SAN` it will still process and won't show any error, so a user 
might expect it to generate a SAN in the keystore even though it won't. As no 
one other than the commenters on this PR has experienced this, however, I do 
not think it should block the inclusion of this feature. 

```

hw12203:...assembly/target/nifi-toolkit-1.2.0-SNAPSHOT-bin/nifi-toolkit-1.2.0-SNAPSHOT
 (pr1530) alopresto
🔓 1224747s @ 16:32:49 $ ./bin/tls-toolkit.sh standalone -n localhost -O -S 
password -SAN hostname.com
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No 
nifiPropertiesFile specified, using embedded one.
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone 
certificate generation with output directory ../nifi-toolkit-1.2.0-SNAPSHOT
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Using existing CA 
certificate ../nifi-toolkit-1.2.0-SNAPSHOT/nifi-cert.pem and key 
../nifi-toolkit-1.2.0-SNAPSHOT/nifi-key.key
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Overwriting any 
existing ssl configuration in ../nifi-toolkit-1.2.0-SNAPSHOT/localhost
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully 
generated TLS configuration for localhost 1 in 
../nifi-toolkit-1.2.0-SNAPSHOT/localhost
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: No clientCertDn 
specified, not generating any client certificates.
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit 
standalone completed successfully

hw12203:...assembly/target/nifi-toolkit-1.2.0-SNAPSHOT-bin/nifi-toolkit-1.2.0-SNAPSHOT
 (pr1530) alopresto
🔓 1224763s @ 16:33:05 $ keytool -list -v -keystore localhost/keystore.jks 
-storepass password

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: nifi-key
Creation date: Mar 6, 2017
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: CN=localhost, OU=NIFI
Issuer: CN=localhost, OU=NIFI
Serial number: 15aa62f097a
Valid from: Mon Mar 06 16:33:04 PST 2017 until: Thu Mar 05 16:33:04 PST 2020
Certificate fingerprints:
 MD5:  52:CF:22:54:02:AB:22:8E:DE:AC:C8:2E:3F:8C:1B:2C
 SHA1: 55:87:B5:20:8F:1F:03:F3:D2:68:85:F5:4E:49:85:D5:53:6A:27:11
 SHA256: 
E1:6E:F6:89:73:70:26:31:57:CB:8B:E6:44:DA:32:0B:77:39:22:1D:EA:5E:B8:3E:2D:4F:24:4C:68:2A:A4:3F
 Signature algorithm name: SHA256withRSA
 Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: D0 B6 50 54 15 F2 64 EA   AA EB D4 82 A0 07 B4 2D  ..PT..d-
0010: 28 AC 66 CF(.f.
]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:false
  PathLen: undefined
]

#3: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  clientAuth
  serverAuth
]

#4: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Non_repudiation
  Key_Encipherment
  Data_Encipherment
  Key_Agreement
]

#5: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
: D0 DC A2 03 C9 18 03 B8   B1 1B 0E 11 BC A5 A7 CF  
0010: 1F 68 3A 63.h:c
]
]

Certificate[2]:
Owner: CN=localhost, OU=NIFI
Issuer: CN=localhost, OU=NIFI
Serial number: 15aa62eb59a
Valid from: Mon Mar 06 16:32:43 PST 2017 until: Thu Mar 05 16:32:43 PST 2020
Certificate fingerprints:
 MD5:  44:15:FC:42:BE:A3:A5:7E:C3:86:AF:82:50:51:E3:E4
 SHA1: 38:D3:42:19:6C:71:5C:02:BF:8E:A1:02:DE:A3:D8:0C:D4:73:8D:B7
 SHA256: 
13:71:F8:1A:22:A3:43:93:29:1D:2F:41:F8:C0:1E:25:79:E2:D7:5D:28:53:5C:21:97:A0:68:6C:AD:39:18:62
 Signature algorithm name: SHA256withRSA
 Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: D0 B6 50 54 15 F2 64 EA   AA EB D4 82 A0 07 B4 2D  ..PT..d-
0010: 28 AC 66 CF(.f.
]
]


[GitHub] nifi issue #1530: NIFI-3490 added SAN option for TLS toolkit in standalone m...

2017-03-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1530
  
@pvillard31 I reviewed this again. I like it; my only concern is that if 
you run with `-SAN` it will still process and won't show any error, so a user 
might expect it to generate a SAN in the keystore even though it won't. As no 
one other than the commenters on this PR has experienced this, however, I do 
not think it should block the inclusion of this feature. 

```

hw12203:...assembly/target/nifi-toolkit-1.2.0-SNAPSHOT-bin/nifi-toolkit-1.2.0-SNAPSHOT
 (pr1530) alopresto
🔓 1224747s @ 16:32:49 $ ./bin/tls-toolkit.sh standalone -n localhost -O 
-S password -SAN hostname.com
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No 
nifiPropertiesFile specified, using embedded one.
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone 
certificate generation with output directory ../nifi-toolkit-1.2.0-SNAPSHOT
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Using existing CA 
certificate ../nifi-toolkit-1.2.0-SNAPSHOT/nifi-cert.pem and key 
../nifi-toolkit-1.2.0-SNAPSHOT/nifi-key.key
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Overwriting any 
existing ssl configuration in ../nifi-toolkit-1.2.0-SNAPSHOT/localhost
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully 
generated TLS configuration for localhost 1 in 
../nifi-toolkit-1.2.0-SNAPSHOT/localhost
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: No clientCertDn 
specified, not generating any client certificates.
2017/03/06 16:33:04 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit 
standalone completed successfully

hw12203:...assembly/target/nifi-toolkit-1.2.0-SNAPSHOT-bin/nifi-toolkit-1.2.0-SNAPSHOT
 (pr1530) alopresto
🔓 1224763s @ 16:33:05 $ keytool -list -v -keystore 
localhost/keystore.jks -storepass password

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: nifi-key
Creation date: Mar 6, 2017
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: CN=localhost, OU=NIFI
Issuer: CN=localhost, OU=NIFI
Serial number: 15aa62f097a
Valid from: Mon Mar 06 16:33:04 PST 2017 until: Thu Mar 05 16:33:04 PST 2020
Certificate fingerprints:
 MD5:  52:CF:22:54:02:AB:22:8E:DE:AC:C8:2E:3F:8C:1B:2C
 SHA1: 55:87:B5:20:8F:1F:03:F3:D2:68:85:F5:4E:49:85:D5:53:6A:27:11
 SHA256: 
E1:6E:F6:89:73:70:26:31:57:CB:8B:E6:44:DA:32:0B:77:39:22:1D:EA:5E:B8:3E:2D:4F:24:4C:68:2A:A4:3F
 Signature algorithm name: SHA256withRSA
 Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: D0 B6 50 54 15 F2 64 EA   AA EB D4 82 A0 07 B4 2D  ..PT..d-
0010: 28 AC 66 CF(.f.
]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:false
  PathLen: undefined
]

#3: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  clientAuth
  serverAuth
]

#4: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Non_repudiation
  Key_Encipherment
  Data_Encipherment
  Key_Agreement
]

#5: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
: D0 DC A2 03 C9 18 03 B8   B1 1B 0E 11 BC A5 A7 CF  
0010: 1F 68 3A 63.h:c
]
]

Certificate[2]:
Owner: CN=localhost, OU=NIFI
Issuer: CN=localhost, OU=NIFI
Serial number: 15aa62eb59a
Valid from: Mon Mar 06 16:32:43 PST 2017 until: Thu Mar 05 16:32:43 PST 2020
Certificate fingerprints:
 MD5:  44:15:FC:42:BE:A3:A5:7E:C3:86:AF:82:50:51:E3:E4
 SHA1: 38:D3:42:19:6C:71:5C:02:BF:8E:A1:02:DE:A3:D8:0C:D4:73:8D:B7
 SHA256: 
13:71:F8:1A:22:A3:43:93:29:1D:2F:41:F8:C0:1E:25:79:E2:D7:5D:28:53:5C:21:97:A0:68:6C:AD:39:18:62
 Signature algorithm name: SHA256withRSA
 Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: D0 B6 50 54 15 F2 64 EA   AA EB D4 82 A0 07 B4 2D  ..PT..d-
0010: 28 AC 66 CF(.f.
]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#3: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  clientAuth
  serverAuth
]


[GitHub] nifi issue #1332: NIFI-3163: Flow Fingerprint should include new RPG configu...

2017-03-06 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1332
  
@markap14 Would you be able to review this PR? It has been open for a while 
and I have had to update it a couple of times to resolve conflicts with the 
latest master. This PR is required to move NIFI-1202 (#1306) forward, which 
adds batch-related configurations at the RPG Port for better control over load 
distribution. Thanks a lot in advance!




[jira] [Commented] (NIFI-3163) Flow Fingerprint should include new RPG configurations

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898429#comment-15898429
 ] 

ASF GitHub Bot commented on NIFI-3163:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1332
  
@markap14 Would you be able to review this PR? It has been open for a while 
and I have had to update it a couple of times to resolve conflicts with the 
latest master. This PR is required to move NIFI-1202 (#1306) forward, which 
adds batch-related configurations at the RPG Port for better control over load 
distribution. Thanks a lot in advance!


> Flow Fingerprint should include new RPG configurations
> --
>
> Key: NIFI-3163
> URL: https://issues.apache.org/jira/browse/NIFI-3163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NiFi calculates the fingerprint of a flow.xml using attributes related to 
> data processing. Since 1.0.0, some RemoteProcessGroup configurations have 
> been added but are not taken into account in the fingerprint.
> These new configurations should be revisited and added accordingly.
> - Transport protocol
> - Multiple target URIs
> - Proxy settings





[jira] [Commented] (NIFI-3334) Processor Run Duration does not take effect if clicking Apply from tab other than Scheduling

2017-03-06 Thread James Wing (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898428#comment-15898428
 ] 

James Wing commented on NIFI-3334:
--

Run duration is indeed selectively saved based on the visibility of the slider 
control.  This was added in 
[b19ff7c|https://github.com/apache/nifi/commit/b19ff7cf37162934cafabdf46dedd4afc46f2a82#diff-eec951084f43962cd777a891da3ffd88].
  When saving the properties, the code checks if the slider control is visible 
(from nf-processor-configuration.js, line ~336):

{code}
// run duration
if ($('#run-duration-setting-container').is(':visible')) {
    var runDurationIndex = $('#run-duration-slider').slider('value');
    processorConfigDto['runDurationMillis'] = RUN_DURATION_VALUES[runDurationIndex];
}
{code}

The visibility check evaluates to false when the Scheduling tab is not active, 
so the run duration is not saved.

[~mcgilman], do you remember why this is so?  It appears that the intent was to 
only display and save the slider data for processors that support batching.  
Would it be OK to change it to {{if (processor.supportsBatching === true)}}, 
which is used to initialize the slider visibility (line ~680)?

{code}
// set the run duration if applicable
if (processor.supportsBatching === true) {
$('#run-duration-setting-container').show();
{code}
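
A minimal sketch of the proposed change (hypothetical code, not the actual nf-processor-configuration.js, and the {{RUN_DURATION_VALUES}} below are illustrative): keying the save on the processor's {{supportsBatching}} capability rather than on slider visibility makes the result independent of which tab is active when Apply is clicked.

```javascript
// Hypothetical illustration of the proposed condition: save the run duration
// based on the processor capability, not on whether the slider is visible.
const RUN_DURATION_VALUES = [0, 25, 50, 100, 250, 500, 1000, 2000]; // assumed values

function buildProcessorConfig(processor, sliderIndex) {
    const processorConfigDto = {};
    // Proposed check: capability-based, independent of the active tab.
    if (processor.supportsBatching === true) {
        processorConfigDto.runDurationMillis = RUN_DURATION_VALUES[sliderIndex];
    }
    return processorConfigDto;
}

// A batching processor keeps its run duration even if another tab was active:
console.log(buildProcessorConfig({ supportsBatching: true }, 5).runDurationMillis);  // 500
// A non-batching processor omits the field entirely:
console.log(buildProcessorConfig({ supportsBatching: false }, 5).runDurationMillis); // undefined
```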

> Processor Run Duration does not take effect if clicking Apply from tab other 
> than Scheduling
> 
>
> Key: NIFI-3334
> URL: https://issues.apache.org/jira/browse/NIFI-3334
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Mark Payne
>
> If I create a processor and change the Run Duration in the Scheduling tab, 
> and then I click Apply while still on the Scheduling tab, then I go back into 
> the configuration dialog, I see that the Run Duration is still properly set. 
> However, if I change the Run Duration, then switch to the Settings or 
> Properties tab, and then click Apply from that tab, the settings do not take 
> effect -- in that case, when I click configure again, I see Run Duration set 
> to the default of 0 ms.





[jira] [Commented] (NIFI-2699) Improve handling of response timeouts in cluster

2017-03-06 Thread Dima Kovalyov (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898390#comment-15898390
 ] 

Dima Kovalyov commented on NIFI-2699:
-

I hit this problem all the time when I work with large NiFi flows: 6 groups, 
up to 8 processors in each, with 3+ flow files in the queue.

> Improve handling of response timeouts in cluster
> 
>
> Key: NIFI-2699
> URL: https://issues.apache.org/jira/browse/NIFI-2699
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Jeff Storck
>Priority: Minor
>
> When running as a cluster, if a node is unable to respond within the socket 
> timeout (eg, hitting a breakpoint while debugging), an 
> IllegalClusterStateException will be thrown that causes the UI to show the 
> "check config and fix errors" page.  Once the node is communicating with the 
> cluster again (i.e., breakpoint in the code is passed), the UI can be 
> reloaded and the cluster recovers from the timeout without any user 
> intervention at the service level. However, user experience could be 
> improved.  If a user initiates a replicated request to a node that is unable 
> to respond within the socket timeout duration, the user might think NiFi 
> crashed, when it in fact didn't.
> Here is the stack trace that was encountered during testing:
> {code}
> 2016-08-29 11:36:59,041 DEBUG [NiFi Web Server-22] 
> o.a.n.w.a.c.IllegalClusterStateExceptionMapper
> org.apache.nifi.cluster.manager.exception.IllegalClusterStateException: Node 
> localhost:8443 is unable to fulfill this request due to: Unexpected Response 
> Code 500
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$2.onCompletion(ThreadPoolRequestReplicator.java:471)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:729)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_92]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_92]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> Caused by: com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
> at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
>  ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.handle(Client.java:652) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:560) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:537)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:720)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> ... 5 common frames omitted
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.read(SocketInputStream.java:170) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.read(SocketInputStream.java:141) 
> ~[na:1.8.0_92]
> at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) 
> ~[na:1.8.0_92]
> at sun.security.ssl.InputRecord.read(InputRecord.java:503) 
> ~[na:1.8.0_92]
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) 
> ~[na:1.8.0_92]
> at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930) 
> ~[na:1.8.0_92]
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) 
> ~[na:1.8.0_92]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> ~[na:1.8.0_92]
> at 

[jira] [Comment Edited] (NIFI-2699) Improve handling of response timeouts in cluster

2017-03-06 Thread Dima Kovalyov (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898390#comment-15898390
 ] 

Dima Kovalyov edited comment on NIFI-2699 at 3/6/17 11:40 PM:
--

I hit this problem all the time when I work with large NiFi flows: 6 groups, 
up to 8 processors in each, with 3+ flow files in the queue.


was (Author: dima_k):
I hit this problem all the time when i work with large NiFi flows, 6 groups, 
up-to 8 processors in each with 3+ flow files in queue.

> Improve handling of response timeouts in cluster
> 
>
> Key: NIFI-2699
> URL: https://issues.apache.org/jira/browse/NIFI-2699
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Jeff Storck
>Priority: Minor
>
> When running as a cluster, if a node is unable to respond within the socket 
> timeout (eg, hitting a breakpoint while debugging), an 
> IllegalClusterStateException will be thrown that causes the UI to show the 
> "check config and fix errors" page.  Once the node is communicating with the 
> cluster again (i.e., breakpoint in the code is passed), the UI can be 
> reloaded and the cluster recovers from the timeout without any user 
> intervention at the service level. However, user experience could be 
> improved.  If a user initiates a replicated request to a node that is unable 
> to respond within the socket timeout duration, the user might think NiFi 
> crashed, when it in fact didn't.
> Here is the stack trace that was encountered during testing:
> {code}
> 2016-08-29 11:36:59,041 DEBUG [NiFi Web Server-22] 
> o.a.n.w.a.c.IllegalClusterStateExceptionMapper
> org.apache.nifi.cluster.manager.exception.IllegalClusterStateException: Node 
> localhost:8443 is unable to fulfill this request due to: Unexpected Response 
> Code 500
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$2.onCompletion(ThreadPoolRequestReplicator.java:471)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:729)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_92]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_92]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> Caused by: com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
> at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
>  ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.handle(Client.java:652) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:560) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:537)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:720)
>  ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> ... 5 common frames omitted
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.read(SocketInputStream.java:170) 
> ~[na:1.8.0_92]
> at java.net.SocketInputStream.read(SocketInputStream.java:141) 
> ~[na:1.8.0_92]
> at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) 
> ~[na:1.8.0_92]
> at sun.security.ssl.InputRecord.read(InputRecord.java:503) 
> ~[na:1.8.0_92]
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) 
> ~[na:1.8.0_92]
> at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930) 
> ~[na:1.8.0_92]
> at 

[jira] [Assigned] (NIFI-3517) If HandleHttpResponse cannot write response, remove entry from HttpContextMap

2017-03-06 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora reassigned NIFI-3517:
---

Assignee: Joe Skora

> If HandleHttpResponse cannot write response, remove entry from HttpContextMap
> -
>
> Key: NIFI-3517
> URL: https://issues.apache.org/jira/browse/NIFI-3517
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.7.1
>Reporter: Mark Payne
>Assignee: Joe Skora
>
> Currently, if HandleHttpResponse cannot write to the OutputStream, it catches 
> the general Exception and routes to 'failure' without removing the entry from 
> the map. If clients often time out, though, we will fail to write to the 
> OutputStreams and then leave the entry in the map, preventing new connections.
> If a ProcessException is caught when calling ProcessSession.exportTo, we 
> should remove the entry from the map before routing to failure.
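
The proposed behavior could look like this sketch (illustrative names, not the actual HandleHttpResponse implementation): evict the entry from the context map whether the response write succeeds or fails, so a timed-out client cannot leave a stale entry behind that blocks new connections.

```javascript
// Hypothetical sketch of the proposed fix for HandleHttpResponse-style logic.
function handleResponse(flowFile, contextMap, writeResponse, routeTo) {
  try {
    writeResponse(flowFile);
    contextMap.delete(flowFile.contextId); // normal completion releases the entry
    routeTo('success', flowFile);
  } catch (e) {
    contextMap.delete(flowFile.contextId); // the fix: release the entry on failure too
    routeTo('failure', flowFile);
  }
}

// A write that times out still frees the context-map slot:
const contextMap = new Map([['req-1', {}]]);
const routes = [];
handleResponse(
  { contextId: 'req-1' },
  contextMap,
  () => { throw new Error('client timed out'); },
  (rel) => routes.push(rel)
);
console.log(routes, contextMap.size); // [ 'failure' ] 0
```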





[jira] [Commented] (NIFI-3559) Improve S2S load-balancing

2017-03-06 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898374#comment-15898374
 ] 

Koji Kawamura commented on NIFI-3559:
-

[~msclarke] I've been working on NIFI-1202, which adds S2S batch configuration 
to the remote port. It's mostly done, as summarized in [Github PR 
1306|https://github.com/apache/nifi/pull/1306].

As with NIFI-2987, do you think NIFI-1202 will complete this improvement 
request, or is there any additional enhancement required?

> Improve S2S load-balancing
> --
>
> Key: NIFI-3559
> URL: https://issues.apache.org/jira/browse/NIFI-3559
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Matthew Clarke
>Assignee: Koji Kawamura
>
> The current implementation of S2S sends data continuously to the destination 
> NiFi node for 0.5 seconds before closing the connection and opening a new 
> connection to another node.
> When the source FlowFiles are all very small (0 bytes in the case of list-based 
> processors), the entire queue can end up being sent to only one of the 
> target NiFi cluster nodes.
> Another common use case for S2S is to have an RPG pointed back at the same cluster 
> where the RPG was added.  Since FlowFiles transfer to the same 
> node where the data originates (think Primary node data redistribution within 
> a cluster) much faster than they transfer to other nodes, the primary node is 
> likely to always end up with more FlowFiles than any other node.
> There needs to be an additional load-balancing strategy that complements the 
> existing 0.5-second one to improve load-balancing in such cases.  The 
> RPG knows how many target nodes there are and how many FlowFiles exist in the 
> queue at run time, so perhaps using that info to split the queue more evenly 
> amongst all nodes would help.
> This is related to existing Jira: NIFI-2987
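The even-split idea in the description (RPG knows the node count and the queue depth, so it can cap each transaction rather than streaming for a fixed 0.5 seconds) could be sketched as follows. This is a hypothetical illustration, not the actual NiFi implementation; `batchSizeFor` is an invented name.

```java
// Hedged sketch: cap each node's batch at ceil(queue / nodes) so a full
// queue is spread over one round of all target nodes instead of draining
// into whichever node the current 0.5-second window happens to hit.
public class EvenSplit {
    // Returns how many FlowFiles to send to one node in this transaction.
    static int batchSizeFor(int queuedFlowFiles, int targetNodeCount) {
        if (targetNodeCount <= 0) {
            throw new IllegalArgumentException("need at least one target node");
        }
        // Ceiling division so the whole queue is covered in one round of nodes.
        return (queuedFlowFiles + targetNodeCount - 1) / targetNodeCount;
    }

    public static void main(String[] args) {
        // 10 queued FlowFiles across 3 nodes -> at most 4 per node per round.
        System.out.println(batchSizeFor(10, 3)); // prints 4
    }
}
```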





[jira] [Assigned] (NIFI-3559) Improve S2S load-balancing

2017-03-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-3559:
---

Assignee: Koji Kawamura

> Improve S2S load-balancing
> --
>
> Key: NIFI-3559
> URL: https://issues.apache.org/jira/browse/NIFI-3559
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Matthew Clarke
>Assignee: Koji Kawamura
>
> The current implementation of S2S sends data continuously to the destination 
> NiFi node for 0.5 seconds before closing the connection and opening a new 
> connection to another node.
> When the source FlowFiles are all very small (0 bytes in the case of list-based 
> processors), the entire queue can end up being sent to only one of the 
> target NiFi cluster nodes.
> Another common use case for S2S is to have an RPG pointed back at the same cluster 
> where the RPG was added.  Since FlowFiles transfer to the same 
> node where the data originates (think Primary node data redistribution within 
> a cluster) much faster than they transfer to other nodes, the primary node is 
> likely to always end up with more FlowFiles than any other node.
> There needs to be an additional load-balancing strategy that complements the 
> existing 0.5-second one to improve load-balancing in such cases.  The 
> RPG knows how many target nodes there are and how many FlowFiles exist in the 
> queue at run time, so perhaps using that info to split the queue more evenly 
> amongst all nodes would help.
> This is related to existing Jira: NIFI-2987





[jira] [Commented] (NIFI-2987) RPG does not do load-balancing well when getting FlowFiles from output ports

2017-03-06 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898369#comment-15898369
 ] 

Koji Kawamura commented on NIFI-2987:
-

[~msclarke] I've been working on NIFI-1202, which adds S2S batch configuration 
to the remote port. It's mostly done, as summarized in [Github PR 
1306|https://github.com/apache/nifi/pull/1306]. Do you think NIFI-1202 will 
complete this new feature request, or is there any additional enhancement 
required?

> RPG does not do load-balancing well when getting FlowFiles from output ports
> 
>
> Key: NIFI-2987
> URL: https://issues.apache.org/jira/browse/NIFI-2987
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Matthew Clarke
>Assignee: Koji Kawamura
>
> When an RPG connects to a destination system's output port, it retrieves every 
> FlowFile queued at the time of the connection.  If the source system with the 
> RPG is a NiFi cluster, only one node in the cluster receives all the 
> FlowFiles from that output port.  Even with a steady stream of FlowFiles to 
> the output port, there is still no truly balanced delivery of data.
> We need to be able to limit the number of FlowFiles per connection when the 
> source of the FlowFiles is an output port.
> When the destination system with the output port is a cluster and the source 
> system is a cluster with an RPG, the first node to connect will pull all data 
> from the node with the highest queue.  The next node will pull from the next 
> highest queued destination node, and so on. There is also no guarantee that every 
> node in the source cluster will get any data.





[jira] [Assigned] (NIFI-1202) Allow user to configure Batch Size for site-to-site

2017-03-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-1202:
---

Assignee: Koji Kawamura  (was: Mark Payne)

> Allow user to configure Batch Size for site-to-site
> ---
>
> Key: NIFI-1202
> URL: https://issues.apache.org/jira/browse/NIFI-1202
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI, Documentation & Website
>Reporter: Mark Payne
>Assignee: Koji Kawamura
> Attachments: configure-remote-port-window.png, 
> WIP-added-batch-settings-ui.png
>
>
> Currently, there is no way for a user to specify the batch size that 
> Site-to-Site will use; the framework decides this for you. However, if we 
> want to use the List/Fetch pattern, it is helpful to specify a small 
> batch size so that even a small number of listed items is still well 
> distributed across the cluster.
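The batch-limit idea behind this request can be sketched as a small limiter: a transaction stops accepting FlowFiles once any configured bound (count, bytes, or duration) is reached. The class and field names here are illustrative, not the actual NiFi implementation.

```java
// Hedged sketch of per-transaction batch limits for site-to-site.
// A small maxCount is what keeps List/Fetch output well distributed.
public class BatchLimiter {
    final int maxCount;
    final long maxBytes;
    final long maxMillis;
    final long startMillis;
    int count;
    long bytes;

    BatchLimiter(int maxCount, long maxBytes, long maxMillis, long nowMillis) {
        this.maxCount = maxCount;
        this.maxBytes = maxBytes;
        this.maxMillis = maxMillis;
        this.startMillis = nowMillis;
    }

    // Returns true if one more FlowFile of the given size fits in this batch.
    boolean offer(long flowFileBytes, long nowMillis) {
        if (count >= maxCount) return false;                 // count limit
        if (bytes + flowFileBytes > maxBytes) return false;  // size limit
        if (nowMillis - startMillis >= maxMillis) return false; // duration limit
        count++;
        bytes += flowFileBytes;
        return true;
    }

    public static void main(String[] args) {
        BatchLimiter b = new BatchLimiter(2, 1024, 500, 0);
        System.out.println(b.offer(100, 10)); // true
        System.out.println(b.offer(100, 20)); // true
        System.out.println(b.offer(100, 30)); // false: count limit of 2 reached
    }
}
```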





[jira] [Assigned] (NIFI-2987) RPG does not do load-balancing well when getting FlowFiles from output ports

2017-03-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-2987:
---

Assignee: Koji Kawamura

> RPG does not do load-balancing well when getting FlowFiles from output ports
> 
>
> Key: NIFI-2987
> URL: https://issues.apache.org/jira/browse/NIFI-2987
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Matthew Clarke
>Assignee: Koji Kawamura
>
> When an RPG connects to a destination system's output port, it retrieves every 
> FlowFile queued at the time of the connection.  If the source system with the 
> RPG is a NiFi cluster, only one node in the cluster receives all the 
> FlowFiles from that output port.  Even with a steady stream of FlowFiles to 
> the output port, there is still no truly balanced delivery of data.
> We need to be able to limit the number of FlowFiles per connection when the 
> source of the FlowFiles is an output port.
> When the destination system with the output port is a cluster and the source 
> system is a cluster with an RPG, the first node to connect will pull all data 
> from the node with the highest queue.  The next node will pull from the next 
> highest queued destination node, and so on. There is also no guarantee that every 
> node in the source cluster will get any data.





[GitHub] nifi issue #1537: Add table attributes to flow files

2017-03-06 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/1537
  
No, there doesn't seem to be a convention to guide us.  If you prefer 
`tablename`, I'm OK with it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1537: Add table attributes to flow files

2017-03-06 Thread ambud
Github user ambud commented on the issue:

https://github.com/apache/nifi/pull/1537
  
1) Agreed, I overlooked it and will add that

2) Ideally I would just do tablename (one word) without delimiters for 
simplicity of downstream processing. Let me know if there's another preferred 
convention here.




[jira] [Commented] (NIFI-3560) Improvement needed for all List Processors

2017-03-06 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898214#comment-15898214
 ] 

Joseph Witt commented on NIFI-3560:
---

You mention that all List processors need to support listing from one or more 
source systems at once.  Besides ListSFTP, which others did you have in mind?

Also, alternatively, why not add 100 ListFTP processors to the flow to pull 
data from each source specifically?  Can you describe the downside you see in 
that approach?  The upsides are that it lets you specifically control behavior for a 
given source: specific scheduling, specific credentials, and lifecycle 
(start/stop).  The downside is that it is "lots of things on the graph", but the 
alternative is "lots of things in a config file", so I'd like to more fully 
understand the concern.

Thanks


> Improvement needed for all List Processors
> --
>
> Key: NIFI-3560
> URL: https://issues.apache.org/jira/browse/NIFI-3560
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Vamsi Gummalla
>
> The List processors should be improved such that they can be run in a 
> loop. 
> Example use case: if we need to monitor 100 FTP servers, we have to create 
> 100 ListFTP processors rather than having one ListFTP processor that can take 
> the required fields as input and loop over them.





[jira] [Updated] (NIFI-3560) Improvement needed for all List Processors

2017-03-06 Thread Vamsi Gummalla (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsi Gummalla updated NIFI-3560:
-
Summary: Improvement needed for all List Processors  (was: Improve needed 
for all List Processors)

> Improvement needed for all List Processors
> --
>
> Key: NIFI-3560
> URL: https://issues.apache.org/jira/browse/NIFI-3560
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Vamsi Gummalla
>
> The List processors should be improved such that they can be run in a 
> loop. 
> Example use case: if we need to monitor 100 FTP servers, we have to create 
> 100 ListFTP processors rather than having one ListFTP processor that can take 
> the required fields as input and loop over them.





[jira] [Updated] (NIFI-3560) Improve needed for all List Processors

2017-03-06 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3560:
--
Component/s: (was: Core Framework)
 Extensions

> Improve needed for all List Processors
> --
>
> Key: NIFI-3560
> URL: https://issues.apache.org/jira/browse/NIFI-3560
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Vamsi Gummalla
>
> Current Implementation of List processor is such that they can be run in a 
> loop. 
> Example Use case:- If we need to monitor 100 FTP servers, we have to create 
> 100 ListFTP processor rather having one ListFTP processor which can get 
> required fields as input and loop it.





[jira] [Created] (NIFI-3560) Improve needed for all List Processors

2017-03-06 Thread Vamsi Gummalla (JIRA)
Vamsi Gummalla created NIFI-3560:


 Summary: Improve needed for all List Processors
 Key: NIFI-3560
 URL: https://issues.apache.org/jira/browse/NIFI-3560
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.1.1
Reporter: Vamsi Gummalla


The List processors should be improved such that they can be run in a 
loop. 

Example use case: if we need to monitor 100 FTP servers, we have to create 100 
ListFTP processors rather than having one ListFTP processor that can take the 
required fields as input and loop over them.
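The requested behavior, one listing component iterating over many server configurations instead of 100 separate processors, could be sketched as below. `ServerConfig` and `listFiles` are hypothetical names invented for illustration; they are not NiFi classes.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative-only sketch of a single lister looping over many servers.
public class MultiServerListing {
    static final class ServerConfig {
        final String host;
        final int port;
        ServerConfig(String host, int port) { this.host = host; this.port = port; }
    }

    // One listing pass per configured server; returns total entries listed.
    static int listAll(List<ServerConfig> servers) {
        int listed = 0;
        for (ServerConfig s : servers) {
            listed += listFiles(s);
        }
        return listed;
    }

    static int listFiles(ServerConfig s) {
        // Stand-in for an FTP LIST against s.host:s.port; fixed count for the sketch.
        return 1;
    }

    public static void main(String[] args) {
        List<ServerConfig> servers = Arrays.asList(
                new ServerConfig("ftp1.example.com", 21),
                new ServerConfig("ftp2.example.com", 21));
        System.out.println(listAll(servers)); // prints 2
    }
}
```

As the follow-up comment on this issue notes, the trade-off is "lots of things in a config file" versus "lots of things on the graph", since per-server scheduling and credentials are easier with separate processors.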





[jira] [Created] (NIFI-3559) Improve S2S load-balancing

2017-03-06 Thread Matthew Clarke (JIRA)
Matthew Clarke created NIFI-3559:


 Summary: Improve S2S load-balancing
 Key: NIFI-3559
 URL: https://issues.apache.org/jira/browse/NIFI-3559
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.1.1
Reporter: Matthew Clarke


The current implementation of S2S sends data continuously to the destination 
NiFi node for 0.5 seconds before closing the connection and opening a new 
connection to another node.

When the source FlowFiles are all very small (0 bytes in the case of list-based 
processors), the entire queue can end up being sent to only one of the target 
NiFi cluster nodes.

Another common use case for S2S is to have an RPG pointed back at the same cluster 
where the RPG was added.  Since FlowFiles transfer to the same 
node where the data originates (think Primary node data redistribution within a 
cluster) much faster than they transfer to other nodes, the primary node is likely 
to always end up with more FlowFiles than any other node.

There needs to be an additional load-balancing strategy that complements the 
existing 0.5-second one to improve load-balancing in such cases.  The RPG 
knows how many target nodes there are and how many FlowFiles exist in the queue 
at run time, so perhaps using that info to split the queue more evenly amongst 
all nodes would help.

This is related to existing Jira: NIFI-2987





[GitHub] nifi-minifi-cpp pull request #63: MINIFI-217: Implement namespaces.

2017-03-06 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/63

MINIFI-217: Implement namespaces. 

Updates namespaces and removes use of raw pointers for user facing API.

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFI-217-NAMESPACE

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/63.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #63


commit 1eadc0817eae277821061f3c8ae2a6fa98c337d9
Author: Marc Parisi 
Date:   2017-02-27T19:02:58Z

MINIFI-217: First commit. Updates namespaces and removes
use of raw pointers for user facing API.






[jira] [Updated] (NIFI-3483) StandardHttpResponseMapperSpec.MergeResponses fails under varying locales

2017-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3483:
-
 Assignee: Pierre Villard  (was: Jeff Storck)
Fix Version/s: 1.2.0
   Status: Patch Available  (was: Open)

> StandardHttpResponseMapperSpec.MergeResponses fails under varying locales
> -
>
> Key: NIFI-3483
> URL: https://issues.apache.org/jira/browse/NIFI-3483
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Aldrin Piri
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.2.0
>
>
> With the efforts to provide multiple locales for our Travis builds, there 
> seem to be issues with StandardHttpResponseMapperSpec.MergeResponses and 
> possibly others.  In this case, differing locales have varying inclusion of 
> commas for numbers.
> A sample Travis build report is available in full at: 
> https://api.travis-ci.org/jobs/201598074/log.txt?deansi=true
> The snippet in question is 
> {code}
> Tests run: 15, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.542 sec 
> <<< FAILURE! - in 
> org.apache.nifi.cluster.coordination.http.StandardHttpResponseMapperSpec
> MergeResponses: 3 HTTP 200 get responses for 
> nifi-api/connections/e760637d-1086-44ed-aacf-4f1580182725(org.apache.nifi.cluster.coordination.http.StandardHttpResponseMapperSpec)
>   Time elapsed: 0.177 sec  <<< FAILURE!
> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
> returnedJson == expectedJson
> ||  |
> ||  
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1,000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |false
> |1 difference (99% similarity)
> |
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1( )000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1(,)000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1 000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
>   at 
> org.apache.nifi.cluster.coordination.http.StandardHttpResponseMapperSpec.MergeResponses:
>  #responseEntities.size() HTTP 200 #httpMethod responses for 
> #requestUriPart(StandardHttpResponseMapperSpec.groovy:122)
> Running 
> org.apache.nifi.cluster.coordination.http.endpoints.StatusHistoryEndpointMergerSpec
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec - 
> in 
> org.apache.nifi.cluster.coordination.http.endpoints.StatusHistoryEndpointMergerSpec
> Running 
> org.apache.nifi.cluster.coordination.http.endpoints.TestProcessorEndpointMerger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - 
> in 
> org.apache.nifi.cluster.coordination.http.endpoints.TestProcessorEndpointMerger
> Running 
> org.apache.nifi.cluster.coordination.http.endpoints.TestStatusHistoryEndpointMerger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec - 
> in 
> org.apache.nifi.cluster.coordination.http.endpoints.TestStatusHistoryEndpointMerger
> Running org.apache.nifi.cluster.coordination.node.TestNodeClusterCoordinator
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.151 sec - 
> in org.apache.nifi.cluster.coordination.node.TestNodeClusterCoordinator
> Running 
> org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec - 
> in org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor
> Running org.apache.nifi.cluster.coordination.flow.TestPopularVoteFlowElection
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec - 
> in org.apache.nifi.cluster.coordination.flow.TestPopularVoteFlowElection
> Running org.apache.nifi.cluster.firewall.impl.FileBasedClusterNodeFirewallTest
> Tests run: 8, 

[jira] [Resolved] (NIFI-2683) Tests rely on locale

2017-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2683.
--
Resolution: Duplicate
  Assignee: Pierre Villard

Closing in favor of NIFI-3483.

> Tests rely on locale
> 
>
> Key: NIFI-2683
> URL: https://issues.apache.org/jira/browse/NIFI-2683
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
> Environment: Apache Maven 3.3.9 
> (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T17:41:47+01:00)
> Java version: 1.8.0_74, vendor: Oracle Corporation
> Default locale: fr_FR, platform encoding: Cp1252
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "dos"
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> The following test fails because of the locale language.
> {noformat}
> Failed tests:
>   StandardHttpResponseMergerSpec.MergeResponses: #responseEntities.size() 
> HTTP 200 #httpMethod responses for #requestUriPart:123 Condition not 
> satisfied:
> returnedJson == expectedJson
> ||  |
> ||  
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1,000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"
> 0 (0 bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |false
> |1 difference (99% similarity)
> |
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1( )000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0
>  (0 bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1(,)000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0
>  (0 bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1 000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","fl
> owFilesQueued":0,"bytesQueued":0,"q
> {noformat}
> An easy option is to have a little change in the main pom.xml:
> {noformat}
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>2.18</version>
>     <configuration>
>         <includes>
>             <include>**/*Test.class</include>
>             <include>**/Test*.class</include>
>             <include>**/*Spec.class</include>
>         </includes>
>         <redirectTestOutputToFile>true</redirectTestOutputToFile>
>         <argLine>-Xmx1G 
> -Djava.net.preferIPv4Stack=true -Duser.language=en -Duser.region=US</argLine>
>     </configuration>
>     <dependencies>
>         <dependency>
>             <groupId>org.apache.maven.surefire</groupId>
>             <artifactId>surefire-junit4</artifactId>
>             <version>2.18</version>
>         </dependency>
>     </dependencies>
> </plugin>
> {noformat}
> otherwise the test needs to be changed.
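The locale dependence that breaks these tests is easy to reproduce with `java.text.NumberFormat`: the grouping separator for 1000 is a comma under en_US but a space-like character under fr_FR, which is exactly the "1,000 bytes" vs "1 000 bytes" diff in the failure output above.

```java
import java.text.NumberFormat;
import java.util.Locale;

// Demonstrates why assertions on formatted numbers depend on the JVM locale.
public class LocaleGrouping {
    static String format(long n, Locale locale) {
        return NumberFormat.getInstance(locale).format(n);
    }

    public static void main(String[] args) {
        System.out.println(format(1000, Locale.US));     // 1,000
        // France uses a space-like grouping separator, so this differs from en_US.
        System.out.println(format(1000, Locale.FRANCE));
    }
}
```

Forcing `-Duser.language=en -Duser.region=US` in the surefire argLine (as in the pom snippet above) pins the first form; the alternative is to build the expected strings with the same `NumberFormat` the code under test uses.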





[jira] [Commented] (NIFI-3483) StandardHttpResponseMapperSpec.MergeResponses fails under varying locales

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897918#comment-15897918
 ] 

ASF GitHub Bot commented on NIFI-3483:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/1566

NIFI-3483 - fixed one unit test relying on locale

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-3483

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1566.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1566


commit f41385e6a1ae1aebf12d8055660053007b535bd7
Author: Pierre Villard 
Date:   2017-03-06T19:46:40Z

NIFI-3483 - fixed one unit test relying on locale




> StandardHttpResponseMapperSpec.MergeResponses fails under varying locales
> -
>
> Key: NIFI-3483
> URL: https://issues.apache.org/jira/browse/NIFI-3483
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Aldrin Piri
>Assignee: Jeff Storck
>Priority: Minor
>
> With the efforts to provide multiple locales for our Travis builds, there 
> seem to be issues with StandardHttpResponseMapperSpec.MergeResponses and 
> possibly others.  In this case, differing locales have varying inclusion of 
> commas for numbers.
> A sample Travis build report is available in full at: 
> https://api.travis-ci.org/jobs/201598074/log.txt?deansi=true
> The snippet in question is 
> {code}
> Tests run: 15, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.542 sec 
> <<< FAILURE! - in 
> org.apache.nifi.cluster.coordination.http.StandardHttpResponseMapperSpec
> MergeResponses: 3 HTTP 200 get responses for 
> nifi-api/connections/e760637d-1086-44ed-aacf-4f1580182725(org.apache.nifi.cluster.coordination.http.StandardHttpResponseMapperSpec)
>   Time elapsed: 0.177 sec  <<< FAILURE!
> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
> returnedJson == expectedJson
> ||  |
> ||  
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1,000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |false
> |1 difference (99% similarity)
> |
> {"id":"1","permissions":{"canRead":false,"canWrite":false},"status":{"aggregateSnapshot":{"flowFilesIn":0,"bytesIn":1000,"input":"0
>  (1( )000 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 
> bytes)","flowFilesQueued":0,"bytesQueued":0,"queued":"0 (0 
> bytes)","queuedSize":"0 bytes","queuedCount":"0"}}}
> |
> 

[GitHub] nifi pull request #1566: NIFI-3483 - fixed one unit test relying on locale

2017-03-06 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/1566

NIFI-3483 - fixed one unit test relying on locale

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-3483

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1566.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1566


commit f41385e6a1ae1aebf12d8055660053007b535bd7
Author: Pierre Villard 
Date:   2017-03-06T19:46:40Z

NIFI-3483 - fixed one unit test relying on locale




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3520) HDFS processors experiencing Kerberos "impersonate" errors

2017-03-06 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3520:
--
Status: Patch Available  (was: Open)

> HDFS processors experiencing Kerberos "impersonate" errors 
> ---
>
> Key: NIFI-3520
> URL: https://issues.apache.org/jira/browse/NIFI-3520
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.1, 1.1.1, 1.1.0, 1.0.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> When multiple Kerberos principals are used between multiple HDFS processors, 
> the processor instances will be able to login to Kerberos with their 
> configured principals initially, but will not properly relogin.  
> For example, if there are two PutHDFS processors, one configured as 
> us...@example.com, and the other as us...@example.com, they will both login 
> with the KDC correctly and be able to transfer files to HDFS.  Once one of 
> the PutHDFS processors attempts to relogin, it may end up being logged in as 
> the principal from the other PutHDFS processor.  The principal contexts end 
> up getting switched, and the hadoop client used by the processor will attempt 
> to proxy requests from one user through another, resulting in the following 
> exception:
> {panel}Failed to write to HDFS due to 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: us...@example.com is not allowed to impersonate 
> us...@example.com{panel}
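The principal switching described above comes down to JVM-wide static login state being shared by processor instances that were configured with different principals. The sketch below is illustrative only (not NiFi or Hadoop code, and the class and method names are hypothetical): a static field standing in for Hadoop's cached login user shows how one processor's relogin clobbers the other's identity.

```java
// Illustrative sketch of shared static login state. StaticLoginState.loginUser
// stands in for the cached login user held in a static field (as in Hadoop's
// UserGroupInformation when two processors share one classloader/NAR).
class StaticLoginState {
    static String loginUser;
}

class HdfsProcessorSketch {
    private final String principal;

    HdfsProcessorSketch(String principal) {
        this.principal = principal;
    }

    void login() {
        // Each login overwrites the single shared slot.
        StaticLoginState.loginUser = principal;
    }

    String currentUser() {
        return StaticLoginState.loginUser;
    }
}

public class PrincipalSwitchDemo {
    public static void main(String[] args) {
        HdfsProcessorSketch a = new HdfsProcessorSketch("user1@EXAMPLE.COM");
        HdfsProcessorSketch b = new HdfsProcessorSketch("user2@EXAMPLE.COM");
        a.login();
        b.login(); // b's (re)login switches the shared state
        // Processor "a" now acts as b's principal -> the "impersonate" error.
        System.out.println(a.currentUser());
    }
}
```

Isolating the hadoop-client dependency per NAR (as the fix describes) gives each processor bundle its own copy of that static state.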



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1537: Add table attributes to flow files

2017-03-06 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/1537
  
Thanks for the update @ambud, the tests are passing.  I have two other 
comments:

1.) Similar attributes, like RESULT_ROW_COUNT, are defined as static final 
member variables so they can be referenced consistently (mostly in tests).  I 
recommend we do the same for `table.name`.

2.) I looked for an established attribute naming convention that would 
apply to `table.name`.  
* The ListDatabaseTables processor uses `db.table.name`, but it has a 
consistent `db.*` pattern for all of its attributes that QueryDatabaseTable 
does not.  
* QueryDatabaseTable has `querydbtable.row.count`, which might suggest a 
`querydbtable.*` pattern like `querydbtable.table.name`, except that there is 
only one such attribute (the fragment attributes are idiomatic and might not 
apply, `maxvalue.*` is the only other existing attribute written).  

Having done this research, I'm now more confused than when I started :), so 
I don't have the concrete recommendation I was hoping to arrive at.  Have you 
looked into this, and how did you arrive at `table.name`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-3558) Visual Status and Troubleshooting Tool

2017-03-06 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-3558:
---

 Summary: Visual Status and Troubleshooting Tool
 Key: NIFI-3558
 URL: https://issues.apache.org/jira/browse/NIFI-3558
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework, Core UI
Reporter: Andy LoPresto


There has been intermittent discussion on the mailing list of a visual status 
dashboard and troubleshooting tool to bring the institutional knowledge of the 
entire community closer to the end user. The mailing lists frequently receive 
the same classes of question (thread starvation, open file handles, heap size, 
etc.) and the users have to wait for another community member to ask a series 
of questions and manually diagnose the issue. Few users comprehensively read 
the Admin Guide or Best Practices documents, and quickly identifying possible 
issues that are easily fixed will improve the user experience and reduce the 
unnecessary strain on the mailing lists. 

Use this ticket to collect suggested issue/configuration identification, 
messaging, suggested solutions, and thoughts about how to display this 
information in a digestible manner to the users. 

Example from Peter Wicks:

{quote}
In my case I had not seen the issue until I added 7 new 
QueryDatabaseProcessors. All seven of them kicked off against the same SQL 
database on restart and took 10 to 15 minutes to come back.  During that time 
my default 10 threads were running with only 3 to spare, which were being shared 
across a lot of other jobs.  I bumped it up considerably and have not had 
issues since then.
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-1682) Processor to do Rolling Window calculations using FlowFile attributes

2017-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-1682:
-
Component/s: Extensions

> Processor to do Rolling Window calculations using FlowFile attributes
> -
>
> Key: NIFI-1682
> URL: https://issues.apache.org/jira/browse/NIFI-1682
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
> Fix For: 1.2.0
>
>
> Using state it is now possible to store a map of key value pairs up to 1mb. 
> Taking into account storing a timestamp string and a double converted to a 
> string this is on the order of 5000 values. This enables a processor that can 
> store a rolling window of values to calculate things such as a rolling mean.
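The description's arithmetic works out to roughly 200 bytes per stored timestamp/value pair within the 1 MB state limit, hence "on the order of 5000 values". A minimal sketch of that rolling-window idea (this is not the actual processor implementation, just the technique it describes): keep timestamped values, evict entries older than the window, and compute a rolling mean over what remains.

```java
import java.util.ArrayDeque;

// Sketch of a time-based rolling window for computing a rolling mean.
// Each entry packs {timestampMillis, Double bits} into a long[2].
public class RollingMeanSketch {
    private final ArrayDeque<long[]> window = new ArrayDeque<>();
    private final long windowMillis;

    public RollingMeanSketch(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    public void add(long timestampMillis, double value) {
        window.addLast(new long[]{timestampMillis, Double.doubleToLongBits(value)});
        // Evict values that have aged out of the window.
        while (!window.isEmpty()
                && window.peekFirst()[0] < timestampMillis - windowMillis) {
            window.removeFirst();
        }
    }

    public double mean() {
        if (window.isEmpty()) {
            return Double.NaN;
        }
        double sum = 0;
        for (long[] entry : window) {
            sum += Double.longBitsToDouble(entry[1]);
        }
        return sum / window.size();
    }
}
```

With a 5-second window, values added at t=0s, 1s, 2s average to 2.0; a value added at t=10s evicts all three and the mean becomes that value alone.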



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-1682) Processor to do Rolling Window calculations using FlowFile attributes

2017-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-1682:
-
   Resolution: Fixed
Fix Version/s: 1.2.0
   Status: Resolved  (was: Patch Available)

> Processor to do Rolling Window calculations using FlowFile attributes
> -
>
> Key: NIFI-1682
> URL: https://issues.apache.org/jira/browse/NIFI-1682
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
> Fix For: 1.2.0
>
>
> Using state it is now possible to store a map of key value pairs up to 1mb. 
> Taking into account storing a timestamp string and a double converted to a 
> string this is on the order of 5000 values. This enables a processor that can 
> store a rolling window of values to calculate things such as a rolling mean.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-2987) RPG does not do load-balancing well when getting FlowFiles from output ports

2017-03-06 Thread Matthew Clarke (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897821#comment-15897821
 ] 

Matthew Clarke commented on NIFI-2987:
--

Same condition occurs when the RPG is used to push FlowFiles to a target NiFi 
cluster.   Consider the following common flow:

SplitText  ---> RPG (pointing at destination NiFi cluster) 

The SplitText produces all its splits at the same time (let's assume 10,000 
splits produced).  All 10,000 resulting FlowFiles end up on only one node of 
the target NiFi rather than being load-balanced amongst all nodes.  Of course the 
next batch of split FlowFiles will go to a different node but that may be 
infrequent.  So the result is that one downstream node gets hammered each time 
the SplitText runs.

The RPG should expose a property that allows the user to decide the max number 
of FlowFiles per transferred batch.
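To see why a batch-size cap helps, here is a hypothetical back-of-the-envelope model (not NiFi code; the function and its rotation policy are assumptions for illustration): if batches rotate across nodes, capping the batch size converts one 10,000-FlowFile burst to a single node into an even spread.

```java
import java.util.Arrays;

// Hypothetical model of the suggested behavior: send FlowFiles in batches of
// at most maxBatch, rotating the destination node after each batch.
public class BatchRotationSketch {
    // Returns how many FlowFiles each of 'nodes' nodes receives.
    public static int[] distribute(int total, int maxBatch, int nodes) {
        int[] perNode = new int[nodes];
        int node = 0;
        for (int sent = 0; sent < total; sent += maxBatch) {
            perNode[node] += Math.min(maxBatch, total - sent);
            node = (node + 1) % nodes; // next batch -> next node
        }
        return perNode;
    }

    public static void main(String[] args) {
        // Uncapped: the whole burst lands on one node (the reported symptom).
        System.out.println(Arrays.toString(distribute(10000, Integer.MAX_VALUE, 5)));
        // Capped at 1000 per batch: the burst spreads evenly.
        System.out.println(Arrays.toString(distribute(10000, 1000, 5)));
    }
}
```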

> RPG does not do load-balancing well when getting FlowFiles from output ports
> 
>
> Key: NIFI-2987
> URL: https://issues.apache.org/jira/browse/NIFI-2987
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Matthew Clarke
>
> When a RPG connects to a destination system's output port, it retrieves every 
> FlowFile queued at the time of the connection.  If the source system with the 
> RPG is a NiFi cluster, only one node in the cluster receives all the 
> FlowFiles from that output port.  If there is a steady stream of FlowFiles to 
> the output port, there is still no true balanced delivery of data.
> We need to be able to limit the number of FlowFiles per connection when the 
> source of the FlowFiles is an output port.
> When the destination system with the output port is a cluster and the source 
> system is a cluster with a RPG, the first node to connect will pull all data 
> from the node with the highest queue.  Next node will pull from next highest 
> queued destination node and so on. There is also no guarantee that every node 
> in the source cluster will get any data.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-1682) Processor to do Rolling Window calculations using FlowFile attributes

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897804#comment-15897804
 ] 

ASF GitHub Bot commented on NIFI-1682:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1328


> Processor to do Rolling Window calculations using FlowFile attributes
> -
>
> Key: NIFI-1682
> URL: https://issues.apache.org/jira/browse/NIFI-1682
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
>
> Using state it is now possible to store a map of key value pairs up to 1mb. 
> Taking into account storing a timestamp string and a double converted to a 
> string this is on the order of 5000 values. This enables a processor that can 
> store a rolling window of values to calculate things such as a rolling mean.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1328: NIFI-1682 Adding RollingWindowOperation processor

2017-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1328


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1682) Processor to do Rolling Window calculations using FlowFile attributes

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897802#comment-15897802
 ] 

ASF subversion and git services commented on NIFI-1682:
---

Commit b7f946e8472271597db165beec69e50ae139b420 in nifi's branch 
refs/heads/master from jpercivall
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b7f946e ]

NIFI-1682 Adding AttributeRollingWindow processor

Signed-off-by: Pierre Villard 

This closes #1328.


> Processor to do Rolling Window calculations using FlowFile attributes
> -
>
> Key: NIFI-1682
> URL: https://issues.apache.org/jira/browse/NIFI-1682
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
>
> Using state it is now possible to store a map of key value pairs up to 1mb. 
> Taking into account storing a timestamp string and a double converted to a 
> string this is on the order of 5000 values. This enables a processor that can 
> store a rolling window of values to calculate things such as a rolling mean.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-1682) Processor to do Rolling Window calculations using FlowFile attributes

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897800#comment-15897800
 ] 

ASF GitHub Bot commented on NIFI-1682:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1328
  
I'm a , will merge @JPercivall 


> Processor to do Rolling Window calculations using FlowFile attributes
> -
>
> Key: NIFI-1682
> URL: https://issues.apache.org/jira/browse/NIFI-1682
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
>
> Using state it is now possible to store a map of key value pairs up to 1mb. 
> Taking into account storing a timestamp string and a double converted to a 
> string this is on the order of 5000 values. This enables a processor that can 
> store a rolling window of values to calculate things such as a rolling mean.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1328: NIFI-1682 Adding RollingWindowOperation processor

2017-03-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1328
  
I'm a 👍, will merge @JPercivall 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-385) Add Kerberos support in nifi-kite-nar

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897776#comment-15897776
 ] 

ASF GitHub Bot commented on NIFI-385:
-

Github user WilliamNouet commented on the issue:

https://github.com/apache/nifi/pull/1565
  
From @jtstorck:
FYI, some issues have been noticed with Hadoop and Kerberos in the NiFi 
HDFS processors. There's a JIRA that I'm working on to fix a scenario where 
Kerberos principals end up getting switched between multiple processors. It's 
due to the UserGroupInformation class being shared by two instances of 
processors from the same NAR. The fix involves not depending on 
nifi-hadoop-libraries-nar, and including the hadoop-client dependency directly 
in the nifi-hdfs-processors pom.

Once the Kite processors support Kerberos, they'll most likely have the 
same issue, and a similar change will have to be made to the POMs.

For reference, the JIRA is: https://issues.apache.org/jira/browse/NIFI-3520


> Add Kerberos support in nifi-kite-nar
> -
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ryan Blue
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. 
> Kite's Flume connector has working code. The Kite dataset needs to be 
> instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} 
> object will hold the credentials after that.
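The {{doPrivileged}} pattern the ticket refers to can be sketched as follows. This is only a minimal illustration of the Java mechanism, not the Kite integration itself; `openDataset` and its return value are hypothetical stand-ins for the Kerberos-sensitive work.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class DoPrivilegedSketch {
    // Hypothetical stand-in for instantiating a Kite dataset: the sensitive
    // work happens inside the privileged action, which runs with this class's
    // own protection domain regardless of less-privileged callers on the stack.
    static String openDataset() {
        return AccessController.doPrivileged((PrivilegedAction<String>) () -> {
            // Real code would open the dataset here, after which its internal
            // FileSystem object holds the credentials.
            return "dataset-opened";
        });
    }

    public static void main(String[] args) {
        System.out.println(openDataset());
    }
}
```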



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-385) Add Kerberos support in nifi-kite-nar

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897778#comment-15897778
 ] 

ASF GitHub Bot commented on NIFI-385:
-

Github user WilliamNouet commented on the issue:

https://github.com/apache/nifi/pull/1528
  
Closing this PR and opening a cleaner one at 
https://github.com/apache/nifi/pull/1565


> Add Kerberos support in nifi-kite-nar
> -
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ryan Blue
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. 
> Kite's Flume connector has working code. The Kite dataset needs to be 
> instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} 
> object will hold the credentials after that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-385) Add Kerberos support in nifi-kite-nar

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897775#comment-15897775
 ] 

ASF GitHub Bot commented on NIFI-385:
-

GitHub user WilliamNouet opened a pull request:

https://github.com/apache/nifi/pull/1565

NIFI-385 Add Kerberos Support to Kite

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

For all changes:

[Y] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?

[Y] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

[Y] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

[Y] Is your initial contribution a single, squashed commit?

For code changes:

[Y] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
[Y] Have you written or updated unit tests to verify your changes?
[N] If adding new dependencies to the code, are these dependencies licensed 
in a way that is compatible for inclusion under ASF 2.0?
[N/A] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
[N/A] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
[N/A] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
For documentation related changes:

[N/A] Have you ensured that format looks appropriate for the output in 
which it is rendered?
Note:

Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WilliamNouet/nifi NIFI-385-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1565.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1565


commit 8434d54b1fda626118090d1d206f2c62f2f39e85
Author: WilliamNouet 
Date:   2017-03-06T18:21:48Z

Add Kerberos Support to Kite




> Add Kerberos support in nifi-kite-nar
> -
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ryan Blue
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. 
> Kite's Flume connector has working code. The Kite dataset needs to be 
> instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} 
> object will hold the credentials after that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1565: NIFI-385 Add Kerberos Support to Kite

2017-03-06 Thread WilliamNouet
GitHub user WilliamNouet opened a pull request:

https://github.com/apache/nifi/pull/1565

NIFI-385 Add Kerberos Support to Kite

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

For all changes:

[Y] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?

[Y] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

[Y] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

[Y] Is your initial contribution a single, squashed commit?

For code changes:

[Y] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
[Y] Have you written or updated unit tests to verify your changes?
[N] If adding new dependencies to the code, are these dependencies licensed 
in a way that is compatible for inclusion under ASF 2.0?
[N/A] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
[N/A] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
[N/A] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
For documentation related changes:

[N/A] Have you ensured that format looks appropriate for the output in 
which it is rendered?
Note:

Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WilliamNouet/nifi NIFI-385-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1565.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1565


commit 8434d54b1fda626118090d1d206f2c62f2f39e85
Author: WilliamNouet 
Date:   2017-03-06T18:21:48Z

Add Kerberos Support to Kite




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1528: NIFI-385 Add Kerberos support in nifi-kite-nar

2017-03-06 Thread WilliamNouet
Github user WilliamNouet commented on the issue:

https://github.com/apache/nifi/pull/1528
  
Closing this PR and opening a cleaner one at 
https://github.com/apache/nifi/pull/1565


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1565: NIFI-385 Add Kerberos Support to Kite

2017-03-06 Thread WilliamNouet
Github user WilliamNouet commented on the issue:

https://github.com/apache/nifi/pull/1565
  
From @jtstorck:
FYI, some issues have been noticed with Hadoop and Kerberos in the NiFi 
HDFS processors. There's a JIRA that I'm working on to fix a scenario where 
Kerberos principals end up getting switched between multiple processors. It's 
due to the UserGroupInformation class being shared by two instances of 
processors from the same NAR. The fix involves not depending on 
nifi-hadoop-libraries-nar, and including the hadoop-client dependency directly 
in the nifi-hdfs-processors pom.

Once the Kite processors support Kerberos, they'll most likely have the 
same issue, and a similar change will have to be made to the POMs.

For reference, the JIRA is: https://issues.apache.org/jira/browse/NIFI-3520


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1528: NIFI-385 Add Kerberos support in nifi-kite-nar

2017-03-06 Thread WilliamNouet
Github user WilliamNouet closed the pull request at:

https://github.com/apache/nifi/pull/1528


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-385) Add Kerberos support in nifi-kite-nar

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897759#comment-15897759
 ] 

ASF GitHub Bot commented on NIFI-385:
-

Github user WilliamNouet closed the pull request at:

https://github.com/apache/nifi/pull/1528


> Add Kerberos support in nifi-kite-nar
> -
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ryan Blue
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. 
> Kite's Flume connector has working code. The Kite dataset needs to be 
> instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} 
> object will hold the credentials after that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1450: NIFI-3339b Add getDataSource() to DBCPService, seco...

2017-03-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1450#discussion_r104459486
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/pom.xml
 ---
@@ -81,5 +81,17 @@
 hamcrest-all
 1.3
 
+
+org.springframework
--- End diff --

Do we need to bring in Spring JDBC for testing? Seems like we should be 
able to test this functionality without the added dependency?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3541) Site to Site Client should allow indication of local binding address

2017-03-06 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated NIFI-3541:
--
Component/s: Core Framework

> Site to Site Client should allow indication of local binding address
> 
>
> Key: NIFI-3541
> URL: https://issues.apache.org/jira/browse/NIFI-3541
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> For systems with multiple nics to choose from we need a way to tell the 
> operating system from our site to site client which local address (nic) we 
> want to bind to.  The java socket class makes this possible but we do not 
> make this available via site to site yet.
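The `java.net.Socket` capability the ticket mentions is binding an unconnected client socket to a chosen local address before connecting. A minimal sketch (loopback is used only so the example runs anywhere; a site-to-site client would supply the address of the desired NIC):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class LocalBindSketch {
    // Bind a client socket to a specific local address (NIC) and return the
    // address it actually bound to. Port 0 requests an ephemeral port.
    static InetAddress bindToLocal(InetAddress local) throws IOException {
        try (Socket socket = new Socket()) {
            socket.bind(new InetSocketAddress(local, 0));
            // A site-to-site client would continue with:
            // socket.connect(new InetSocketAddress(remoteHost, remotePort));
            return socket.getLocalAddress();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(bindToLocal(InetAddress.getLoopbackAddress()));
    }
}
```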



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3541) Site to Site Client should allow indication of local binding address

2017-03-06 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated NIFI-3541:
--
Fix Version/s: 1.2.0

> Site to Site Client should allow indication of local binding address
> 
>
> Key: NIFI-3541
> URL: https://issues.apache.org/jira/browse/NIFI-3541
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> For systems with multiple nics to choose from we need a way to tell the 
> operating system from our site to site client which local address (nic) we 
> want to bind to.  The java socket class makes this possible but we do not 
> make this available via site to site yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3541) Site to Site Client should allow indication of local binding address

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897573#comment-15897573
 ] 

ASF subversion and git services commented on NIFI-3541:
---

Commit 16bde02ed079150c1fbf80806537137613ae2d10 in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=16bde02 ]

NIFI-3541: - Allowing the user to specify the network interface to send/receive 
data for a Remote Process Group.

This closes #1550.

Signed-off-by: Mark Payne 
Signed-off-by: Aldrin Piri 


> Site to Site Client should allow indication of local binding address
> 
>
> Key: NIFI-3541
> URL: https://issues.apache.org/jira/browse/NIFI-3541
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Witt
>Assignee: Mark Payne
>
> For systems with multiple nics to choose from we need a way to tell the 
> operating system from our site to site client which local address (nic) we 
> want to bind to.  The java socket class makes this possible but we do not 
> make this available via site to site yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3541) Site to Site Client should allow indication of local binding address

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897572#comment-15897572
 ] 

ASF subversion and git services commented on NIFI-3541:
---

Commit 9e68f02f1fb7eb442f6c9580b46255f713d8b191 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9e68f02 ]

NIFI-3541: Add local network interface capability to site-to-site client and 
remote group and ports


> Site to Site Client should allow indication of local binding address
> 
>
> Key: NIFI-3541
> URL: https://issues.apache.org/jira/browse/NIFI-3541
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Witt
>Assignee: Mark Payne
>
> For systems with multiple nics to choose from we need a way to tell the 
> operating system from our site to site client which local address (nic) we 
> want to bind to.  The java socket class makes this possible but we do not 
> make this available via site to site yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1550: Nifi 3541

2017-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1550


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1550: Nifi 3541

2017-03-06 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1550
  
@apiri that's a good call. I just pushed a new commit that updates the User 
Guide, where these properties are explained. Thanks!




[jira] [Created] (NIFI-3557) Make DatabaseAdapter interface available to extensions

2017-03-06 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-3557:
--

 Summary: Make DatabaseAdapter interface available to extensions
 Key: NIFI-3557
 URL: https://issues.apache.org/jira/browse/NIFI-3557
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Matt Burgess


The DatabaseAdapter interface, used to generate SQL queries and perform other 
database-specific operations, is only available to processors in the 
nifi-standard-bundle. However, users have been asking whether their custom 
processors can leverage this interface.

I propose either a new nifi-db-adapter-api module (and corresponding JAR), or 
moving DatabaseAdapter into the nifi-dbcp-service-api module. I realize the 
latter implies it only corresponds to the DBCPService controller service 
interface, but the two are very often used together, and I don't think it would 
be confusing or difficult to find (arguably less so than its current location).
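A minimal sketch of the kind of adapter abstraction being discussed; the interface and method names below are invented for illustration and are not necessarily NiFi's actual DatabaseAdapter API.

```java
// Hypothetical database-adapter abstraction (illustrative, not NiFi code):
// implementations encapsulate vendor-specific SQL generation, e.g. the
// differing LIMIT/OFFSET syntax across databases.
interface DatabaseAdapterSketch {
    String getName();

    // Generate a SELECT statement for this vendor's dialect.
    String getSelectStatement(String table, String columns, Long limit);
}

class GenericAdapter implements DatabaseAdapterSketch {
    public String getName() {
        return "Generic";
    }

    public String getSelectStatement(String table, String columns, Long limit) {
        StringBuilder sql = new StringBuilder("SELECT ").append(columns)
                .append(" FROM ").append(table);
        if (limit != null) {
            sql.append(" LIMIT ").append(limit);
        }
        return sql.toString();
    }
}
```

Placing such an interface in a shared API module is what would let custom processor bundles depend on it without pulling in the standard bundle.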



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi-minifi-cpp pull request #62: MINIFI-231: Add Flow Persistent, Using id ...

2017-03-06 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/62#discussion_r104433421
  
--- Diff: libminifi/include/Connection.h ---
@@ -180,7 +184,8 @@ class Connection
std::atomic _maxQueueDataSize;
//! Flow File Expiration Duration in MilliSeconds
std::atomic _expiredDuration;
-
+   //! UUID string
+   std::string _uuidStr;
--- End diff --

Again this isn't a big deal but will be changed. 




[GitHub] nifi-minifi-cpp pull request #62: MINIFI-231: Add Flow Persistent, Using id ...

2017-03-06 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/62#discussion_r104433184
  
--- Diff: libminifi/src/Repository.cpp ---
@@ -0,0 +1,140 @@
+/**
+ * @file Repository.cpp
+ * Repository implementation 
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+#include 
+#include 
+#include "io/DataStream.h"
+#include "io/Serializable.h"
+#include "Relationship.h"
+#include "Logger.h"
+#include "FlowController.h"
+#include "Repository.h"
+#include "Provenance.h"
+#include "FlowFileRepository.h"
+
+const char *Repository::RepositoryTypeStr[MAX_REPO_TYPE] = {"Provenance 
Repository", "FlowFile Repository"};
+uint64_t Repository::_repoSize[MAX_REPO_TYPE] = {0, 0}; 
+
+void Repository::start() {
+   if (!_enable)
+   return;
+   if (this->_purgePeriod <= 0)
+   return;
+   if (_running)
+   return;
+   _running = true;
+   logger_->log_info("%s Repository Monitor Thread Start", 
RepositoryTypeStr[_type]);
+   _thread = new std::thread(run, this);
+   _thread->detach();
+}
+
+void Repository::stop() {
+   if (!_running)
+   return;
+   _running = false;
+   logger_->log_info("%s Repository Monitor Thread Stop", 
RepositoryTypeStr[_type]);
+}
+
+void Repository::run(Repository *repo) {
+   // threshold for purge
--- End diff --

I think a comment should be placed to reflect this. 




[GitHub] nifi-minifi-cpp pull request #62: MINIFI-231: Add Flow Persistent, Using id ...

2017-03-06 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/62#discussion_r104432956
  
--- Diff: libminifi/include/FlowFileRepository.h ---
@@ -0,0 +1,208 @@
+/**
+ * @file FlowFileRepository 
+ * Flow file repository class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __FLOWFILE_REPOSITORY_H__
+#define __FLOWFILE_REPOSITORY_H__
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "Configure.h"
+#include "Connection.h"
+#include "FlowFileRecord.h"
+#include "Logger.h"
+#include "Property.h"
+#include "ResourceClaim.h"
+#include "io/Serializable.h"
+#include "utils/TimeUtil.h"
+#include "Repository.h"
+
+class FlowFileRepository;
+
+//! FlowFile Event Record
+class FlowFileEventRecord : protected Serializable
+{
+public:
+   friend class ProcessSession;
+public:
+   //! Constructor
+   /*!
+* Create a new provenance event record
+*/
+   FlowFileEventRecord()
+   : _entryDate(0), _lineageStartDate(0), _size(0), _offset(0)  
+   {
+   _eventTime = getTimeMillis();
+   logger_ = Logger::getLogger();
+   }
+
+   //! Destructor
+   virtual ~FlowFileEventRecord() {
+   }
+   //! Get Attributes
+   std::map getAttributes() {
--- End diff --

What do you mean by "somehow"? Can you explain a path? Why would you be worried 
about an event record being deleted? If that is the case we have bigger 
problems. Perhaps we should discuss this. 




[GitHub] nifi-minifi-cpp pull request #62: MINIFI-231: Add Flow Persistent, Using id ...

2017-03-06 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/62#discussion_r104432714
  
--- Diff: libminifi/include/Connection.h ---
@@ -180,7 +184,8 @@ class Connection
std::atomic _maxQueueDataSize;
//! Flow File Expiration Duration in MilliSeconds
std::atomic _expiredDuration;
-
+   //! UUID string
+   std::string _uuidStr;
--- End diff --

We're making a transition to the Google code style. I don't think 
consistency within a class is a concern right now since we're in such flux. Doing 
that will help me avoid having to redo new members. 




[jira] [Updated] (NIFI-3147) Build processor to parse CCDA into attributes and JSON

2017-03-06 Thread Kedar Chitale (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kedar Chitale updated NIFI-3147:

Labels: attributes ccda healthcare parser  (was: attributes ccda healthcare 
json parser)

> Build processor to parse CCDA into attributes and JSON
> --
>
> Key: NIFI-3147
> URL: https://issues.apache.org/jira/browse/NIFI-3147
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Kedar Chitale
>Assignee: Joseph Witt
>  Labels: attributes, ccda, healthcare, parser
> Fix For: 1.2.0
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Accept a CCDA document and parse it to create individual attributes, 
> for example code.codeSystemName=LOINC





[jira] [Updated] (NIFI-3147) Build processor to parse CCDA into attributes and JSON

2017-03-06 Thread Kedar Chitale (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kedar Chitale updated NIFI-3147:

Description: Accept a CCDA document, Parse the document to create 
individual attributes for example code.codeSystemName=LOINC  (was: Accept a 
CCDA document, Parse the document to create JSON text and individual attributes 
for example code.codeSystemName=LOINC)






[jira] [Commented] (NIFI-3545) Let M FlowFiles pass through once N signals arrive

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897402#comment-15897402
 ] 

ASF GitHub Bot commented on NIFI-3545:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1554


> Let M FlowFiles pass through once N signals arrive
> ---
>
> Key: NIFI-3545
> URL: https://issues.apache.org/jira/browse/NIFI-3545
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> If the Wait processor can:
> "Let M flow files pass through once N notify signals have arrived for key K"
> we can support a wider variety of use cases. Currently, it only supports 
> "Let 1 flow file pass through once N notify signals have arrived for key K"
> h3. How it works: a simulation
> For example, let's say there are 50 incoming flow files at the beginning, f1 
> to f50.
> N=3, M=100
> This can be read as "the Wait processor is allowed to convert 3 signals into 
> 100 pass tickets."
> 1. There's no signal for K; all flow files are waiting
> 2. Notify sends a signal. K( N=1 ) doesn't meet the Wait condition, so the Wait 
> processor is still waiting
> 3. Notify sends another two signals. Now K( N=3 ) matches the Wait condition
> 4. The Wait processor starts consuming flow files f1 to f50 and updates K( N=3, 
> M=50 ), where M denotes the remaining number of flow files that can go through
> 5. Another 30 flow files arrive; the Wait processor consumes f51 to f80 and 
> updates K( N=0, M=20 )
> 6. Another 30 flow files arrive; the Wait processor consumes f81 to f100. K is 
> now K( N=0, M=0 ). Since all of N and M are used, the Wait processor removes K.  
> f101 to f110 are waiting for signals, the same state as #1.
> h4. Alternative path after 6
> 7a. If Notify sends additional signals, then f101 to f110 can go through
> 7b. If Notify doesn't send any more signals, then f101 to f110 will be routed 
> to expired
> h4. Alternative path after 5
> 6a. If Notify sends an additional signal at this point, K would be K( N=1, 
> M=20 ). The Wait processor can process 20 flow files because it still has M=20.
> 6b. If Notify sends three additional signals, K would be K( N=3, M=20 ). The Wait 
> processor consumes 20 flow files, and when the 21st flow file comes, it 
> immediately converts N to M, consuming N(3) to create M(100) passes, leaving 
> K( N=0, M=100 )
> Additionally, we can let the user configure M=0, meaning Wait can release any 
> number of incoming flow files as long as N meets the condition.
> With this, Notify +1 can behave as if it opens a GATE, and Notify -1 will 
> close it.
> h4. Another possible use case: limiting the data flow rate cluster-wide
> This is more complex than just supporting a GATE open/close state. However, if we 
> support M flow files going through, it can also provide a rate limit across the 
> cluster.
> Example use case: NiFi A pushes data via S2S to NiFi B and wants to limit 
> throughput to 100 flow files per 5 minutes.
> On NiFi A:
> Notify part of the flow: GenerateFlowFile(5 min, on primary) -> Notify(K, N=+1)
> Wait part of the flow: Some ingested data -> Wait(K, N=1, M=100)
> Since Wait/Notify state is managed globally via the DistributedCache, we can 
> limit throughput cluster-wide.
> If a use case requires limiting the rate exactly, the Notify part can be 
> designed as:
> GenerateFlowFile(5 min, on primary) -> Notify(K, N=0) -> Notify(K, N=+1)
> This prevents N from accumulating when there is no traffic.
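The N/M accounting in the simulation above can be modeled with a small counter class. This is an illustrative sketch, not NiFi code; all names are invented.

```java
// Toy model of the Wait/Notify accounting described above (not NiFi code):
// N accumulates notify signals for key K; once N reaches the threshold it is
// converted into M pass tickets, and each released FlowFile consumes a ticket.
public class WaitSignalSketch {
    private final int threshold;  // N signals required to open
    private final int passCount;  // M tickets minted per conversion
    private int n = 0;            // accumulated signals
    private int m = 0;            // remaining pass tickets

    public WaitSignalSketch(int threshold, int passCount) {
        this.threshold = threshold;
        this.passCount = passCount;
    }

    public void notifySignal(int delta) {
        // Per the description, a delta of zero clears the counter.
        n = (delta == 0) ? 0 : n + delta;
    }

    // Try to release up to 'requested' FlowFiles; returns how many may pass.
    public int release(int requested) {
        if (m == 0 && n >= threshold) {
            n -= threshold;   // consume N signals...
            m = passCount;    // ...to mint M pass tickets
        }
        int released = Math.min(requested, m);
        m -= released;
        return released;
    }
}
```

Running the simulation's numbers (threshold 3, 100 tickets) reproduces the steps above: nothing passes at N=1, f1 to f50 pass once N reaches 3, and the remaining 20 tickets cap the final batch.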





[jira] [Resolved] (NIFI-3545) Let M FlowFiles pass through once N signals arrive

2017-03-06 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved NIFI-3545.
---
   Resolution: Done
Fix Version/s: 1.2.0






[GitHub] nifi pull request #1554: NIFI-3545: Release M FlowFiles once N signals arri...

2017-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1554




[jira] [Commented] (NIFI-3545) Let M FlowFiles pass through once N signals arrive

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897401#comment-15897401
 ] 

ASF subversion and git services commented on NIFI-3545:
---

Commit 000414e7eaa4c4f459779e73c331f3021f9a5049 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=000414e ]

NIFI-3545: Release M FlowFiles once N signals arrive

- Support multiple incoming FlowFiles to the Wait processor, up to the Wait
  Buffer Count
- Added Releasable FlowFile Count, which controls how many FlowFiles can
  be released when wait condition is met
- Added special meaning to a Notify delta of zero (0): it clears a signal
  counter back to zero

  This closes #1554

Signed-off-by: Aldrin Piri 







[GitHub] nifi issue #1554: NIFI-3545: Release M FlowFiles once N signals arrive

2017-03-06 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi/pull/1554
  
Was able to test this in a variety of situations and using your supplied 
templates.  Works as anticipated and thanks for all the tests.  Will get this 
merged in!  Thanks!




[jira] [Commented] (NIFI-3545) Let M FlowFiles pass through once N signals arrive

2017-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897379#comment-15897379
 ] 

ASF GitHub Bot commented on NIFI-3545:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi/pull/1554
  
Was able to test this in a variety of situations and using your supplied 
templates.  Works as anticipated and thanks for all the tests.  Will get this 
merged in!  Thanks!







[jira] [Updated] (NIFI-3204) delete hdfs processor throws an error stating transfer relationship not specified even when all relationships are present

2017-03-06 Thread François Prunier (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

François Prunier updated NIFI-3204:
---

Thanks! I'm away from the office at the moment; I'll have a look at your code 
by the end of the week.




> delete hdfs processor throws an error stating transfer relationship not 
> specified even when all relationships are present
> -
>
> Key: NIFI-3204
> URL: https://issues.apache.org/jira/browse/NIFI-3204
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Arpit Gupta
>Assignee: Joseph Witt
> Attachments: 
> 0001-NIFI-3204-applies-after-pr1561-aeab1feeb3c5f47858793.patch
>
>
> The following flow was set up:
> get file -> extract text -> delete hdfs
> A bunch of files were written, each having one line which was the path to 
> delete. Some of these paths were files, some were directories, and some were 
> patterns. Extract text would extract the line and assign it to an attribute, 
> which delete hdfs would use to populate the path to delete.
> However, the processor would run into an error whenever it tried to process 
> a path that was a pattern matching multiple paths.
> {code}
> 2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] failed to process session 
> due to org.apache.nifi.processor.exception
> .FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> section=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] tr
> ansfer relationship not specified: 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> sectio
> n=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] transfer 
> relationship not specified
> 2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> section=1], offset=6518, length=75],offse
> t=0,name=noyg3p7km8.txt,size=75] transfer relationship not specified
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:234)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  ~[nifi-api-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_92]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_92]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_92]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> 2016-12-14 11:32:43,335 WARN [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] Processor 
> Administratively Yielded for 1 sec due to processing failure
> {code}
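The FlowFileHandlingException above reflects the process-session contract: every FlowFile obtained during onTrigger must be transferred to a relationship or removed before the session commits. A toy model of that bookkeeping, illustrative only and not NiFi's actual API:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the session bookkeeping behind the error above (not NiFi code):
// any FlowFile taken from the session must be transferred or removed before
// commit, otherwise commit fails with "transfer relationship not specified".
public class SessionSketch {
    private final Set<String> outstanding = new HashSet<>();

    public String get(String flowFileId) {
        outstanding.add(flowFileId);   // FlowFile is now this session's responsibility
        return flowFileId;
    }

    public void transfer(String flowFileId) {
        outstanding.remove(flowFileId); // accounted for; safe to commit
    }

    public void commit() {
        if (!outstanding.isEmpty()) {
            throw new IllegalStateException(
                "transfer relationship not specified: " + outstanding);
        }
    }
}
```

The bug report is consistent with a code path (the pattern-matching branch) that leaves some matched FlowFiles neither transferred nor removed before the session commits.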


