Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Andy LoPresto
If the only communication protocol allowed between the two nodes is CQL3, then 
yes, you can place NiFi on Node 1 and have the HTTP micro service communicate 
to the same host. Then NiFi will communicate to Node 2 via CQL. Regardless of 
which node it is residing on, for this scenario you only need one deployment of 
NiFi.

Multiple instances of NiFi come into play in the following scenarios:

* clustering [1]
* site-to-site [2] (a NiFi-specific protocol for sending NiFi flowfiles in 
their native format between multiple instances of NiFi)
- see also, MiNiFi [3]

[1] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#clustering
[2] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site
[3] https://nifi.apache.org/minifi/index.html

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Dec 7, 2016, at 7:20 PM, kant kodali  wrote:
> 
> Hi Andy,
> 
> I am a bit confused again. In my example my deployment is fixed right
> 
> Node1 has my HTTP service and my Node 2 has Cassandra.
> The communication between Node1 and Node2 is through CQL3.
> A client will initiate the request to the my service on Node 1 via HTTP and 
> you can assume a client is a browser.
> 
> so if you are saying place Nifi on Node 2 and receive incoming HTTP request ? 
> it doesn't quite make sense to me because the only communication protocol 
> that is allowed between Node1 and Node 2 is CQL3  so I am not sure how one 
> can receive HTTP requests in node 2 ?
> 
> Thanks!
> 
> 
> 
> 
> 
> 
> On Wed, Dec 7, 2016 at 5:58 PM, Andy LoPresto  > wrote:
> Hi Kant,
> 
> Before we get too much farther, I think the Overview [1] and Getting Started 
> [2] guide and Joe Witt’s OSCON talk [3] might be valuable because they really 
> underline the problem(s) NiFi solves and the behaviors it uses to do that.
> 
> In the scenario you described, you only need one Nifi node (forgive the ASCII 
> diagram).
> 
> Node 1                     Node 2
> micro service ---HTTP--->  NiFi ListenHTTP processor
>                               |  (success connection)
>                               v
>                            NiFi PutCassandraQL processor ---CQL---> Cassandra instance
> 
> NiFi will receive the HTTP messages from your micro service and ingest each 
> incoming request, generating a flowfile for each request, placing the request 
> body into the content claim, and various header data into attributes. Then 
> you will create a connection between the ListenHTTP (or HandleHTTPRequest 
> processor if it’s more complex) and a PutCassandraQL processor. This 
> processor will write the contents of the flowfile to the Cassandra instance 
> over CQL3. Now, the PutCassandraQL processor expects the content of the 
> incoming flowfile to be a CQL command, so let’s use the example where the 
> command is a simple INSERT statement but the values depend on what is being 
> generated by the micro service. In that case, you could use a ReplaceText 
> processor between ListenHTTP and PutCassandraQL to format the query 
> correctly. If the HTTP content was JSON, for example, you could use an 
> EvaluateJsonPath processor to extract the values you care about from the 
> content to attributes, and ReplaceText will populate the templated expression 
> with the proper values using the NiFi Expression Language [4].
> 
> For more examples, I’d suggest you look at the NiFi Template Registry [5] or 
> watch some of the demo videos [6]. The User Guide [7] is also valuable. 
> Hopefully all of these resources and our continuing conversations will help 
> you understand and apply NiFi to solve your data flow challenges.
> 
> [1] https://nifi.apache.org/docs/nifi-docs/html/overview.html 
> 
> [2] https://nifi.apache.org/docs/nifi-docs/html/getting-started.html 
> 
> [3] https://youtu.be/sQCgtCoZyFQ 
> [4] 
> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html 
> 
> [5] 
> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates 
> 
> [6] https://nifi.apache.org/videos.html 
> [7] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html 
> 
> 
> Andy LoPresto
> alopre...@apache.org 
> alopresto.apa...@gmail.com 
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Dec 7, 2016, at 5:31 PM, kant kodali > > wrote:
>> 
>> Hi Koji,
>> 
>> 
>> Thanks again for your 

Deleting cluster nodes using UI - what permissions are required

2016-12-07 Thread Andre
All,

Asking here as I think it will benefit the user base:

What are the permissions required to delete cluster nodes using the UI or
REST API?

Cheers


Re: problem creating simple cluster

2016-12-07 Thread Matthew Clarke
In the nifi.properties file there is a property just above your http.port=8080
for the http.host= value. If that field is left blank, UI requests that are
replicated between nodes may end up using localhost. Make sure this
property is set to the node's hostname or IP address on every node. A
restart is needed before any changes to the nifi.properties file
take effect.
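
For example (the hostname below is a placeholder for each node's own address):

    nifi.web.http.host=node1.example.com
    nifi.web.http.port=8080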

Thanks,
Matt

On Dec 7, 2016 8:21 PM, "Koji Kawamura"  wrote:

> Sorry, I overlooked the nifi.properties settings you shared.
> Would you share what you can see on the NiFi "Cluster window", from
> right top Hamburger menu, especially the 'Node Address' column?
>
> Thanks,
> Koji
>
> On Thu, Dec 8, 2016 at 10:10 AM, Koji Kawamura 
> wrote:
> > Hi Brian,
> >
> > Are those three nodes running on the same host using different ports? Or
> > running on different hosts?
> > nifi.properties has nifi.cluster.node.address configuration, which is
> > used by a NiFi node to tell how other NiFi nodes should access the
> > node.
> >
> > If the property is blank, NiFi uses 'localhost' as node hostname.
> > I think that's why the node tried to replicate the request to
> 'localhost:8080'.
> >
> > If so, the property should be set with a hostname that is accessible
> > from other nodes.
> >
> > Or, if there's any firewall among nodes,
> > nifi.web.http.port
> > nifi.cluster.node.protocol.port
> > should be opened.
> >
> > I sometimes forget this on AWS with security group setting then get
> > timeout error.
> >
> > Thanks,
> > Koji
> >
> > On Thu, Dec 8, 2016 at 2:59 AM, Brian Jeltema 
> wrote:
> >> The cluster is not running securely, so I don’t believe that file is
> >> relevant. In the stack trace,
> >> the reference to /nifi-api/flow/current-user is misleading - I think any
> >> nifi-api call has problems.
> >>
> >> On Dec 7, 2016, at 12:37 PM, James Wing  wrote:
> >>
> >> Brian,
> >>
> >> Did you add entries for the node DNs in the conf/authorizers.xml file?
> >> Something like:
> >>
> >> 
> >> CN=node1.nifi, ...
> >> CN=node2.nifi, ...
> >> ...
> >>
> >> Thanks,
> >>
> >> James
> >>
> >> On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema 
> wrote:
> >>>
> >>> I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
> >>> 3-node unsecured configuration with each node running embedded
> >>> ZooKeeper. The instances all come up and the ZooKeeper quorum is
> >>> reached.
> >>>
> >>> If I bring up the UI for the node that is elected as the
> >>> cluster coordinator, it works as expected, and shows that 3 nodes
> >>> are participating in the cluster.
> >>>
> >>> However, if I attempt to display the UI on the non-coordinator nodes,
> >>> after a delay of about 10 seconds an error page is returned. The
> >>> logs contain a stream of exceptions similar to the following:
> >>>
> >>> 2016-12-07 11:08:17,914 WARN [Replicate Request Thread-2]
> >>> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request
> GET
> >>> /nifi-api/flow/current-user to localhost:8080 due to {}
> >>> com.sun.jersey.api.client.ClientHandlerException:
> >>> java.net.SocketTimeoutException: Read timed out
> >>> at
> >>> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(
> URLConnectionClientHandler.java:155)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at com.sun.jersey.api.client.Client.handle(Client.java:652)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at
> >>> com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(
> GZIPContentEncodingFilter.java:123)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at com.sun.jersey.api.client.WebResource.access$200(
> WebResource.java:74)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at com.sun.jersey.api.client.WebResource$Builder.get(
> WebResource.java:509)
> >>> ~[jersey-client-1.19.jar:1.19]
> >>> at
> >>> org.apache.nifi.cluster.coordination.http.replication.
> ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.
> java:578)
> >>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
> >>> at
> >>> org.apache.nifi.cluster.coordination.http.replication.
> ThreadPoolRequestReplicator$NodeHttpRequest.run(
> ThreadPoolRequestReplicator.java:770)
> >>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
> >>> at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
> >>> [na:1.8.0_101]
> >>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [na:1.8.0_101]
> >>> at
> >>> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> >>> [na:1.8.0_101]
> >>> at
> >>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> >>> [na:1.8.0_101]
> >>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> >>> Caused by: java.net.SocketTimeoutException: Read timed out
> >>> at java.net.SocketInputStream.socketRead0(Native 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Andy LoPresto
Hi Kant,

Before we get too much farther, I think the Overview [1] and Getting Started 
[2] guide and Joe Witt’s OSCON talk [3] might be valuable because they really 
underline the problem(s) NiFi solves and the behaviors it uses to do that.

In the scenario you described, you only need one Nifi node (forgive the ASCII 
diagram).

Node 1                     Node 2
micro service ---HTTP--->  NiFi ListenHTTP processor
                              |  (success connection)
                              v
                           NiFi PutCassandraQL processor ---CQL---> Cassandra instance

NiFi will receive the HTTP messages from your micro service and ingest each 
incoming request, generating a flowfile for each request, placing the request 
body into the content claim, and various header data into attributes. Then you 
will create a connection between the ListenHTTP (or HandleHTTPRequest processor 
if it’s more complex) and a PutCassandraQL processor. This processor will write 
the contents of the flowfile to the Cassandra instance over CQL3. Now, the 
PutCassandraQL processor expects the content of the incoming flowfile to be a 
CQL command, so let’s use the example where the command is a simple INSERT 
statement but the values depend on what is being generated by the micro 
service. In that case, you could use a ReplaceText processor between ListenHTTP 
and PutCassandraQL to format the query correctly. If the HTTP content was JSON, 
for example, you could use an EvaluateJsonPath processor to extract the values 
you care about from the content to attributes, and ReplaceText will populate 
the templated expression with the proper values using the NiFi Expression 
Language [4].
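
As a hedged, concrete sketch (the JSON fields, keyspace, and table below are invented
for illustration): suppose the micro service POSTs JSON such as {"sensor":"s1","temp":21.5}.
An EvaluateJsonPath processor with Destination set to flowfile-attribute and two
user-defined properties

    sensor = $.sensor
    temp   = $.temp

copies those values into attributes, and a ReplaceText processor with Replacement
Strategy set to Always Replace can then rewrite the flowfile content into a CQL
statement via the Expression Language, e.g.

    INSERT INTO demo.readings (sensor, temp) VALUES ('${sensor}', ${temp});

PutCassandraQL then executes whatever statement ends up in the flowfile content.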

For more examples, I’d suggest you look at the NiFi Template Registry [5] or 
watch some of the demo videos [6]. The User Guide [7] is also valuable. 
Hopefully all of these resources and our continuing conversations will help you 
understand and apply NiFi to solve your data flow challenges.

[1] https://nifi.apache.org/docs/nifi-docs/html/overview.html
[2] https://nifi.apache.org/docs/nifi-docs/html/getting-started.html
[3] https://youtu.be/sQCgtCoZyFQ
[4] https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html
[5] https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
[6] https://nifi.apache.org/videos.html
[7] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Dec 7, 2016, at 5:31 PM, kant kodali  wrote:
> 
> Hi Koji,
> 
> 
> Thanks again for your response.
> 
> "Should an application/micro service worry about how it talk with NiFi?
> My answer would be no. The service should focus and use the best
> protocol or technology for its purpose."
> 
> This sounds great. This is what we want! I also understand the other 
> capabilities of Nifi you mentioned!
> 
> I have the following question. Say I have a HTTP based micro service on node 
> 1 and a single process Cassandra on node 2 and say my micro service talks to 
> cassandra using CQL3 protocol. so how would I integrate NiFi ? Here is what I 
> understood so far from this thread.
> 
> 1. Run a Nifi JVM on node 1 and node 2.
> 2. Configure Nifi JVM on node 1 to use HTTP protocol for incoming messages 
> and for output messages configure Nifi JVM to use CQL3 protocol.
> 3. Configure Nifi JVM on node 2 to use CQL3 protocol messages for both 
> incoming and outgoing messages.
> 
> Is this correct so far? am I missing something?
> 
> Thanks!
> 
> 
> 
> 
> 
> 
> On Wed, Dec 7, 2016 at 4:47 PM, Koji Kawamura  > wrote:
> Hi Kant,
> 
> Glad to know that you liked the explanation, thanks :)
> 
> By reading discussion in this thread, I assume what you'd like to do
> with NiFi is: get messages from a system, do some routing/filtering
> on NiFi, then send them to another system using a custom protocol.
> If so, you can write two custom NiFi processors, typically named
> GetXXX and PutXXX (where XXX is the name of the protocol).
> 
> Based on this architecture, NiFi as a routing layer, if you need to
> store received data into another datastore to extend the use of data
> further, then NiFi will be really useful, because it already support
> variety of data source and protocol integrations. Also, if data schema
> changed at different rate between these systems, then NiFi can be a
> schema migration point.
> 
> Data ingested into NiFi called FlowFiles, which has opaque binary
> Content, and string key/value pairs called Attributes, very generic
> data format.
> Once the data is converted to a FlowFile, there's no big difference
> where the FlowFile came from via what protocol.
> So, If NiFi can act as a server for different protocols, then your
> NiFi data flow can support broader type of clients.
> 
> There's many existing processors, act as server for certain protocol,
> such as 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread kant kodali
Hi Koji,


Thanks again for your response.

"Should an application/micro service worry about how it talk with NiFi?
My answer would be no. The service should focus and use the best
protocol or technology for its purpose."

This sounds great. This is what we want! I also understand the other
capabilities of Nifi you mentioned!

I have the following question. Say I have an HTTP-based micro service on
node 1 and a single-process Cassandra on node 2, and say my micro service
talks to Cassandra using the CQL3 protocol. So how would I integrate NiFi?
Here is what I understood so far from this thread.

1. Run a Nifi JVM on node 1 and node 2.
2. Configure Nifi JVM on node 1 to use HTTP protocol for incoming messages
and for output messages configure Nifi JVM to use CQL3 protocol.
3. Configure Nifi JVM on node 2 to use CQL3 protocol messages for both
incoming and outgoing messages.

Is this correct so far? am I missing something?

Thanks!






On Wed, Dec 7, 2016 at 4:47 PM, Koji Kawamura 
wrote:

> Hi Kant,
>
> Glad to know that you liked the explanation, thanks :)
>
> By reading discussion in this thread, I assume what you'd like to do
> with NiFi is: get messages from a system, do some routing/filtering
> on NiFi, then send them to another system using a custom protocol.
> If so, you can write two custom NiFi processors, typically named
> GetXXX and PutXXX (where XXX is the name of the protocol).
>
> Based on this architecture, NiFi as a routing layer, if you need to
> store received data into another datastore to extend the use of data
> further, then NiFi will be really useful, because it already support
> variety of data source and protocol integrations. Also, if data schema
> changed at different rate between these systems, then NiFi can be a
> schema migration point.
>
> Data ingested into NiFi called FlowFiles, which has opaque binary
> Content, and string key/value pairs called Attributes, very generic
> data format.
> Once the data is converted to a FlowFile, there's no big difference
> where the FlowFile came from via what protocol.
> So, If NiFi can act as a server for different protocols, then your
> NiFi data flow can support broader type of clients.
>
> There's many existing processors, act as server for certain protocol,
> such as ListenTCP, ListenSyslog, ListenHTTP/HandleHTTPRequest ... etc.
> Once you have created PutXXX for your custom protocol, you can use
> NiFi to support TCP, HTTP, WebSocket ... protocols to receive data and
> integrate with the destination using the XXX protocol, e.g. ListenHTTP ->
> some data processing on NiFi -> PutXXX.
>
> Also, MiNiFi can be helpful to bring data integration to the edge.
>
> Should an application/micro service worry about how it talk with NiFi?
> My answer would be no. The service should focus and use the best
> protocol or technology for its purpose.
> But knowing what NiFi can help extending use cases around the service,
> would help you to focus more on the app/service itself.
>
> Thanks,
> Koji
>
> On Wed, Dec 7, 2016 at 6:52 PM, kant kodali  wrote:
> > I am also confused a little bit since I am new to Nifi. I wonder why Nifi
> > would act as a server? isn't Nifi a routing layer between systems?
> because
> > this brings in another question about Nifi in general.
> >
> > When I write my applications/microservices do I need to worry about how
> my
> > service would talk to Nifi or do I have the freedom of just focusing on
> my
> > application and using whatever protocol I want and In the end just
> plugin to
> > Nifi which would take care? other words is Nifi a tight integration with
> > applications such that I always have to import a Nifi Library within my
> > application/microservice ? other words do I need to worry about Nifi at
> > programming/development time of an Application/Microservice or at
> deployment
> > time?
> >
> > Sorry if these are naive questions. But answers to those will help
> greatly
> > and prevent me from asking more questions!
> >
> > Thanks much!
> > kant
> >
> >
> >
> >
> > On Wed, Dec 7, 2016 at 1:23 AM, kant kodali  wrote:
> >>
> >> Hi Koji,
> >>
> >> That is an awesome explanation! I expected processors for HTTP2 at very
> >> least since it is widely used ( the entire GRPC stack runs on that). I
> am
> >> not sure how easy or hard it is to build one?
> >>
> >> Thanks!
> >>
> >> On Wed, Dec 7, 2016 at 1:08 AM, Koji Kawamura 
> >> wrote:
> >>>
> >>> Hi Kant,
> >>>
> >>> Although I'm not aware of existing processor for HTTP2 or NSQ, NiFi
> >>> has a set of processors for WebSocket since 1.1.0.
> >>> It enables NiFi to act as a WebSocket client to communicate with a
> >>> remote WebSocket server, or makes NiFi a WebSocket server so that
> >>> remote clients access to it via WebSocket protocol.
> >>>
> >>> I've written a blog post about how to use it, I hope it will be useful
> >>> for your use case:
> >>> 

Re: problem creating simple cluster

2016-12-07 Thread Koji Kawamura
Sorry, I overlooked the nifi.properties settings you shared.
Would you share what you can see in the NiFi "Cluster" window (opened from the
hamburger menu at the top right), especially the 'Node Address' column?

Thanks,
Koji

On Thu, Dec 8, 2016 at 10:10 AM, Koji Kawamura  wrote:
> Hi Brian,
>
> Are those three nodes running on the same host using different ports? Or
> running on different hosts?
> nifi.properties has nifi.cluster.node.address configuration, which is
> used by a NiFi node to tell how other NiFi nodes should access the
> node.
>
> If the property is blank, NiFi uses 'localhost' as node hostname.
> I think that's why the node tried to replicate the request to 
> 'localhost:8080'.
>
> If so, the property should be set with a hostname that is accessible
> from other nodes.
>
> Or, if there's any firewall among nodes,
> nifi.web.http.port
> nifi.cluster.node.protocol.port
> should be opened.
>
> I sometimes forget this on AWS with security group setting then get
> timeout error.
>
> Thanks,
> Koji
>
> On Thu, Dec 8, 2016 at 2:59 AM, Brian Jeltema  wrote:
>> The cluster is not running securely, so I don’t believe that file is
>> relevant. In the stack trace,
>> the reference to /nifi-api/flow/current-user is misleading - I think any
>> nifi-api call has problems.
>>
>> On Dec 7, 2016, at 12:37 PM, James Wing  wrote:
>>
>> Brian,
>>
>> Did you add entries for the node DNs in the conf/authorizers.xml file?
>> Something like:
>>
>> 
>> CN=node1.nifi, ...
>> CN=node2.nifi, ...
>> ...
>>
>> Thanks,
>>
>> James
>>
>> On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema  wrote:
>>>
>>> I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
>>> 3-node unsecured configuration with each node running embedded
>>> ZooKeeper. The instances all come up and the ZooKeeper quorum is
>>> reached.
>>>
>>> If I bring up the UI for the node that is elected as the
>>> cluster coordinator, it works as expected, and shows that 3 nodes
>>> are participating in the cluster.
>>>
>>> However, if I attempt to display the UI on the non-coordinator nodes,
>>> after a delay of about 10 seconds an error page is returned. The
>>> logs contain a stream of exceptions similar to the following:
>>>
>>> 2016-12-07 11:08:17,914 WARN [Replicate Request Thread-2]
>>> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET
>>> /nifi-api/flow/current-user to localhost:8080 due to {}
>>> com.sun.jersey.api.client.ClientHandlerException:
>>> java.net.SocketTimeoutException: Read timed out
>>> at
>>> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at com.sun.jersey.api.client.Client.handle(Client.java:652)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at
>>> com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:509)
>>> ~[jersey-client-1.19.jar:1.19]
>>> at
>>> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:578)
>>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
>>> at
>>> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:770)
>>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> [na:1.8.0_101]
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> [na:1.8.0_101]
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> [na:1.8.0_101]
>>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>> at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_101]
>>> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>>> ~[na:1.8.0_101]
>>> at java.net.SocketInputStream.read(SocketInputStream.java:170)
>>> ~[na:1.8.0_101]
>>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>>> ~[na:1.8.0_101]
>>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>>> ~[na:1.8.0_101]
>>> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>>> ~[na:1.8.0_101]
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>>> ~[na:1.8.0_101]
>>> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>>> ~[na:1.8.0_101]
>>> at 

Re: problem creating simple cluster

2016-12-07 Thread Koji Kawamura
Hi Brian,

Are those three nodes running on the same host using different ports? Or
running on different hosts?
nifi.properties has nifi.cluster.node.address configuration, which is
used by a NiFi node to tell how other NiFi nodes should access the
node.

If the property is blank, NiFi uses 'localhost' as node hostname.
I think that's why the node tried to replicate the request to 'localhost:8080'.

If so, the property should be set with a hostname that is accessible
from other nodes.

Or, if there's any firewall among nodes,
nifi.web.http.port
nifi.cluster.node.protocol.port
should be opened.
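
For example, a minimal sketch of the relevant nifi.properties entries (the hostname
and ports below are placeholders; use each node's own externally reachable address):

    nifi.web.http.host=nifi-node1.example.com
    nifi.web.http.port=8080
    nifi.cluster.node.address=nifi-node1.example.com
    nifi.cluster.node.protocol.port=11443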

I sometimes forget this on AWS with security group settings and then get a
timeout error.

Thanks,
Koji

On Thu, Dec 8, 2016 at 2:59 AM, Brian Jeltema  wrote:
> The cluster is not running securely, so I don’t believe that file is
> relevant. In the stack trace,
> the reference to /nifi-api/flow/current-user is misleading - I think any
> nifi-api call has problems.
>
> On Dec 7, 2016, at 12:37 PM, James Wing  wrote:
>
> Brian,
>
> Did you add entries for the node DNs in the conf/authorizers.xml file?
> Something like:
>
> 
> CN=node1.nifi, ...
> CN=node2.nifi, ...
> ...
>
> Thanks,
>
> James
>
> On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema  wrote:
>>
>> I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
>> 3-node unsecured configuration with each node running embedded
>> ZooKeeper. The instances all come up and the ZooKeeper quorum is
>> reached.
>>
>> If I bring up the UI for the node that is elected as the
>> cluster coordinator, it works as expected, and shows that 3 nodes
>> are participating in the cluster.
>>
>> However, if I attempt to display the UI on the non-coordinator nodes,
>> after a delay of about 10 seconds an error page is returned. The
>> logs contain a stream of exceptions similar to the following:
>>
>> 2016-12-07 11:08:17,914 WARN [Replicate Request Thread-2]
>> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET
>> /nifi-api/flow/current-user to localhost:8080 due to {}
>> com.sun.jersey.api.client.ClientHandlerException:
>> java.net.SocketTimeoutException: Read timed out
>> at
>> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
>> ~[jersey-client-1.19.jar:1.19]
>> at com.sun.jersey.api.client.Client.handle(Client.java:652)
>> ~[jersey-client-1.19.jar:1.19]
>> at
>> com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
>> ~[jersey-client-1.19.jar:1.19]
>> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
>> ~[jersey-client-1.19.jar:1.19]
>> at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>> ~[jersey-client-1.19.jar:1.19]
>> at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:509)
>> ~[jersey-client-1.19.jar:1.19]
>> at
>> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:578)
>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
>> at
>> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:770)
>> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> [na:1.8.0_101]
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> [na:1.8.0_101]
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> [na:1.8.0_101]
>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>> Caused by: java.net.SocketTimeoutException: Read timed out
>> at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_101]
>> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>> ~[na:1.8.0_101]
>> at java.net.SocketInputStream.read(SocketInputStream.java:170)
>> ~[na:1.8.0_101]
>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>> ~[na:1.8.0_101]
>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>> ~[na:1.8.0_101]
>> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>> ~[na:1.8.0_101]
>> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>> ~[na:1.8.0_101]
>> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>> ~[na:1.8.0_101]
>> at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>> ~[na:1.8.0_101]
>> at
>> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>> ~[na:1.8.0_101]
>> at
>> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>> ~[na:1.8.0_101]
>> at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>> ~[na:1.8.0_101]
>> at
>> 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Koji Kawamura
Hi Kant,

Glad to know that you liked the explanation, thanks :)

By reading the discussion in this thread, I assume what you'd like to do
with NiFi is: get messages from a system, do some routing/filtering
on NiFi, then send them to another system using a custom protocol.
If so, you can write two custom NiFi processors, typically named
GetXXX and PutXXX (where XXX is the name of the protocol).

Based on this architecture, with NiFi as a routing layer, if you need to
store received data into another datastore to extend the use of the data
further, then NiFi will be really useful, because it already supports a
variety of data source and protocol integrations. Also, if data schemas
change at different rates between these systems, then NiFi can be a
schema migration point.

Data ingested into NiFi is called a FlowFile, which has opaque binary
Content and string key/value pairs called Attributes, a very generic
data format.
Once the data is converted to a FlowFile, there's no big difference
where the FlowFile came from or via what protocol.
So, if NiFi can act as a server for different protocols, then your
NiFi data flow can support a broader range of clients.

There are many existing processors that act as a server for a certain protocol,
such as ListenTCP, ListenSyslog, ListenHTTP/HandleHTTPRequest ... etc.
Once you have created PutXXX for your custom protocol, you can use
NiFi to support TCP, HTTP, WebSocket ... protocols to receive data and
integrate with the destination using the XXX protocol, e.g. ListenHTTP ->
some data processing on NiFi -> PutXXX.
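
To make that concrete, here is a minimal, hedged sketch of what such a PutXXX
processor could look like (the class name and the sendOverProtocol stub are
placeholders invented for illustration; only the NiFi processor API itself is real):

    import java.io.ByteArrayOutputStream;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    // Hypothetical skeleton of a Put processor for a custom protocol "XXX".
    public class PutXXX extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success").description("FlowFiles sent over the XXX protocol").build();
        static final Relationship REL_FAILURE = new Relationship.Builder()
                .name("failure").description("FlowFiles that could not be sent").build();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.unmodifiableSet(new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE)));
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
            FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }
            // Copy the FlowFile content into a byte[] and hand it to the protocol client.
            final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            session.exportTo(flowFile, bytes);
            try {
                sendOverProtocol(bytes.toByteArray());
                session.transfer(flowFile, REL_SUCCESS);
            } catch (Exception e) {
                getLogger().error("Failed to send FlowFile over the XXX protocol", e);
                session.transfer(flowFile, REL_FAILURE);
            }
        }

        // Placeholder for the real protocol client; in practice this would wrap the
        // protocol's own (NiFi-independent) client library.
        private void sendOverProtocol(byte[] payload) throws Exception {
            getLogger().info("Would send {} bytes over the XXX protocol", new Object[]{payload.length});
        }
    }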

Also, MiNiFi can be helpful to bring data integration to the edge.

Should an application/micro service worry about how it talks with NiFi?
My answer would be no. The service should focus on and use the best
protocol or technology for its purpose.
But knowing how NiFi can help extend use cases around the service
will help you focus more on the app/service itself.

Thanks,
Koji

On Wed, Dec 7, 2016 at 6:52 PM, kant kodali  wrote:
> I am also confused a little bit since I am new to Nifi. I wonder why Nifi
> would act as a server? isn't Nifi a routing layer between systems? because
> this brings in another question about Nifi in general.
>
> When I write my applications/microservices do I need to worry about how my
> service would talk to Nifi or do I have the freedom of just focusing on my
> application and using whatever protocol I want and In the end just plugin to
> Nifi which would take care? other words is Nifi a tight integration with
> applications such that I always have to import a Nifi Library within my
> application/microservice ? other words do I need to worry about Nifi at
> programming/development time of an Application/Microservice or at deployment
> time?
>
> Sorry if these are naive questions. But answers to those will help greatly
> and prevent me from asking more questions!
>
> Thanks much!
> kant
>
>
>
>
> On Wed, Dec 7, 2016 at 1:23 AM, kant kodali  wrote:
>>
>> Hi Koji,
>>
>> That is an awesome explanation! I expected processors for HTTP2 at very
>> least since it is widely used ( the entire GRPC stack runs on that). I am
>> not sure how easy or hard it is to build one?
>>
>> Thanks!
>>
>> On Wed, Dec 7, 2016 at 1:08 AM, Koji Kawamura 
>> wrote:
>>>
>>> Hi Kant,
>>>
>>> Although I'm not aware of existing processor for HTTP2 or NSQ, NiFi
>>> has a set of processors for WebSocket since 1.1.0.
>>> It enables NiFi to act as a WebSocket client to communicate with a
>>> remote WebSocket server, or makes NiFi a WebSocket server so that
>>> remote clients access to it via WebSocket protocol.
>>>
>>> I've written a blog post about how to use it, I hope it will be useful
>>> for your use case:
>>> http://ijokarumawak.github.io/nifi/2016/11/04/nifi-websocket/
>>>
>>> Thanks,
>>> Koji
>>>
>>>
>>>
>>> On Wed, Dec 7, 2016 at 3:23 PM, kant kodali  wrote:
>>> > Thanks a ton guys! Didn't expect Nifi community to be so good! (Another
>>> > convincing reason!)
>>> >
>>> > Coming back to the problem, We use NSQ a lot (although not my favorite)
>>> > and
>>> > want to be able integrate Nifi with NSQ to other systems such as Kafka,
>>> > Spark, Cassandra, ElasticSearch, some micro services that uses HTTP2
>>> > and
>>> > some micro service that uses Websockets
>>> >
>>> > (which brings in other question if Nifi has HTTP2 and Websocket
>>> > processors?)
>>> >
>>> > Also below is NSQ protocol spec. They have Java client library as well.
>>> >
>>> > http://nsq.io/clients/tcp_protocol_spec.html
>>> >
>>> >
>>> > On Tue, Dec 6, 2016 at 5:14 PM, Oleg Zhurakousky
>>> >  wrote:
>>> >>
>>> >> Hi Kant
>>> >>
>>> >> What you’re trying to accomplish is definitely possible, however more
>>> >> information may be needed from you.
>>> >> For example, the way I understand your statement about “integration
>>> >> with
>>> >> many systems” is something like JMS, Kafka, TCP, FTP etc…. If that is

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread kant kodali
That's what I was looking for. That sounds awesome because it is decoupled
from the application itself!

Finally, is there any example or pointer you can point me to that
shows how NiFi handles different protocols (for now you can
assume all standard protocols)? Say the simplest case:

an HTTP-based application talks to Cassandra or some other database via CQL
V4, or psql in the case of Postgres. How can I add NiFi?

Or any other simple use case would be great! I searched to see if there is
any book I can read on NiFi but I couldn't find one.

On Wed, Dec 7, 2016 at 12:19 PM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:

> Ohh no, not at all. Just tell it the host/port combination and off you go,
> basically the second part of what you said
>
> Oleg
>
> On Dec 7, 2016, at 3:08 PM, kant kodali  wrote:
>
> so do I need to import PostHTTP processor, GetHTTP processor Libraries in
> my application code? or can I just build a HTTP Server and tell Nifi JVM
> that "hey here is my Application (it runs on specific IP and port) and it
> uses HTTP".
>
> On Wed, Dec 7, 2016 at 12:00 PM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
>
>> Kant
>>
>> So, yes NiFi already provides basic integration with HTTP by virtue of
>> PostHTTP processor (post a content of the FlowFile to HTTP endpoint) and
>> the same in reverse with GetHTTP. So for your case, your integration will be
>> as simple as posting to or getting from your micro service.
>>
>> And what makes it even more interesting is that NiFi in the microservices
>> architecture can play an even more significant role as coordinator,
>> orchestrator and mediator of heterogeneous environment of many micro
>> services.
>>
>> Hope that answers it.
>> Cheers
>> Oleg
>>
>>
>> On Dec 7, 2016, at 2:48 PM, kant kodali  wrote:
>>
>> Sorry I should have been more clear. My question is even more simpler and
>> naive. Say I am writing a HTTP based microservice (Now I assume Nifi has
>> integration with HTTP since it is a well known protocol ). Now, how would I
>> integrate Nifi with my HTTP based server?
>>
>> On Wed, Dec 7, 2016 at 4:55 AM, Oleg Zhurakousky <
>> ozhurakou...@hortonworks.com> wrote:
>>
>>> Kant
>>>
>>> There are couple of questions here so, let me try one at the time.
>>> 1. “why Nifi would act as a server?”. Well, NiFi is a runtime
>>> environment where things are *happening*. To be more exact; NiFi is a
>>> runtime environment where things are *triggered*. The big distinction
>>> here is trigger vs happening. In other words you may choose (as most
>>> processors do) to have NiFi act as an execution container and run your
>>> code, or you may chose for the NiFi to be a triggering container that
>>> triggers to run your code elsewhere. Example of the later one is the
>>> SpringContextProcessor which while still runs in the same JVM delegates
>>> execution to Spring application that runs in its own container (which could
>>> as well be a separate JVM or nah other container - ala micro services). So
>>> in this case NiFi still manages the orchestration, mediation, security etc.
>>>
>>> 2. “...do I need to worry about how my service would talk to Nifi or do
>>> I have the freedom of just focusing on my application and using whatever
>>> protocol I want and In the end just plugin to Nifi which would take care?”
>>> That is a loaded question and I must admit not fully understood, but i’ll
>>> give it a try. When integrating with NiFi you use one of the integration
>>> points such as Processor and/or ControllerService. Those are NiFi known
>>> strategies (interfaces) and so your custom Processor would need to
>>> implement such strategy and obviously be compliant with its contract.
>>> However, the other part of your question is about implementing something
>>> “independent” and just plug-in and if t’s possible. My answer is still YES
>>> as long as you design it that way. As an example of such design you may
>>> want to look at AMQP support where there are a pair of processors
>>> (PublishAMQP/ConsumeAMQP), but when you look at the code of these
>>> processors you’ll see that neither has any AMQP dependencies. Instead they
>>> depend on another case (let’s call it Worker) and that class is completely
>>> independent of NiFi. So in summary your protocol-specific Processor is
>>> independent of the protocol and your protocol-specific Worker is
>>> independent of NiFi and the two delegate between one another through common
>>> to both object (in this case bye[]). The Spring processor mentioned above
>>> implements the same pattern.
>>>
>>> And one more point about Microservices. I am willing to go as far as
>>> saying that NiFi is a variation of Microservices container. I am basing it
>>> n the fact that NiFi components (i.e., processors) implement a fundamental
>>> micro services pattern  - certain independence from it’s runtime and
>>> persistence of its results - which allows each and every component 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Oleg Zhurakousky
Ohh no, not at all. Just tell it the host/port combination and off you go, 
basically the second part of what you said

Oleg

On Dec 7, 2016, at 3:08 PM, kant kodali 
> wrote:

so do I need to import PostHTTP processor, GetHTTP processor Libraries in my 
application code? or can I just build a HTTP Server and tell Nifi JVM that "hey 
here is my Application (it runs on specific IP and port) and it uses HTTP".

On Wed, Dec 7, 2016 at 12:00 PM, Oleg Zhurakousky 
> wrote:
Kant

So, yes NiFi already provides basic integration with HTTP by virtue of PostHTTP 
processor (post a content of the FlowFile to HTTP endpoint) and the same in 
reverse with GetHTTP. So for your case, your integration will be as simple as 
posting to or getting from your micro service.

And what makes it even more interesting is that NiFi in the microservices 
architecture can play an even more significant role as coordinator, 
orchestrator and mediator of heterogeneous environment of many micro services.

Hope that answers it.
Cheers
Oleg


On Dec 7, 2016, at 2:48 PM, kant kodali 
> wrote:

Sorry I should have been more clear. My question is even more simpler and 
naive. Say I am writing a HTTP based microservice (Now I assume Nifi has 
integration with HTTP since it is a well known protocol ). Now, how would I 
integrate Nifi with my HTTP based server?

On Wed, Dec 7, 2016 at 4:55 AM, Oleg Zhurakousky 
> wrote:
Kant

There are couple of questions here so, let me try one at the time.
1. “why Nifi would act as a server?”. Well, NiFi is a runtime environment where 
things are happening. To be more exact; NiFi is a runtime environment where 
things are triggered. The big distinction here is trigger vs happening. In 
other words you may choose (as most processors do) to have NiFi act as an 
execution container and run your code, or you may choose for NiFi to be a 
triggering container that triggers your code to run elsewhere. An example of the 
latter is the SpringContextProcessor, which, while it still runs in the same JVM, 
delegates execution to a Spring application that runs in its own container (which 
could as well be a separate JVM or any other container - a la micro services). 
So in this case NiFi still manages the orchestration, mediation, security etc.

2. “...do I need to worry about how my service would talk to Nifi or do I have 
the freedom of just focusing on my application and using whatever protocol I 
want and In the end just plugin to Nifi which would take care?” That is a 
loaded question and I must admit not fully understood, but i’ll give it a try. 
When integrating with NiFi you use one of the integration points such as 
Processor and/or ControllerService. Those are NiFi known strategies 
(interfaces) and so your custom Processor would need to implement such strategy 
and obviously be compliant with its contract. However, the other part of your 
question is about implementing something “independent” that just plugs in, and whether 
it’s possible. My answer is still YES as long as you design it that way. As an 
example of such design you may want to look at AMQP support where there are a 
pair of processors (PublishAMQP/ConsumeAMQP), but when you look at the code of 
these processors you’ll see that neither has any AMQP dependencies. Instead 
they depend on another class (let’s call it Worker), and that class is completely 
independent of NiFi. So in summary your protocol-specific Processor is 
independent of the protocol and your protocol-specific Worker is independent of 
NiFi, and the two delegate between one another through an object common to both (in 
this case a byte[]). The Spring processor mentioned above implements the same 
pattern.
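
As a hedged illustration of that split (all names below are invented; only the general
shape follows the PublishAMQP/ConsumeAMQP separation described above), the Worker
knows the protocol but has no NiFi dependency at all:

    // Plain Java "Worker" for a hypothetical custom protocol: no NiFi imports here.
    public class MyProtocolWorker {

        private final String host;
        private final int port;

        public MyProtocolWorker(String host, int port) {
            this.host = host;
            this.port = port;
        }

        // Publish an opaque payload; only a byte[] crosses the boundary between
        // the NiFi processor and this class.
        public void publish(byte[] payload) {
            // The real protocol I/O (sockets, client library, framing, ...) would go here.
            System.out.printf("publish %d bytes to %s:%d%n", payload.length, host, port);
        }
    }

    // Inside the NiFi processor's onTrigger(), the only coupling is the byte[]:
    //
    //     ByteArrayOutputStream out = new ByteArrayOutputStream();
    //     session.exportTo(flowFile, out);
    //     worker.publish(out.toByteArray());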

And one more point about Microservices. I am willing to go as far as saying 
that NiFi is a variation of a Microservices container. I am basing it on the fact 
that NiFi components (i.e., processors) implement a fundamental micro services 
pattern - a certain independence from its runtime and persistence of its 
results - which allows each and every component to be managed 
(start/stopped/reconfigured/changed) independently of upstream/downstream 
components.

Ok, that is a load to process, so I’ll stop ;)

Look forward to more questions

Cheers
Oleg


On Dec 7, 2016, at 4:52 AM, kant kodali 
> wrote:

I am also confused a little bit since I am new to Nifi. I wonder why Nifi would 
act as a server? isn't Nifi a routing layer between systems? because this 
brings in another question about Nifi in general.

When I write my applications/microservices do I need to worry about how my 
service would talk to Nifi or do I have the freedom of just focusing on my 
application and using whatever protocol I want and In the end just plugin to 
Nifi which would 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread kant kodali
so do I need to import PostHTTP processor, GetHTTP processor Libraries in
my application code? or can I just build a HTTP Server and tell Nifi JVM
that "hey here is my Application (it runs on specific IP and port) and it
uses HTTP".

On Wed, Dec 7, 2016 at 12:00 PM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:

> Kant
>
> So, yes NiFi already provides basic integration with HTTP by virtue of
> PostHTTP processor (post a content of the FlowFile to HTTP endpoint) and
> the same in reverse with GetHTTP. So for your case, your integration will be
> as simple as posting to or getting from your micro service.
>
> And what makes it even more interesting is that NiFi in the microservices
> architecture can play an even more significant role as coordinator,
> orchestrator and mediator of heterogeneous environment of many micro
> services.
>
> Hope that answers it.
> Cheers
> Oleg
>
>
> On Dec 7, 2016, at 2:48 PM, kant kodali  wrote:
>
> Sorry I should have been more clear. My question is even more simpler and
> naive. Say I am writing a HTTP based microservice (Now I assume Nifi has
> integration with HTTP since it is a well known protocol ). Now, how would I
> integrate Nifi with my HTTP based server?
>
> On Wed, Dec 7, 2016 at 4:55 AM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
>
>> Kant
>>
>> There are couple of questions here so, let me try one at the time.
>> 1. “why Nifi would act as a server?”. Well, NiFi is a runtime environment
>> where things are *happening*. To be more exact; NiFi is a runtime
>> environment where things are *triggered*. The big distinction here is
>> trigger vs happening. In other words you may choose (as most processors do)
>> to have NiFi act as an execution container and run your code, or you may
>> choose for NiFi to be a triggering container that triggers to run your
>> code elsewhere. An example of the latter is the SpringContextProcessor,
>> which, while it still runs in the same JVM, delegates execution to a Spring
>> application that runs in its own container (which could as well be a
>> separate JVM or any other container - a la micro services). So in this case
>> NiFi still manages the orchestration, mediation, security etc.
>>
>> 2. “...do I need to worry about how my service would talk to Nifi or do I
>> have the freedom of just focusing on my application and using whatever
>> protocol I want and In the end just plugin to Nifi which would take care?”
>> That is a loaded question and I must admit not fully understood, but i’ll
>> give it a try. When integrating with NiFi you use one of the integration
>> points such as Processor and/or ControllerService. Those are NiFi known
>> strategies (interfaces) and so your custom Processor would need to
>> implement such strategy and obviously be compliant with its contract.
>> However, the other part of your question is about implementing something
>> “independent” that just plugs in, and whether it’s possible. My answer is still YES
>> as long as you design it that way. As an example of such design you may
>> want to look at AMQP support where there are a pair of processors
>> (PublishAMQP/ConsumeAMQP), but when you look at the code of these
>> processors you’ll see that neither has any AMQP dependencies. Instead they
>> depend on another class (let’s call it Worker), and that class is completely
>> independent of NiFi. So in summary your protocol-specific Processor is
>> independent of the protocol and your protocol-specific Worker is
>> independent of NiFi, and the two delegate between one another through an object
>> common to both (in this case a byte[]). The Spring processor mentioned above
>> implements the same pattern.
>>
>> And one more point about Microservices. I am willing to go as far as
>> saying that NiFi is a variation of Microservices container. I am basing it
>> on the fact that NiFi components (i.e., processors) implement a fundamental
>> micro services pattern - a certain independence from its runtime and
>> persistence of its results - which allows each and every component to be
>> managed (start/stopped/reconfigured/changed) independently of
>> upstream/downstream components.
>>
>> Ok, that is a load to process, so I’ll stop ;)
>>
>> Look forward to more questions
>>
>> Cheers
>> Oleg
>>
>>
>> On Dec 7, 2016, at 4:52 AM, kant kodali  wrote:
>>
>> I am also confused a little bit since I am new to Nifi. I wonder why Nifi
>> would act as a server? isn't Nifi a routing layer between systems? because
>> this brings in another question about Nifi in general.
>>
>> When I write my applications/microservices do I need to worry about how
>> my service would talk to Nifi or do I have the freedom of just focusing on
>> my application and using whatever protocol I want and In the end just
>> plugin to Nifi which would take care? other words is Nifi a tight
>> integration with applications such that I always have to import a Nifi
>> Library within my application/microservice ? other words 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Oleg Zhurakousky
Kant

So, yes, NiFi already provides basic integration with HTTP by virtue of the PostHTTP 
processor (post the content of a FlowFile to an HTTP endpoint) and the same in 
reverse with GetHTTP. So for your case, your integration will be as simple as 
posting to or getting from your micro service.
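
For instance (the URLs below are placeholders, not from your setup), a GetHTTP 
processor that polls the micro service only needs something like:

    URL      = http://microservice-host:8080/api/events
    Filename = events-${now():format('yyyyMMddHHmmss')}.json

and a PostHTTP processor that pushes FlowFile content to the service just needs its 
URL property set, e.g. http://microservice-host:8080/ingest.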

And what makes it even more interesting is that NiFi in a microservices 
architecture can play an even more significant role as coordinator, 
orchestrator, and mediator of a heterogeneous environment of many micro services.

Hope that answers it.
Cheers
Oleg


On Dec 7, 2016, at 2:48 PM, kant kodali 
> wrote:

Sorry I should have been more clear. My question is even more simpler and 
naive. Say I am writing a HTTP based microservice (Now I assume Nifi has 
integration with HTTP since it is a well known protocol ). Now, how would I 
integrate Nifi with my HTTP based server?

On Wed, Dec 7, 2016 at 4:55 AM, Oleg Zhurakousky 
> wrote:
Kant

There are couple of questions here so, let me try one at the time.
1. “why Nifi would act as a server?”. Well, NiFi is a runtime environment where 
things are happening. To be more exact; NiFi is a runtime environment where 
things are triggered. The big distinction here is trigger vs happening. In 
other words you may choose (as most processors do) to have NiFi act as an 
execution container and run your code, or you may choose for NiFi to be a 
triggering container that triggers your code to run elsewhere. An example of the 
latter is the SpringContextProcessor, which, while it still runs in the same JVM, 
delegates execution to a Spring application that runs in its own container (which 
could as well be a separate JVM or any other container - a la micro services). 
So in this case NiFi still manages the orchestration, mediation, security etc.

2. “...do I need to worry about how my service would talk to Nifi or do I have 
the freedom of just focusing on my application and using whatever protocol I 
want and In the end just plugin to Nifi which would take care?” That is a 
loaded question and I must admit not fully understood, but i’ll give it a try. 
When integrating with NiFi you use one of the integration points such as 
Processor and/or ControllerService. Those are NiFi known strategies 
(interfaces) and so your custom Processor would need to implement such strategy 
and obviously be compliant with its contract. However, the other part of your 
question is about implementing something “independent” that just plugs in, and whether 
it’s possible. My answer is still YES as long as you design it that way. As an 
example of such design you may want to look at AMQP support where there are a 
pair of processors (PublishAMQP/ConsumeAMQP), but when you look at the code of 
these processors you’ll see that neither has any AMQP dependencies. Instead 
they depend on another class (let’s call it Worker), and that class is completely 
independent of NiFi. So in summary your protocol-specific Processor is 
independent of the protocol and your protocol-specific Worker is independent of 
NiFi, and the two delegate between one another through an object common to both (in 
this case a byte[]). The Spring processor mentioned above implements the same 
pattern.

And one more point about Microservices. I am willing to go as far as saying 
that NiFi is a variation of a Microservices container. I am basing it on the fact 
that NiFi components (i.e., processors) implement a fundamental micro services 
pattern - a certain independence from its runtime and persistence of its 
results - which allows each and every component to be managed 
(start/stopped/reconfigured/changed) independently of upstream/downstream 
components.

Ok, that is a load to process, so I’ll stop ;)

Look forward to more questions

Cheers
Oleg


On Dec 7, 2016, at 4:52 AM, kant kodali 
> wrote:

I am also confused a little bit since I am new to Nifi. I wonder why Nifi would 
act as a server? isn't Nifi a routing layer between systems? because this 
brings in another question about Nifi in general.

When I write my applications/microservices do I need to worry about how my 
service would talk to Nifi or do I have the freedom of just focusing on my 
application and using whatever protocol I want and In the end just plugin to 
Nifi which would take care? other words is Nifi a tight integration with 
applications such that I always have to import a Nifi Library within my 
application/microservice ? other words do I need to worry about Nifi at 
programming/development time of an Application/Microservice or at deployment 
time?

Sorry if these are naive questions. But answers to those will help greatly and 
prevent me from asking more questions!

Thanks much!
kant




On Wed, Dec 7, 2016 at 1:23 AM, kant kodali 
> wrote:
Hi Koji,

That is an awesome explanation! I expected 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread kant kodali
Sorry, I should have been clearer. My question is even simpler and more
naive. Say I am writing an HTTP-based microservice (now I assume NiFi has
integration with HTTP since it is a well-known protocol). Now, how would I
integrate NiFi with my HTTP-based server?

On Wed, Dec 7, 2016 at 4:55 AM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:

> Kant
>
> There are couple of questions here so, let me try one at the time.
> 1. “why Nifi would act as a server?”. Well, NiFi is a runtime environment
> where things are *happening*. To be more exact; NiFi is a runtime
> environment where things are *triggered*. The big distinction here is
> trigger vs happening. In other words you may choose (as most processors do)
> to have NiFi act as an execution container and run your code, or you may
> choose for NiFi to be a triggering container that triggers to run your
> code elsewhere. An example of the latter is the SpringContextProcessor,
> which, while it still runs in the same JVM, delegates execution to a Spring
> application that runs in its own container (which could as well be a
> separate JVM or any other container - a la micro services). So in this case
> NiFi still manages the orchestration, mediation, security etc.
>
> 2. “...do I need to worry about how my service would talk to Nifi or do I
> have the freedom of just focusing on my application and using whatever
> protocol I want and In the end just plugin to Nifi which would take care?”
> That is a loaded question and I must admit not fully understood, but i’ll
> give it a try. When integrating with NiFi you use one of the integration
> points such as Processor and/or ControllerService. Those are NiFi known
> strategies (interfaces) and so your custom Processor would need to
> implement such strategy and obviously be compliant with its contract.
> However, the other part of your question is about implementing something
> “independent” that just plugs in, and whether it’s possible. My answer is still YES
> as long as you design it that way. As an example of such design you may
> want to look at AMQP support where there are a pair of processors
> (PublishAMQP/ConsumeAMQP), but when you look at the code of these
> processors you’ll see that neither has any AMQP dependencies. Instead they
> depend on another class (let’s call it Worker), and that class is completely
> independent of NiFi. So in summary your protocol-specific Processor is
> independent of the protocol and your protocol-specific Worker is
> independent of NiFi, and the two delegate between one another through an object
> common to both (in this case a byte[]). The Spring processor mentioned above
> implements the same pattern.
>
> And one more point about Microservices. I am willing to go as far as
> saying that NiFi is a variation of a Microservices container. I am basing
> this on the fact that NiFi components (i.e., processors) implement a
> fundamental micro services pattern - a certain independence from their
> runtime and persistence of their results - which allows each and every
> component to be managed (started/stopped/reconfigured/changed) independently
> of upstream/downstream components.
>
> Ok, that is a load to process, so I’ll stop ;)
>
> Look forward to more questions
>
> Cheers
> Oleg
>
>
> On Dec 7, 2016, at 4:52 AM, kant kodali  wrote:
>
> I am also confused a little bit since I am new to Nifi. I wonder why Nifi
> would act as a server? isn't Nifi a routing layer between systems? because
> this brings in another question about Nifi in general.
>
> When I write my applications/microservices do I need to worry about how my
> service would talk to Nifi or do I have the freedom of just focusing on my
> application and using whatever protocol I want and In the end just plugin
> to Nifi which would take care? other words is Nifi a tight integration with
> applications such that I always have to import a Nifi Library within my
> application/microservice ? other words do I need to worry about Nifi at
> programming/development time of an Application/Microservice or at
> deployment time?
>
> Sorry if these are naive questions. But answers to those will help greatly
> and prevent me from asking more questions!
>
> Thanks much!
> kant
>
>
>
>
> On Wed, Dec 7, 2016 at 1:23 AM, kant kodali  wrote:
>
>> Hi Koji,
>>
>> That is an awesome explanation! I expected processors for HTTP2 at very
>> least since it is widely used ( the entire GRPC stack runs on that). I am
>> not sure how easy or hard it is to build one?
>>
>> Thanks!
>>
>> On Wed, Dec 7, 2016 at 1:08 AM, Koji Kawamura 
>> wrote:
>>
>>> Hi Kant,
>>>
>>> Although I'm not aware of existing processor for HTTP2 or NSQ, NiFi
>>> has a set of processors for WebSocket since 1.1.0.
>>> It enables NiFi to act as a WebSocket client to communicate with a
>>> remote WebSocket server, or makes NiFi a WebSocket server so that
>>> remote clients access to it via WebSocket protocol.
>>>
>>> I've written a blog 

Re: problem creating simple cluster

2016-12-07 Thread James Wing
Brian,

Did you add entries for the node DNs in the conf/authorizers.xml file?
Something like:


<property name="Node Identity 1">CN=node1.nifi, ...</property>
<property name="Node Identity 2">CN=node2.nifi, ...</property>
...

Thanks,

James

On Wed, Dec 7, 2016 at 8:28 AM, Brian Jeltema  wrote:

> I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
> 3-node unsecured configuration with each node running embedded
> zookeeper. The instances all come up and the zookeeper quorum is
> reached.
>
> If I bring up the UI for the node that is elected as the
> cluster coordinator, it works as expected, and shows that 3 nodes
> are participating in the cluster.
>
> However, if I attempt to display the UI on the non-coordinator nodes,
> after a delay of about 10 seconds an error page is returned. The
> logs contain a stream of exceptions similar to the following:
>
> 2016-12-07 11:08:17,914 WARN [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> Failed to replicate request GET /nifi-api/flow/current-user to
> localhost:8080 due to {}
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException:
> Read timed out
> at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(
> URLConnectionClientHandler.java:155) ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.handle(Client.java:652)
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(
> GZIPContentEncodingFilter.java:123) ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:509)
> ~[jersey-client-1.19.jar:1.19]
> at org.apache.nifi.cluster.coordination.http.replication.
> ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:578)
> ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
> at org.apache.nifi.cluster.coordination.http.replication.
> ThreadPoolRequestReplicator$NodeHttpRequest.run(
> ThreadPoolRequestReplicator.java:770) ~[nifi-framework-cluster-1.1.
> 0.jar:1.1.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_101]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_101]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_101]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_101]
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> ~[na:1.8.0_101]
> at java.net.SocketInputStream.read(SocketInputStream.java:170)
> ~[na:1.8.0_101]
> at java.net.SocketInputStream.read(SocketInputStream.java:141)
> ~[na:1.8.0_101]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> ~[na:1.8.0_101]
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
> ~[na:1.8.0_101]
> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
> ~[na:1.8.0_101]
> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
> ~[na:1.8.0_101]
> at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
> ~[na:1.8.0_101]
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
> ~[na:1.8.0_101]
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
> ~[na:1.8.0_101]
> at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
> ~[na:1.8.0_101]
> at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(
> URLConnectionClientHandler.java:253) ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(
> URLConnectionClientHandler.java:153) ~[jersey-client-1.19.jar:1.19]
> ... 12 common frames omitted
>
> The nifi.properties file contains the following, with
> nifi.cluster.node.address set appropriately on each node
>
>  # cluster node properties (only configure for cluster nodes) #
>  nifi.cluster.is.node=true
>  nifi.cluster.node.address=172.31.1.247
>  nifi.cluster.node.protocol.port=8081
>  nifi.cluster.node.protocol.threads=10
>  nifi.cluster.node.event.history.size=25
>  nifi.cluster.node.connection.timeout=5 sec
>  nifi.cluster.node.read.timeout=5 sec
>  nifi.cluster.firewall.file=
>  nifi.cluster.flow.election.max.wait.time=5 mins
>  nifi.cluster.flow.election.max.candidates=3
>
>  # zookeeper properties, used for cluster management #
>  nifi.zookeeper.connect.string=172.31.13.177:2181,172.31.1.247:2181,
> 172.31.1.69:2181
>  nifi.zookeeper.connect.timeout=3 secs
>  nifi.zookeeper.session.timeout=3 

problem creating simple cluster

2016-12-07 Thread Brian Jeltema
I’m trying to create my first cluster using NiFi 1.1.0. It’s a simple
3-node unsecured configuration with each node running embedded
zookeeper. The instances all come up and the zookeeper quorum is
reached.

If I bring up the UI for the node that is elected as the 
cluster coordinator, it works as expected, and shows that 3 nodes
are participating in the cluster.

However, if I attempt to display the UI on the non-coordinator nodes,
after a delay of about 10 seconds an error page is returned. The
logs contain a stream of exceptions similar to the following:

2016-12-07 11:08:17,914 WARN [Replicate Request Thread-2] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/flow/current-user to localhost:8080 due to {}
com.sun.jersey.api.client.ClientHandlerException: 
java.net.SocketTimeoutException: Read timed out
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
 ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.handle(Client.java:652) 
~[jersey-client-1.19.jar:1.19]
at 
com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
 ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
~[jersey-client-1.19.jar:1.19]
at 
com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) 
~[jersey-client-1.19.jar:1.19]
at 
com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:509) 
~[jersey-client-1.19.jar:1.19]
at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:578)
 ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:770)
 ~[nifi-framework-cluster-1.1.0.jar:1.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_101]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
~[na:1.8.0_101]
at java.net.SocketInputStream.read(SocketInputStream.java:170) 
~[na:1.8.0_101]
at java.net.SocketInputStream.read(SocketInputStream.java:141) 
~[na:1.8.0_101]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
~[na:1.8.0_101]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
~[na:1.8.0_101]
at java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
~[na:1.8.0_101]
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704) 
~[na:1.8.0_101]
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647) 
~[na:1.8.0_101]
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
 ~[na:1.8.0_101]
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
 ~[na:1.8.0_101]
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480) 
~[na:1.8.0_101]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
 ~[jersey-client-1.19.jar:1.19]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
 ~[jersey-client-1.19.jar:1.19]
... 12 common frames omitted

The nifi.properties file contains the following, with nifi.cluster.node.address 
set appropriately on each node

 # cluster node properties (only configure for cluster nodes) #
 nifi.cluster.is.node=true
 nifi.cluster.node.address=172.31.1.247
 nifi.cluster.node.protocol.port=8081
 nifi.cluster.node.protocol.threads=10
 nifi.cluster.node.event.history.size=25
 nifi.cluster.node.connection.timeout=5 sec
 nifi.cluster.node.read.timeout=5 sec
 nifi.cluster.firewall.file=
 nifi.cluster.flow.election.max.wait.time=5 mins
 nifi.cluster.flow.election.max.candidates=3

 # zookeeper properties, used for cluster management #
 
nifi.zookeeper.connect.string=172.31.13.177:2181,172.31.1.247:2181,172.31.1.69:2181
 nifi.zookeeper.connect.timeout=3 secs
 nifi.zookeeper.session.timeout=3 secs
 nifi.zookeeper.root.node=/nifi

I’ve run out of ideas; I presume I’ve overlooked some simple configuration 
value, but I can’t find it.
Any help out there?

Thanks

Re: Getting org.springframework.web.client.HttpClientErrorException: 409 Conflict on DELETE ProcessGroup

2016-12-07 Thread Matt Gilman
Ok. It could be caused if the ProcessGroup is not in a deletable state (if
for instance there is data queued up in an encapsulated connection). Again,
the response body should indicate the underlying issue.

I looked at the RestTemplate stuff and it appears you can specify a custom
ResponseErrorHandler [1] that provides you access to the underlying
response. The default implementation simply throws an exception as shown in
your previous stack traces.
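
For illustration only, a minimal sketch of such a handler (the class name is
made up; it assumes a reasonably recent Spring on the classpath and relies on
NiFi returning its error explanation as a text/plain body):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import org.springframework.http.client.ClientHttpResponse;
    import org.springframework.util.FileCopyUtils;
    import org.springframework.web.client.ResponseErrorHandler;

    // Surfaces the plain-text body NiFi sends back on errors such as 409 Conflict,
    // instead of the bare HttpClientErrorException thrown by the default handler.
    public class NifiResponseErrorHandler implements ResponseErrorHandler {

        @Override
        public boolean hasError(ClientHttpResponse response) throws IOException {
            return response.getRawStatusCode() >= 400;   // treat 4xx/5xx as errors
        }

        @Override
        public void handleError(ClientHttpResponse response) throws IOException {
            String body = new String(
                    FileCopyUtils.copyToByteArray(response.getBody()),
                    StandardCharsets.UTF_8);
            throw new IllegalStateException(
                    "HTTP " + response.getRawStatusCode() + " from NiFi: " + body);
        }
    }

Registering it with restTemplate.setErrorHandler(new NifiResponseErrorHandler())
should make the reason for the 409 visible in the exception message.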

Hope this helps.

Matt

[1]
http://docs.spring.io/spring/docs/3.2.x/javadoc-api/org/springframework/web/client/ResponseErrorHandler.html

On Wed, Dec 7, 2016 at 2:51 AM, Nirmal Kumar 
wrote:

> Each time I fire the DELETE request I first GET the ProcessGroup details
> and then fetch the *version*.
>
> So I am passing the correct version in DELETE request.
>
>
>
> Thanks,
>
> -Nirmal
>
>
>
> *From:* Matt Gilman [mailto:matt.c.gil...@gmail.com]
> *Sent:* Wednesday, December 7, 2016 12:37 AM
>
> *To:* users@nifi.apache.org
> *Subject:* Re: Getting 
> org.springframework.web.client.HttpClientErrorException:
> 409 Conflict on DELETE ProcessGroup
>
>
>
> I'm not familiar with Spring RestTemplate but I'm sure they offer the
> ability to get at the response body. Maybe it's possible to implement a
> custom error handler or something.
>
>
>
> Without knowing more, I could venture a guess that your requests are
> failing intermittently because you are not passing along the correct revision.
> NiFi employs an optimistic locking model whereby clients pass along a
> revision version to indicate that they currently hold the most up-to-date
> version of that component.
>
>
>
> Here is a blog post that details this and provides a nice sequence diagram to help
> visualize it [1]. The relevant details are towards the bottom.
>
>
>
> Matt
>
>
>
> [1] https://community.hortonworks.com/content/
> kbentry/3160/update-nifi-flow-on-the-fly-via-api.html
>
>
>
> On Tue, Dec 6, 2016 at 1:53 PM, Nirmal Kumar 
> wrote:
>
> Thanks but as I calling it programmatically using Spring RestTemplate I
> just get the stack trace as mentioned in my below mail.
>
>
>
> -Nirmal
>
>
>
> *From:* Matt Gilman [mailto:matt.c.gil...@gmail.com]
> *Sent:* Tuesday, December 6, 2016 6:58 PM
> *To:* users@nifi.apache.org
> *Subject:* Re: Getting 
> org.springframework.web.client.HttpClientErrorException:
> 409 Conflict on DELETE ProcessGroup
>
>
>
> Nirmal,
>
>
>
> If you consume the body of the response it should indicate the reason for
> the 409. On failed responses, the response body should be text/plain. Let
> me know if that helps.
>
>
>
> Thanks
>
>
>
> Matt
>
>
>
>
>
> On Mon, Dec 5, 2016 at 11:29 PM, Nirmal Kumar 
> wrote:
>
> Hi All,
>
>
>
> I am getting the following exception **intermittently** when firing a
> DELETE request for deleting the Process group:
>
> http://xxx.xxx.xxx.xxx:9091/nifi-api/process-groups/
> 103111c4-123b-14f9-a794-14795038dbdf?version=0
>
>
>
> org.springframework.web.client.HttpClientErrorException: 409 Conflict
>
> at org.springframework.web.client.DefaultResponseErrorHandler.
> handleError(DefaultResponseErrorHandler.java:91) ~[nifi-util-1.0.0.jar:na]
>
> at org.springframework.web.client.RestTemplate.
> handleResponseError(RestTemplate.java:615) ~[nifi-util-1.0.0.jar:na]
>
> at 
> org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:573)
> ~[nifi-util-1.0.0.jar:na]
>
> at 
> org.springframework.web.client.RestTemplate.execute(RestTemplate.java:529)
> ~[nifi-util-1.0.0.jar:na]
>
> at 
> org.springframework.web.client.RestTemplate.delete(RestTemplate.java:401)
> ~[nifi-util-1.0.0.jar:na]
>
>
>
> I am using Nifi 1.0.0 version.
>
>
>
> Not sure if setting these timeouts will help here:
>
>   public RestTemplate restTemplate() {
>       return new RestTemplate(clientHttpRequestFactory());
>   }
>
>   private ClientHttpRequestFactory clientHttpRequestFactory() {
>       HttpComponentsClientHttpRequestFactory factory =
>               new HttpComponentsClientHttpRequestFactory();
>       factory.setReadTimeout(2000);
>       factory.setConnectTimeout(2000);
>       return factory;
>   }
>
>
>
> Any pointers here would be great.
>
>
>
> Thanks,
>
> -Nirmal

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread Oleg Zhurakousky
Kant

There are a couple of questions here, so let me try them one at a time.
1. “why Nifi would act as a server?”. Well, NiFi is a runtime environment where 
things are happening. To be more exact, NiFi is a runtime environment where 
things are triggered. The big distinction here is trigger vs happening. In 
other words you may choose (as most processors do) to have NiFi act as an 
execution container and run your code, or you may choose for NiFi to be a 
triggering container that triggers your code to run elsewhere. An example of 
the latter is the SpringContextProcessor, which, while still running in the 
same JVM, delegates execution to a Spring application that runs in its own 
container (which could just as well be a separate JVM or any other container - 
a la micro services). So in this case NiFi still manages the orchestration, 
mediation, security etc.

2. “...do I need to worry about how my service would talk to Nifi or do I have 
the freedom of just focusing on my application and using whatever protocol I 
want and In the end just plugin to Nifi which would take care?” That is a 
loaded question and I must admit not fully understood, but I’ll give it a try. 
When integrating with NiFi you use one of the integration points such as 
Processor and/or ControllerService. Those are NiFi’s known strategies 
(interfaces), and so your custom Processor would need to implement such a 
strategy and obviously be compliant with its contract. However, the other part 
of your question is about implementing something “independent” and just 
plugging it in, and whether that’s possible. My answer is still YES as long as 
you design it that way. As an example of such design you may want to look at 
the AMQP support, where there is a pair of processors (PublishAMQP/ConsumeAMQP); 
when you look at the code of these processors you’ll see that neither has any 
AMQP dependencies. Instead they depend on another class (let’s call it a Worker) 
and that class is completely independent of NiFi. So in summary your Processor 
is independent of the protocol and your protocol-specific Worker is independent 
of NiFi, and the two delegate to one another through an object common to both 
(in this case a byte[]). The Spring processor mentioned above implements the 
same pattern.
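
For illustration only, a rough sketch of that Processor/Worker split (the class
names below are invented and the Worker body is a stub; the real
PublishAMQP/ConsumeAMQP processors are considerably more involved):

    import java.io.ByteArrayInputStream;
    import java.util.Collections;
    import java.util.Set;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    // The Worker speaks the custom protocol and has no NiFi dependency at all.
    class MyProtocolWorker {
        byte[] receiveMessage() {
            // ...talk the custom protocol here and return the raw payload...
            return "hello".getBytes();
        }
    }

    // The Processor knows NiFi (sessions, flowfiles, relationships) but delegates
    // every protocol detail to the Worker; the two only exchange a byte[].
    public class ConsumeMyProtocol extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success")
                .description("Messages received from the custom protocol")
                .build();

        private final MyProtocolWorker worker = new MyProtocolWorker();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.singleton(REL_SUCCESS);
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session)
                throws ProcessException {
            byte[] payload = worker.receiveMessage();    // protocol side
            if (payload == null) {
                return;                                  // nothing this trigger
            }
            FlowFile flowFile = session.create();        // NiFi side
            flowFile = session.importFrom(new ByteArrayInputStream(payload), flowFile);
            session.transfer(flowFile, REL_SUCCESS);
        }
    }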

And one more point about Microservices. I am willing to go as far as saying 
that NiFi is a variation of a Microservices container. I am basing this on the 
fact that NiFi components (i.e., processors) implement a fundamental micro 
services pattern - a certain independence from their runtime and persistence 
of their results - which allows each and every component to be managed 
(started/stopped/reconfigured/changed) independently of upstream/downstream 
components.

Ok, that is a load to process, so I’ll stop ;)

Look forward to more questions

Cheers
Oleg


On Dec 7, 2016, at 4:52 AM, kant kodali 
> wrote:

I am also confused a little bit since I am new to Nifi. I wonder why Nifi would 
act as a server? isn't Nifi a routing layer between systems? because this 
brings in another question about Nifi in general.

When I write my applications/microservices do I need to worry about how my 
service would talk to Nifi or do I have the freedom of just focusing on my 
application and using whatever protocol I want and In the end just plugin to 
Nifi which would take care? other words is Nifi a tight integration with 
applications such that I always have to import a Nifi Library within my 
application/microservice ? other words do I need to worry about Nifi at 
programming/development time of an Application/Microservice or at deployment 
time?

Sorry if these are naive questions. But answers to those will help greatly and 
prevent me from asking more questions!

Thanks much!
kant




On Wed, Dec 7, 2016 at 1:23 AM, kant kodali 
> wrote:
Hi Koji,

That is an awesome explanation! I expected processors for HTTP2 at very least 
since it is widely used ( the entire GRPC stack runs on that). I am not sure 
how easy or hard it is to build one?

Thanks!

On Wed, Dec 7, 2016 at 1:08 AM, Koji Kawamura 
> wrote:
Hi Kant,

Although I'm not aware of existing processor for HTTP2 or NSQ, NiFi
has a set of processors for WebSocket since 1.1.0.
It enables NiFi to act as a WebSocket client to communicate with a
remote WebSocket server, or makes NiFi a WebSocket server so that
remote clients access to it via WebSocket protocol.

I've written a blog post about how to use it, I hope it will be useful
for your use case:
http://ijokarumawak.github.io/nifi/2016/11/04/nifi-websocket/

Thanks,
Koji



On Wed, Dec 7, 2016 at 3:23 PM, kant kodali 
> wrote:
> Thanks a ton guys! Didn't expect Nifi community to be so good! (Another
> convincing reason!)
>
> Coming back to the problem, We use NSQ a lot (although not my favorite) and
> want 

Re: How to integrate a custom protocol with Apache Nifi

2016-12-07 Thread kant kodali
I am also confused a little bit since I am new to Nifi. I wonder why Nifi
would act as a server? Isn't Nifi a routing layer between systems? This
brings in another question about Nifi in general.

When I write my applications/microservices, do I need to worry about how my
service would talk to Nifi, or do I have the freedom of just focusing on my
application, using whatever protocol I want, and in the end just plugging
into Nifi, which would take care of the rest? In other words, is Nifi so
tightly integrated with applications that I always have to import a Nifi
library within my application/microservice? In other words, do I need to
worry about Nifi at programming/development time of an
application/microservice, or at deployment time?

Sorry if these are naive questions. But answers to those will help greatly
and prevent me from asking more questions!

Thanks much!
kant




On Wed, Dec 7, 2016 at 1:23 AM, kant kodali  wrote:

> Hi Koji,
>
> That is an awesome explanation! I expected processors for HTTP2 at very
> least since it is widely used ( the entire GRPC stack runs on that). I am
> not sure how easy or hard it is to build one?
>
> Thanks!
>
> On Wed, Dec 7, 2016 at 1:08 AM, Koji Kawamura 
> wrote:
>
>> Hi Kant,
>>
>> Although I'm not aware of existing processor for HTTP2 or NSQ, NiFi
>> has a set of processors for WebSocket since 1.1.0.
>> It enables NiFi to act as a WebSocket client to communicate with a
>> remote WebSocket server, or makes NiFi a WebSocket server so that
>> remote clients access to it via WebSocket protocol.
>>
>> I've written a blog post about how to use it, I hope it will be useful
>> for your use case:
>> http://ijokarumawak.github.io/nifi/2016/11/04/nifi-websocket/
>>
>> Thanks,
>> Koji
>>
>>
>>
>> On Wed, Dec 7, 2016 at 3:23 PM, kant kodali  wrote:
>> > Thanks a ton guys! Didn't expect Nifi community to be so good! (Another
>> > convincing reason!)
>> >
>> > Coming back to the problem, We use NSQ a lot (although not my favorite)
>> and
>> > want to be able integrate Nifi with NSQ to other systems such as Kafka,
>> > Spark, Cassandra, ElasticSearch, some micro services that uses HTTP2 and
>> > some micro service that uses Websockets
>> >
>> > (which brings in other question if Nifi has HTTP2 and Websocket
>> processors?)
>> >
>> > Also below is NSQ protocol spec. They have Java client library as well.
>> >
>> > http://nsq.io/clients/tcp_protocol_spec.html
>> >
>> >
>> > On Tue, Dec 6, 2016 at 5:14 PM, Oleg Zhurakousky
>> >  wrote:
>> >>
>> >> Hi Kant
>> >>
>> >> What you’re trying to accomplish is definitely possible, however more
>> >> information may be needed from you.
>> >> For example, the way I understand your statement about “integration
>> with
>> >> many systems” is something like JMS, Kafka, TCP, FTP etc…. If that is
>> the
>> >> case such integration is definitely possible with your “custom system”
>> by
>> >> developing custom Processor and/or ControllerService.
>> >> Processors and ControllerServices are the two main integration points
>> >> within NiFi
>> >> You can definitely find may examples by looking at some of the
>> processors
>> >> (i.e., PublishKafka or ConsumeKafka, PublishJMS or ConsumeJMS etc.)
>> >>
>> >> Let us know if you need more help to guide you through the process.
>> >>
>> >> Cheers
>> >> Oleg
>> >>
>> >> > On Dec 6, 2016, at 7:46 PM, kant kodali  wrote:
>> >> >
>> >> > HI All,
>> >> >
>> >> > I understand that Apache Nifi has integration with many systems but
>> what
>> >> > If I have an application that talks a custom protocol ? How do I
>> integrate
>> >> > Apache Nifi with the custom protocol?
>> >> >
>> >> > Thanks,
>> >> > kant
>> >>
>> >
>>
>
>