Re: Past month data insertion is possible

2017-01-26 Thread Lee Laim
In that case, extract the 'month' and 'year' using ExtractText, followed
by RouteOnAttribute to send all months except January to an UpdateAttribute
processor that decrements the month: previous_month:
${month:minus(1):toRadix(10,2)}, keeping the year as year_prime.

The exception is January, which would be routed to a branch where
previous_month is set to 12 and the year is decremented,
${year:minus(1)}, stored as an attribute called year_prime.

The two branches could then be funneled back into the same flow.
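For reference, the two branches amount to the following computation (a minimal Python sketch of the logic, not NiFi Expression Language):

```python
def previous_month(year, month):
    """Return (year_prime, previous_month), with January wrapping to
    December of the prior year -- mirroring the two-branch flow above."""
    if month == 1:
        return year - 1, 12   # January branch: decrement the year
    return year, month - 1    # all other months: decrement the month

# previous_month(2017, 1) -> (2016, 12); previous_month(2017, 5) -> (2017, 4)
```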




Re: Past month data insertion is possible

2017-01-26 Thread prabhu Mahendran
Lee,

This approach may not work for me.

What I actually want is this:

Input given  -  Output expected

01-2017  -  12-2016

05-2017  -  04-2017

If my input is the current month with its year, the expected output is the
previous month, cross-checked against the year (the year must also be
considered, to handle the January case).

Your answer may only satisfy a less-than condition covering all past months.



Re: Past month data insertion is possible

2017-01-26 Thread Lee Laim
Prabhu,

Using epoch time might end up being a simpler comparison. If the
converted date is less than 1483254000 (midnight of the first day of the
current month, for my timezone), it is the previous month.

Thanks,
Lee
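The cutoff can be computed rather than hard-coded (a Python sketch; note that, as prabhu points out elsewhere in the thread, "less than the first of the month" matches any past month, not only the immediately previous one):

```python
import time
from datetime import datetime

def first_of_month_epoch(now=None):
    """Epoch seconds for midnight on the first day of the current month,
    in the local timezone (Lee's 1483254000 corresponds to 2017-01-01
    00:00 at UTC-7)."""
    now = now or datetime.now()
    return int(time.mktime(datetime(now.year, now.month, 1).timetuple()))

def is_before_current_month(record_ts):
    """True when an epoch timestamp falls before the current month."""
    return record_ts < first_of_month_epoch()
```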


>


Re: Past month data insertion is possible

2017-01-26 Thread prabhu Mahendran
Hi Andy,

I have already tried your alternative solution:
"UpdateAttribute to add a new attribute with the previous month value, and
then RouteOnAttribute to determine if the flowfile should be inserted."

I used the Expression Language below in RouteOnAttribute:

${literal('Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec'):getDelimitedField(${csv.1:toDate('dd.MM.
hh:mm:ss'):format('MM')}):equals(${literal('Dec,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov'):getDelimitedField(${now():toDate('
Z MM dd HH:mm:ss.SSS '):format('MM'):toNumber()})})}

It fails on data like this:

23.12.2015,Andy,21
23.12.2017,Present,32

My data may contain past and future years; rows from those years also
match my expression, so they get inserted too.

I need to check the month together with the year in the data.

How can I check that?
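A check that compares month and year together can be sketched in Python, e.g. inside an ExecuteScript processor; the field layout and date format here are assumptions based on the sample rows:

```python
from datetime import datetime

def matches_previous_month(record_field, now=None):
    """True only when the record is from the month immediately before the
    current one -- month AND year are checked, so 23.12.2015 and 23.12.2017
    are both rejected when the current month is Jan 2017.
    record_field is the first CSV column, e.g. '23.12.2016 12:02:23'."""
    now = now or datetime.now()
    rec = datetime.strptime(record_field.split()[0], "%d.%m.%Y")
    prev = (now.year - 1, 12) if now.month == 1 else (now.year, now.month - 1)
    return (rec.year, rec.month) == prev
```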


>


Re: Past month data insertion is possible

2017-01-26 Thread Andy LoPresto
Prabhu,

I answered this question with an ExecuteScript example which will do what you 
are looking for on Stack Overflow [1].

[1] http://stackoverflow.com/a/41887397/70465

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69






Past month data insertion is possible

2017-01-26 Thread prabhu Mahendran
Hi All,

I have data in which I need to compare the month: if a row is from the
previous month it should be inserted, otherwise not.

Example:

23.12.2016 12:02:23,Koji,24
22.01.2016 01:21:22,Mahi,24

I need to take the first column of the data (23.12.2016 12:02:23) and
extract the month (12) from it, then compare that with the month before
the current month, like so:

If the current month is 'Jan_2017', the month before it is 'Dec_2016'.

For the first row, compare that 'Dec_2016' with the month of the data,
'Dec_2016' [23.12.2016]. If it matches, insert the row into the
database; if not, ignore it.

Is this possible in NiFi?

Many thanks,
prabhu


Re: Convert xls to csv in Nifi.

2017-01-26 Thread prabhu Mahendran
Jeremy, thanks for the information.

I understand you will make an effort to add the HSSF implementation to
the PR, to convert XLS into CSV. Is that right?

Many thanks,
prabhu

On Wed, Jan 25, 2017 at 7:01 PM, Jeremy Dyer  wrote:

> Prabhu - NIFI-2613 is currently only able to convert .xlsx documents to
> csv. The processor uses the Apache POI XSSF implementation, which only
> supports .xlsx, while HSSF handles pre-2007 Excel files. I think, to your
> point, I should probably make an effort to add the HSSF implementation to
> the existing PR.
>
> - Jeremy
>
> On Wed, Jan 25, 2017 at 1:31 AM, prabhu Mahendran  > wrote:
>
>> Hi All,
>>
>> I have looked into the JIRA below for converting my Excel documents into
>> CSV files:
>>
>> https://issues.apache.org/jira/browse/NIFI-2613
>>
>> I applied the patches from the GitHub pull request, and I am able to
>> convert my .xlsx documents into CSV files.
>>
>> But when I give it .xls documents, they can't be converted to CSV.
>>
>> Is the patch in the JIRA applicable only to .xlsx files, or to all
>> Excel formats?
>>
>> Many thanks,
>> prabhu
>>
>
>


Re: NiFi 1.1.0 stuck starting, no errors

2017-01-26 Thread Andy LoPresto
Hi Peter,

Can you provide a thread dump of the process? You should be able to do this via 
the jcmd tool [1].

[1] 
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr006.html#BABEHABG
 


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69


Re: Asynchronous Request/Response Data Flows

2017-01-26 Thread Bryan Bende
In that case, maybe you can use PutDistributedMapCache and
FetchDistributedMapCache?

You might need to modify PutDistributedMapCache so that it allows
caching attributes, though; currently it caches only the flow file
content.


>> On Thu, Jan 26, 2017 at 3:16 PM, Joe Witt  wrote:
>>
>> Ben,
>>
>> One way to approach this is using the sort of capabilities this opens
>> up: https://issues.apache.org/jira/browse/NIFI-190
>>
>> Certainly is a good case/idea to work through.  Doable and
>> increasingly seems to be an ask.
>>
>> Thanks
>> Joe
>>
>> On Thu, Jan 26, 2017 at 3:10 PM, Benjamin Janssen 
>> wrote:
>> > Hello all,
>> >
>> > I've got a use case where I get some data, I want to fork a portion of
>> that
>> > data off to an external service for asynchronous processing, and when
>> that
>> > external service has finished processing the data, I want to take its
>> > output, marry it up with the original data, and pass the whole thing on
>> for
>> > further processing.
>> >
>> > So essentially two data flows:
>> >
>> > Receive Data -> Store Some State -> Send Data To External Service
>> >
>> > Do More Processing On Original Data + Results <-  Retrieve Previously
>> Stored
>> > State  <-  Receive Results From External Service
>> >
>> > Is there a way to do this while taking advantage of NiFi's State
>> Management
>> > capabilities?  I wasn't finding any obvious processors for persisting
>> and
>> > retrieving shared state from State Management.  The closest my googling
>> was
>> > able to get me was this:  https://issues.apache.org/
>> jira/browse/NIFI-1582
>> > but if I'm understanding the State Management documentation properly,
>> that
>> > won't actually help me because I'd need the same processor to do all
>> storing
>> > and retrieving of state?
>> >
>> > Does something exist to use State Management like this?  Or is what I'm
>> > proposing to do a bad idea?
>> >
>> > Or maybe I should just be using the DistributedMapCacheServer for this?
>> >
>> > Any help/advice would be appreciated.
>>
>>
>>


Re: Asynchronous Request/Response Data Flows

2017-01-26 Thread Benjamin Janssen
What if the bulk of the data is what is coming back from the remote
processing?  So I really want a pattern like this:

Store a couple of attributes.
Send small request off for additional processing.
Receive response from remote processing.
Append to that response the originally saved off attributes.

Could I accomplish the same thing by reversing the ordering of the
Wait/Notify processors?  So do something like this:

Get a file for processing.
Send it to Notify.
Send it to remote processing.
Remote processing data comes back.
Send it to Wait (where it doesn't actually wait at all).
Data leaves Wait with appended attributes from the Notify.


>


NiFi 1.1.0 stuck starting, no errors

2017-01-26 Thread Peter Wicks (pwicks)
I'm looking for help in troubleshooting my NiFi 1.1.0 install.  It's been 
running stably for some time, but I restarted it this morning when I deployed 
an updated custom NAR. Now it gets stuck at startup, see logs at the end.
There are no error messages, and the processes don't die. The process just 
seems to be hanging waiting for something.


* My first thought was to try rolling back the modified nar, and even 
just removing the nar altogether since it was custom.  Neither of these made 
any difference.

* I also tried deleting the "work" folder, which has fixed nar 
versioning issues for me in the past (not really related, but was worth a 
shot). This made no difference.

* NiFi is set to start with java.arg.2=-Xms4G and java.arg.3=-Xmx8G, 
22GB's of free RAM are available on the system (out of some 60GB's total).

* I've checked running processes, and when I stop NiFi no rogue 
instances are left running.

* Since NiFi gets stuck right around the JettyServer step I checked to 
see if any processes were using port 8443. No other processes are using this 
port.

* I thought maybe a key file was being locked, but with NiFi off `lsof 
| grep nifi` returns no locked files.

Nifi-app Log:
2017-01-26 20:23:43,359 INFO [main] org.eclipse.jetty.util.log Logging 
initialized @90357ms
2017-01-26 20:23:43,418 INFO [main] org.apache.nifi.web.server.JettyServer 
Configuring Jetty for HTTPs on port: 8443
2017-01-26 20:23:43,691 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/extensions/nifi-media-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-image-viewer-1.1.0.war
 with context path set to /nifi-image-viewer-1.1.0
2017-01-26 20:23:43,702 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/extensions/nifi-update-attribute-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-update-attribute-ui-1.1.0.war
 with context path set to /nifi-update-attribute-ui-1.1.0
2017-01-26 20:23:43,703 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading UI extension [ProcessorConfiguration, /nifi-update-attribute-ui-1.1.0] 
for [org.apache.nifi.processors.attributes.UpdateAttribute]
2017-01-26 20:23:43,713 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/extensions/nifi-standard-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-standard-content-viewer-1.1.0.war
 with context path set to /nifi-standard-content-viewer-1.1.0
2017-01-26 20:23:43,723 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/extensions/nifi-standard-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-jolt-transform-json-ui-1.1.0.war
 with context path set to /nifi-jolt-transform-json-ui-1.1.0
2017-01-26 20:23:43,724 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading UI extension [ProcessorConfiguration, 
/nifi-jolt-transform-json-ui-1.1.0] for 
[org.apache.nifi.processors.standard.JoltTransformJSON]
2017-01-26 20:23:43,729 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/framework/nifi-framework-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-ui-1.1.0.war
 with context path set to /nifi
2017-01-26 20:23:43,733 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/framework/nifi-framework-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.1.0.war
 with context path set to /nifi-api
2017-01-26 20:23:43,735 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/framework/nifi-framework-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.1.0.war
 with context path set to /nifi-content-viewer
2017-01-26 20:23:43,738 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/framework/nifi-framework-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.1.0.war
 with context path set to /nifi-docs
2017-01-26 20:23:43,753 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading documents web app with context path set to /nifi-docs
2017-01-26 20:23:43,761 INFO [main] org.apache.nifi.web.server.JettyServer 
Loading WAR: 
/data/nifi/nifi-1.1.0/./work/nar/framework/nifi-framework-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.1.0.war
 with context path set to /
2017-01-26 20:23:43,804 INFO [main] org.eclipse.jetty.server.Server 
jetty-9.3.9.v20160517
2017-01-26 20:23:44,748 INFO [main] o.e.jetty.server.handler.ContextHandler 
Started 
o.e.j.w.WebAppContext@4b511e61{/nifi-image-viewer-1.1.0,file:///data/nifi/nifi-1.1.0/work/jetty/nifi-image-viewer-1.1.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-media-nar-1.1.0.nar-unpacked/META-INF/bundled-dependencies/nifi-image-viewer-1.1.0.war}
2017-01-26 20:23:46,566 

Re: Asynchronous Request/Response Data Flows

2017-01-26 Thread Bryan Bende
Ben,

To elaborate on NIFI-190... this ticket introduced two new processors (Wait
and Notify) that use the DistributedMapCacheServer to communicate. They
aren't released yet, but are in the master branch.

One example of using these processors is something like the following:

- Let's say we have a flow file where the content is a CSV file, and each
line is a URL to do a look-up somewhere
- The flow file can be sent to a SplitText processor to get each line into
its own flow file
- The "original" relationship from SplitText can go to a Wait processor
which will keep checking the cache for N signals (in this case N = the
number of splits)
- The "splits" relationship would go down a separate path where each split
would be processed and eventually hit a Notify processor, which would
increment the number of signals in the cache and optionally add attributes
- When Wait sees N signals (or when an expiration is reached) it releases
the original flow file and can optionally copy over attributes that the
signals put in the cache

So you get to continue processing the original flow file that is the CSV,
while still being able to process the splits individually and get the
attributes from them that might be the "results" in your case.
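The counting that the Wait/Notify pair performs against the cache can be
sketched in plain Java (a toy simulation of the semantics only; this is not
the NiFi API, and all names here are made up for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WaitNotifySketch {
    // Stand-in for the DistributedMapCache: a counter keyed by a signal id.
    static final Map<String, Integer> cache = new ConcurrentHashMap<>();

    // The Notify side: each processed split increments the signal count.
    static void notifySignal(String signalId) {
        cache.merge(signalId, 1, Integer::sum);
    }

    // The Wait side: keeps checking until N signals have arrived.
    static boolean readyToRelease(String signalId, int expectedSplits) {
        return cache.getOrDefault(signalId, 0) >= expectedSplits;
    }

    public static void main(String[] args) {
        String id = "csv-1234";   // hypothetical signal identifier
        int splits = 3;           // N = the number of splits
        for (int i = 0; i < splits; i++) {
            notifySignal(id);
        }
        System.out.println(readyToRelease(id, splits)); // prints "true"
    }
}
```

In the real flow the increment and the check happen on different NiFi nodes
through the DistributedMapCacheServer; the sketch only shows the release
condition.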

Hope that helps.

-Bryan


On Thu, Jan 26, 2017 at 3:16 PM, Joe Witt  wrote:

> Ben,
>
> One way to approach this is using the sort of capabilities this opens
> up: https://issues.apache.org/jira/browse/NIFI-190
>
> Certainly is a good case/idea to work through.  Doable and
> increasingly seems to be an ask.
>
> Thanks
> Joe
>
> On Thu, Jan 26, 2017 at 3:10 PM, Benjamin Janssen 
> wrote:
> > Hello all,
> >
> > I've got a use case where I get some data, I want to fork a portion of
> that
> > data off to an external service for asynchronous processing, and when
> that
> > external service has finished processing the data, I want to take its
> > output, marry it up with the original data, and pass the whole thing on
> for
> > further processing.
> >
> > So essentially two data flows:
> >
> > Receive Data -> Store Some State -> Send Data To External Service
> >
> > Do More Processing On Original Data + Results <-  Retrieve Previously
> Stored
> > State  <-  Receive Results From External Service
> >
> > Is there a way to do this while taking advantage of NiFi's State
> Management
> > capabilities?  I wasn't finding any obvious processors for persisting and
> > retrieving shared state from State Management.  The closest my googling
> was
> > able to get me was this:  https://issues.apache.org/
> jira/browse/NIFI-1582
> > but if I'm understanding the State Management documentation properly,
> that
> > won't actually help me because I'd need the same processor to do all
> storing
> > and retrieving of state?
> >
> > Does something exist to use State Management like this?  Or is what I'm
> > proposing to do a bad idea?
> >
> > Or maybe I should just be using the DistributedMapCacheServer for this?
> >
> > Any help/advice would be appreciated.
>


Re: Asynchronous Request/Response Data Flows

2017-01-26 Thread Joe Witt
Ben,

One way to approach this is using the sort of capabilities this opens
up: https://issues.apache.org/jira/browse/NIFI-190

Certainly is a good case/idea to work through.  Doable and
increasingly seems to be an ask.

Thanks
Joe

On Thu, Jan 26, 2017 at 3:10 PM, Benjamin Janssen  wrote:
> Hello all,
>
> I've got a use case where I get some data, I want to fork a portion of that
> data off to an external service for asynchronous processing, and when that
> external service has finished processing the data, I want to take its
> output, marry it up with the original data, and pass the whole thing on for
> further processing.
>
> So essentially two data flows:
>
> Receive Data -> Store Some State -> Send Data To External Service
>
> Do More Processing On Original Data + Results <-  Retrieve Previously Stored
> State  <-  Receive Results From External Service
>
> Is there a way to do this while taking advantage of NiFi's State Management
> capabilities?  I wasn't finding any obvious processors for persisting and
> retrieving shared state from State Management.  The closest my googling was
> able to get me was this:  https://issues.apache.org/jira/browse/NIFI-1582
> but if I'm understanding the State Management documentation properly, that
> won't actually help me because I'd need the same processor to do all storing
> and retrieving of state?
>
> Does something exist to use State Management like this?  Or is what I'm
> proposing to do a bad idea?
>
> Or maybe I should just be using the DistributedMapCacheServer for this?
>
> Any help/advice would be appreciated.


Asynchronous Request/Response Data Flows

2017-01-26 Thread Benjamin Janssen
Hello all,

I've got a use case where I get some data, I want to fork a portion of that
data off to an external service for asynchronous processing, and when that
external service has finished processing the data, I want to take its
output, marry it up with the original data, and pass the whole thing on for
further processing.

So essentially two data flows:

Receive Data -> Store Some State -> Send Data To External Service

Do More Processing On Original Data + Results <-  Retrieve Previously
Stored State  <-  Receive Results From External Service

Is there a way to do this while taking advantage of NiFi's State Management
capabilities?  I wasn't finding any obvious processors for persisting and
retrieving shared state from State Management.  The closest my googling was
able to get me was this:  https://issues.apache.org/jira/browse/NIFI-1582
but if I'm understanding the State Management documentation properly, that
won't actually help me because I'd need the same processor to do all
storing and retrieving of state?

Does something exist to use State Management like this?  Or is what I'm
proposing to do a bad idea?

Or maybe I should just be using the DistributedMapCacheServer for this?

Any help/advice would be appreciated.


Re: Scientific Notation conversion?

2017-01-26 Thread Matt Burgess
Sven,

Are your values Strings or numbers? Meaning does the JSON look like:

{ "a": "2.1234567891E10" }
 or
{ "a" : 2.1234567891E10 }

If the latter, would the output field ("a" or "new_a" or whatever)
have to remain a number, or is a String OK?  I think most
applications/libraries default the "printed" version of a number to
Java's Number.toString() method, so if you want any number to be
displayed with a thousands separator you are really doing string
manipulation rather than math, and what you propose should work fine.
Adding a thousands separator is itself a string operation as there is
no real concept of a separator when it comes to dealing with numbers
as numbers.

If you just want to change the "manageable" numbers to human-readable
format (and have them remain numbers), you could use ExecuteScript
with Groovy and simply read in and write out the JSON object; Groovy
will represent some of the example values in their more human-readable
form.  For example, I started with this JSON:

{
  "a": 1,
  "b": 2.23456789E6,
  "c": 2.1234567891E10,
  "d": 15.123456789E7,
  "e": 2.1234567891E23
}


And used this Groovy script in ExecuteScript:

import groovy.json.*
import org.apache.commons.io.IOUtils
import java.nio.charset.StandardCharsets

def flowFile = session.get()
if (!flowFile) return
flowFile = session.write(flowFile, { inputStream, outputStream ->
    def json = new JsonSlurper().parseText(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
    outputStream.write(JsonOutput.prettyPrint(JsonOutput.toJson(json)).getBytes(StandardCharsets.UTF_8))
} as StreamCallback)
session.transfer(flowFile, REL_SUCCESS)


And got this result:
{
  "a": 1,
  "b": 2234567.89,
  "c": 21234567891,
  "d": 151234567.89,
  "e": 2.1234567891E+23
}

Your original example values ("a" through "d") have been "expanded",
while the one I added ("e", with a large exponent) has remained in
scientific notation.

Alternatively, if you have a flat JSON object and don't mind that the
numeric values will be replaced by strings, you can add the following
to the script before the outputStream.write() line:

json.each {k,v ->
  json[k] = java.text.NumberFormat.getNumberInstance(Locale.US).format(v)
}


to get this output:
{
  "a": "1",
  "b": "2,234,567.89",
  "c": "21,234,567,891",
  "d": "151,234,567.89",
  "e": "212,345,678,910,000,000,000,000"
}

If your JSON is not flat but you know which fields contain the numbers
you wish to transform, you can refer to those fields in dot notation
(vs doing the json.each() to get each top-level key/value pair) as
they are just Maps once the JSON has been "slurped" [1].
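The formatting call in the snippet above can also be exercised in plain
Java, independent of NiFi and Groovy (a minimal sketch; the class and
method names are mine, not from the thread):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class HumanReadable {
    // Same call as in the Groovy snippet: locale-aware thousands grouping.
    public static String pretty(double v) {
        return NumberFormat.getNumberInstance(Locale.US).format(v);
    }

    public static void main(String[] args) {
        System.out.println(pretty(2.1234567891E10)); // 21,234,567,891
        System.out.println(pretty(2.23456789E6));    // 2,234,567.89
    }
}
```

Note that the default NumberFormat keeps at most three fraction digits, so
values with longer fractional parts would be rounded.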

Regards,
Matt

[1] http://groovy-lang.org/json.html

On Thu, Jan 26, 2017 at 9:41 AM, Sven Davison  wrote:
> I have a JSON object that SOMETIMES contains scientific notation for the
> value. I want to have a nice or "human readable" number with a thousands
> separator.
>
> example values:
> 1
> 2.23456789E6
> 2.1234567891E10
> 15.123456789E7
>
> What I thought of doing was checking for the existence of an E, then
> forking the flow. If it has an E, get the value to the right of the E,
> split the string into an array (including the "."), find the index
> location of the ".", then move it X positions per the value indicated on
> the right of the E. Once that's done, go back and add the thousands
> separator "," for easy reading.
>
> This is string manipulation rather than "math". Can anyone recommend an
> easier way?
>
>
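For completeness, the exponent-shifting described in the question can be
avoided entirely: java.math.BigDecimal already parses scientific notation,
so expanding it and adding the separator is two library calls (a sketch;
the class and method names here are illustrative):

```java
import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.Locale;

public class ExpandSciNotation {
    // Expands scientific notation and adds a thousands separator in one pass.
    public static String humanReadable(String raw) {
        BigDecimal d = new BigDecimal(raw);  // parses "2.1234567891E10" etc.
        return NumberFormat.getNumberInstance(Locale.US).format(d);
    }

    public static void main(String[] args) {
        System.out.println(humanReadable("2.1234567891E10")); // 21,234,567,891
        System.out.println(humanReadable("15.123456789E7"));  // 151,234,567.89
        System.out.println(humanReadable("1"));               // 1
    }
}
```

BigDecimal.toPlainString() alone would give the expanded form without the
separators, if string output without grouping is acceptable.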


Re: nifi at AWS

2017-01-26 Thread mohammed shambakey
Thank you all for your help

I'm using NiFi 1.1.1 and the HTTP protocol. I can access the remote AWS
instance from my local browser, but when I try to upload a local file to
that remote instance (using a GetFile processor), the transaction fails.

Regards

On Wed, Jan 25, 2017 at 9:51 PM, Koji Kawamura 
wrote:

> Hi Mohammed,
>
> Which version of NiFi are you using? If it's 1.0.0 or later, you can
> choose 'HTTP' as 'Transport Protocol' in RemoteProcessGroup
> configuration in your local NiFi, this is what Andrew suggested
> earlier.
>
> With HTTP transport protocol, the local NiFi will use HTTP port (8080
> in your case) to send flow files to that remote NiFi running on AWS.
> If you can access remote NiFi's UI from browser without issue, this
> should work.
>
> As a side note, if you prefer using RAW transport protocol, then you'd
> have to open additional port, which is defined as
> nifi.remote.input.socket.port, in AWS security group setting. Since
> HTTP transport protocol doesn't require this, HTTP is more advisable.
>
> Thanks,
> Koji
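Pulling the advice above together, the relevant entries in nifi.properties
on the remote (AWS) instance would look roughly like the following. These
values are illustrative, not taken from the thread:

```properties
# nifi.properties on the remote NiFi instance (illustrative values)
nifi.web.http.port=8080                # UI/API port; HTTP site-to-site reuses it
nifi.remote.input.host=ec2-xx-xx-xx-xx.compute-1.amazonaws.com
nifi.remote.input.http.enabled=true    # HTTP transport for site-to-site
# Only needed (and only worth opening in the security group) for RAW transport:
nifi.remote.input.socket.port=10443
```

With HTTP transport, only the web port (8080 here) needs to be reachable
from the local NiFi.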
>
> On Wed, Jan 25, 2017 at 1:07 AM, mohammed shambakey
>  wrote:
> > yes
> >
> > On Tue, Jan 24, 2017 at 10:40 AM, Antunes, Ravel <
> ravel.antu...@disney.com>
> > wrote:
> >>
> >> Have you set nifi.remote.input.host to the EC2 instance public DNS?
> >>
> >>
> >>
> >> From: mohammed shambakey 
> >> Reply-To: "users@nifi.apache.org" 
> >> Date: Tuesday, January 24, 2017 at 9:25 AM
> >> To: "users@nifi.apache.org" 
> >> Subject: Re: nifi at AWS
> >>
> >>
> >>
> >> Thank you all.
> >>
> >>
> >>
> >> I'm already using HTTP, and "transmission" and "ports" are open (I think
> >> this is what you mean by nifi.remote.input.socket.port, right?), but
> still
> >> the same problem.
> >>
> >>
> >>
> >> I can access the remote instance from another AWS instance in the same
> VPC
> >> (I didn't try another VPC), but from my local machine to the remote AWS
> >> instance, transaction fails.
> >>
> >>
> >>
> >> I tried to open all input TCP traffic to the AWS instance, but AWS
> >> didn't allow that. If it can't be solved, I think I'll just use the
> >> same EC2 instances running in the same VPC.
> >>
> >>
> >>
> >> Regards
> >>
> >>
> >>
> >> On Sun, Jan 22, 2017 at 4:04 PM, Andrew Grande 
> wrote:
> >>
> >> Isn't it more advisable to use the HTTP mode instead, i.e. no additional
> >> ports to open? Make sure to change the client RPG mode to http from RAW
> (in
> >> the UI).
> >>
> >> Andrew
> >>
> >>
> >>
> >> On Sun, Jan 22, 2017, 10:47 AM Bryan Bende  wrote:
> >>
> >> Hello,
> >>
> >>
> >>
> >> I'm assuming you are using site-to-site since you mentioned failing to
> >> create a transaction.
> >>
> >>
> >>
> >> In nifi.properties on the AWS instance, there is probably a value for
> >> nifi.remote.input.socket.port which would also need to be opened.
> >>
> >>
> >>
> >> -Bryan
> >>
> >>
> >>
> >> On Sat, Jan 21, 2017 at 7:00 PM, mohammed shambakey <
> shambak...@gmail.com>
> >> wrote:
> >>
> >> Hi
> >>
> >>
> >>
> >> I'm trying to send a file from a local nifi instatnce to a remote nifi
> >> instance in AWS. Security rules at remote instance has port 8080
> opened, yet
> >> each time I try to send the file, local nifi says it failed to create
> >> transaction to the remote instance.
> >>
> >>
> >>
> >> Regards
> >>
> >>
> >>
> >> --
> >>
> >> Mohammed
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >>
> >> Mohammed
> >
> >
> >
> >
> > --
> > Mohammed
>



-- 
Mohammed


RE: InvokeHTTP and SocketTimeoutException

2017-01-26 Thread Giovanni Lanzani
Hi Andy,

Thanks for chiming in. I've gathered some info down here:

The LogAttribute before InvokeHTTP:

2017-01-26 09:15:10,373 INFO [Timer-Driven Process Thread-4] 
o.a.n.processors.standard.LogAttribute 
LogAttribute[id=d9d45caf-0159-1000-d4a6-5127a
fdeaf64] logging for flow file 
StandardFlowFileRecord[uuid=dffe5b52-b5f1-4597-8540-dc0baaff29ad,claim=StandardContentClaim
 [resourceClaim=Standar
dResourceClaim[id=1485197201540-1, container=default, section=1], offset=37389, 
length=10],offset=0,name=258752766597833,size=10]
--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Thu Jan 26 09:14:57 CET 2017'
Key: 'lineageStartDate'
Value: 'Thu Jan 26 09:14:57 CET 2017'
Key: 'fileSize'
Value: '10'
FlowFile Attribute Map Content
Key: 'filename'
Value: '258752766597833'
Key: 'path'
Value: './'
Key: 'uuid'
Value: 'dffe5b52-b5f1-4597-8540-dc0baaff29ad'
--

After:

2017-01-26 09:15:48,572 INFO [Timer-Driven Process Thread-6] 
o.a.n.processors.standard.LogAttribute 
LogAttribute[id=d9d49cf0-0159-1000-c5cd-c0810449e070] logging for flow file 
StandardFlowFileRecord[uuid=dffe5b52-b5f1-4597-8540-dc0baaff29ad,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1485197201540-1, container=default, 
section=1], offset=37389, length=10],offset=0,name=258752766597833,size=10]
--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Thu Jan 26 09:14:57 CET 2017'
Key: 'lineageStartDate'
Value: 'Thu Jan 26 09:14:57 CET 2017'
Key: 'fileSize'
Value: '10'
FlowFile Attribute Map Content
Key: 'filename'
Value: '258752766597833'
Key: 'path'
Value: './'
Key: 'uuid'
Value: 'dffe5b52-b5f1-4597-8540-dc0baaff29ad'
--

The exception InvokeHTTP is raising (in this case a ConnectException; I can
make it a SocketTimeoutException with some extra work, but I think the new
attributes should be present in that case as well):

2017-01-26 09:15:10,975 INFO [StandardProcessScheduler Thread-1] 
o.a.n.c.s.TimerDrivenSchedulingAgent Stopped scheduling LogAttribute[id=d9d45caf
-0159-1000-d4a6-5127afdeaf64] to run
2017-01-26 09:15:11,100 INFO [Flow Service Tasks Thread-2] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controll
er.FlowController@68275864 // Another save pending = true
2017-01-26 09:15:11,955 INFO [Flow Service Tasks Thread-2] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controll
er.FlowController@68275864 // Another save pending = false
2017-01-26 09:15:16,228 INFO [StandardProcessScheduler Thread-8] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled InvokeHTTP[id=d9d48225-0159-1000-
9958-4bed80de9fa6] to run with 1 threads
2017-01-26 09:15:16,842 INFO [Flow Service Tasks Thread-2] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controll
er.FlowController@68275864 // Another save pending = false
2017-01-26 09:15:17,551 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.InvokeHTTP 
InvokeHTTP[id=d9d48225-0159-1000-9958-4bed8
0de9fa6] Routing to Failure due to exception: java.net.ConnectException: Failed 
to connect to /127.0.0.1:8000: java.net.ConnectException: Failed
to connect to /127.0.0.1:8000
2017-01-26 09:15:17,553 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.InvokeHTTP
java.net.ConnectException: Failed to connect to /127.0.0.1:8000
at 
com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:139)
 ~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108) 
~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
 ~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
 ~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
 ~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:283) 
~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224) 
~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call.getResponse(Call.java:286) 
~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243) 
~[okhttp-2.7.1.jar:na]
at 
com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205) 
~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call.execute(Call.java:80) ~[okhttp-2.7.1.jar:na]
at 
org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:624) 
~[nifi-standard-processors-1.0.0.jar:1.0.0]
at