Re: Dynamic property in QueryDatabaseTable

2016-09-08 Thread Ravisankar Mani
Hi Matt,

Thanks for your reply.
I can add a new property, e.g. initial.maxvalue.id, and set its value (e.g.
1) using the Add Property option on this processor. But then I see an error
message like "'initial.maxvalue.id' validated against '1' is invalid
because 'initial.maxvalue.id' is not a supported property". Maybe I have
added the dynamic property in the wrong way.
Can you please help me resolve the problem?


Thanks,
Ravisankar


On Thu, Sep 8, 2016 at 8:43 AM, Matt Burgess  wrote:

> Ravisankar,
>
> The dynamic property needs to have a certain name, in general of the
> form initial.maxvalue.{max_value_column}.  So if you have a max value
> column called last_updated, you will want to add a dynamic property
> called initial.maxvalue.last_updated, and you set the value to
> whatever you want the initial value to be. At that point the processor
> will only fetch rows where last_update > the value you specified.
>
> Regards,
> Matt
>
> On Thu, Sep 8, 2016 at 8:31 AM, Ravisankar Mani  wrote:
> > Hi All,
> >
> > I have used the 'QueryDatabaseTable' processor in my workflow (an
> > incremental-update ETL process). It works if I first run the job without
> > setting the max-value column (first time only) and then set the
> > max-value columns (since the processor doesn't yet know the max value).
> > But I need to set the max-value column initially; for that case I saw a
> > dynamic property like 'Initial Max Value' in the NiFi guide, but I don't
> > know how to set that property on this processor.
> > Can you please help me to resolve the problem?
> >
> >
> > Regards,
> > Ravisankar
>
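The property-naming rule Matt describes can be sketched with a tiny helper (illustrative only, not a NiFi API; the name matching itself happens inside the processor):

```python
# Illustrative helper: build the dynamic-property name that
# QueryDatabaseTable expects for seeding a maximum-value column.
def initial_maxvalue_property(max_value_column):
    return "initial.maxvalue." + max_value_column

# Accepted when the suffix names a configured max-value column.
prop = initial_maxvalue_property("last_updated")

# The "not a supported property" error in this thread suggests either that
# "id" is not actually configured as a max-value column, or that the running
# NiFi version does not support these dynamic properties yet.
bad = initial_maxvalue_property("id")
```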


RE: Erroneous Queue has No FlowFiles message

2016-09-08 Thread Peter Wicks (pwicks)
Gunjan,

Thanks for the response. I included those messages to emphasize the difference
between a normal queue list and mine. In a normal queue list the GET step
includes a non-empty "flowFileSummaries" array, assuming there are FlowFiles to
show.
When I list my other queue, the one with 23 FlowFiles in it, I get back an
array with 23 entries. Based on the JSON, I'm assuming my queue with
100,000 files in it should return 100 (the maxResults cap), but instead I get 0.
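The mismatch can be seen mechanically by parsing the GET response (a small sketch; the values are abbreviated from the responses in this thread):

```python
import json

# Sample GET response body, abbreviated from the thread.
sample = json.dumps({
    "listingRequest": {
        "finished": True,
        "percentCompleted": 100,
        "maxResults": 100,
        "queueSize": {"byteCount": 2540, "objectCount": 10},
        "flowFileSummaries": [],
    }
})

def summarize_listing(response_text):
    # Report whether the listing finished, how many FlowFiles the queue
    # holds, and how many summaries the listing actually returned.
    req = json.loads(response_text)["listingRequest"]
    return (req["finished"],
            req["queueSize"]["objectCount"],
            len(req.get("flowFileSummaries", [])))

finished, queued, returned = summarize_listing(sample)
# The symptom reported here: finished, a non-empty queue, zero summaries.
```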

Thanks,
  Peter

From: Gunjan Dave [mailto:gunjanpiyushd...@gmail.com]
Sent: Thursday, September 08, 2016 9:26 PM
To: users@nifi.apache.org
Subject: Re: Erroneous Queue has No FlowFiles message


Hi Peter, once you POST the request (your first step), you get a listing-request
reference handle UUID as part of the response.
This UUID is used to perform all the operations on the queue.
It is active until a DELETE request is sent. Once you delete the active
request, you get the message you mentioned in the logs; this is not an issue.
If you check the developer panel in Chrome, you will see all 3 operations,
POST-GET-DELETE, in succession.

On Fri, Sep 9, 2016, 8:48 AM Peter Wicks (pwicks) 
mailto:pwi...@micron.com>> wrote:
Running NiFi 1.0.0, I’m listing a queue that has 100k files queued. I’ve 
stopped both the incoming and outgoing processors, so the files are just 
hanging out in the queue, no possible motion.

I get, “The queue has no FlowFiles” message.  Here are the actual responses 
from the REST calls:

POST - Listing-requests
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":0,"finished":false,"maxResults":100,"state":"Waiting
 for other queue requests to 
complete","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}

GET
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
 
successfully","queueSize":{"byteCount":2540,"objectCount":10},"flowFileSummaries":[],"sourceRunning":false,"destinationRunning":false}}

DELETE
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
 
successfully","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}

On a subsequent test (thus the difference in ID’s) I checked the nifi-app.log 
file and found this single message:

2016-09-09 03:15:50,043 INFO [NiFi Web Server-828] 
o.a.n.controller.StandardFlowFileQueue Canceling ListFlowFile Request with ID 
0cf1b178-0157-1000-9111-9b889415bcdc

Not clear why it was canceled.

I went up one step in the process, and that queue has 23 items in it. I was 
able to list it without issue.

Any ideas why I can’t list the queue?

Thanks,
  Peter Wicks


Re: Erroneous Queue has No FlowFiles message

2016-09-08 Thread Gunjan Dave
Hi Peter, once you POST the request (your first step), you get a listing-request
reference handle UUID as part of the response.
This UUID is used to perform all the operations on the queue.
It is active until a DELETE request is sent. Once you delete the
active request, you get the message you mentioned in the logs; this is not
an issue.
If you check the developer panel in Chrome, you will see all 3 operations,
POST-GET-DELETE, in succession.

On Fri, Sep 9, 2016, 8:48 AM Peter Wicks (pwicks)  wrote:

> Running NiFi 1.0.0, I’m listing a queue that has 100k files queued. I’ve
> stopped both the incoming and outgoing processors, so the files are just
> hanging out in the queue, no possible motion.
>
>
>
> I get, “The queue has no FlowFiles” message.  Here are the actual
> responses from the REST calls:
>
>
>
> *POST - Listing-requests*
>
> {"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"
> https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
> 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04
> GMT+00:00","percentCompleted":0,"finished":false,"maxResults":100,"state":"Waiting
> for other queue requests to
> complete","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}
>
>
>
> *GET*
>
>
> {"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
> 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04
> GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
> successfully","queueSize":{"byteCount":2540,"objectCount":10},"flowFileSummaries":[],"sourceRunning":false,"destinationRunning":false}}
>
>
>
> *DELETE*
>
>
> {"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
> 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04
> GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
> successfully","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}
>
>
>
> On a subsequent test (thus the difference in ID’s) I checked the
> nifi-app.log file and found this single message:
>
>
>
> 2016-09-09 03:15:50,043 INFO [NiFi Web Server-828]
> o.a.n.controller.StandardFlowFileQueue Canceling ListFlowFile Request with
> ID 0cf1b178-0157-1000-9111-9b889415bcdc
>
>
>
> Not clear why it was canceled.
>
>
>
> I went up one step in the process, and that queue has 23 items in it. I
> was able to list it without issue.
>
>
>
> Any ideas why I can’t list the queue?
>
>
>
> Thanks,
>
>   Peter Wicks
>


Erroneous Queue has No FlowFiles message

2016-09-08 Thread Peter Wicks (pwicks)
Running NiFi 1.0.0, I'm listing a queue that has 100k files queued. I've 
stopped both the incoming and outgoing processors, so the files are just 
hanging out in the queue, no possible motion.

I get, "The queue has no FlowFiles" message.  Here are the actual responses 
from the REST calls:

POST - Listing-requests
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":0,"finished":false,"maxResults":100,"state":"Waiting
 for other queue requests to 
complete","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}

GET
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
 
successfully","queueSize":{"byteCount":2540,"objectCount":10},"flowFileSummaries":[],"sourceRunning":false,"destinationRunning":false}}

DELETE
{"listingRequest":{"id":"0cee44de-0157-1000-5668-6e93a465e227","uri":"https://localhost:8443/nifi-api/flowfile-queues/0bacce2d-0157-1000-1a6d-6e0fd84a6bd6/listing-requests/0cee44de-0157-1000-5668-6e93a465e227","submissionTime":"09/09/2016
 03:12:04.318 GMT+00:00","lastUpdated":"03:12:04 
GMT+00:00","percentCompleted":100,"finished":true,"maxResults":100,"state":"Completed
 
successfully","queueSize":{"byteCount":2540,"objectCount":10},"sourceRunning":false,"destinationRunning":false}}

On a subsequent test (thus the difference in ID's) I checked the nifi-app.log 
file and found this single message:

2016-09-09 03:15:50,043 INFO [NiFi Web Server-828] 
o.a.n.controller.StandardFlowFileQueue Canceling ListFlowFile Request with ID 
0cf1b178-0157-1000-9111-9b889415bcdc

Not clear why it was canceled.

I went up one step in the process, and that queue has 23 items in it. I was 
able to list it without issue.

Any ideas why I can't list the queue?

Thanks,
  Peter Wicks


PermissionBasedStatusMergerSpec is failing

2016-09-08 Thread Tijo Thomas
Hi,

A NiFi test case (PermissionBasedStatusMergerSpec) is failing.
It is written in Groovy, which I am not comfortable with.

Running org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
Tests run: 20, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.922 sec
<<< FAILURE! - in
org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
Merge
ProcessorStatusSnapshotDTO[0](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)
Time elapsed: 0.144 sec  <<< FAILURE!
org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:

returnedJson == expectedJson
||  |
||
{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0
bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0
bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0
bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:00:00.000","activeThreadCount":0}
|false
|1 difference (99% similarity)
|
{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0
bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0
bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0
bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(3)0:00.000","activeThreadCount":0}
|
{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0
bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0
bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0
bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(0)0:00.000","activeThreadCount":0}
{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0
bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0
bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0
bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:30:00.000","activeThreadCount":0}

at
org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec.Merge
ProcessorStatusSnapshotDTO(PermissionBasedStatusMergerSpec.groovy:257)

Merge
ProcessorStatusSnapshotDTO[1](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)
Time elapsed: 0.01 sec  <<< FAILURE!
org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:

Tijo


RE: NiFi 1.0.0 compatibility with Hive 1.1.0

2016-09-08 Thread Peter Wicks (pwicks)
Also, ORC file support was pulled out into its own library on the Hive side.
If you are willing to compile and run your own version, you might need to
include orc-core as a Maven dependency:
https://mvnrepository.com/artifact/org.apache.orc/orc-core/1.2.0.
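If you do rebuild, the added dependency might look like the fragment below (coordinates and version taken from the Maven Central link above; treat the version as a starting point, not a tested recommendation):

```xml
<dependency>
  <groupId>org.apache.orc</groupId>
  <artifactId>orc-core</artifactId>
  <version>1.2.0</version>
</dependency>
```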


From: Andre [mailto:andre-li...@fucs.org]
Sent: Thursday, September 08, 2016 4:51 AM
To: users@nifi.apache.org
Subject: Re: NiFi 1.0.0 compatibility with Hive 1.1.0

Yari,

Is there any chance you can cherry pick commit 
80224e3e5ed7ee7b09c4985a920a7fa393bff26c and try again?

Post-1.0.0 there have been some changes to streamline compilation using
vendor-provided libraries.

Cheers

On Thu, Sep 8, 2016 at 8:44 PM, Yari Marchetti 
mailto:yari.marche...@buongiorno.com>> wrote:
Hello,
I'd like to use NiFi 1.0.0 with Hive 1.1.0 (on CDH 5.5.2), but after some
investigation I realised that the hive-jdbc driver included in NiFi is
incompatible with the Hive version we're using, as I'm
getting the error:

org.apache.hive.jdbc.HiveConnection Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is 
unset! Struct:TOpenSessionReq(client_protocol:null, 
configuration:{use:database=unifieddata})

So I tried to recompile NiFi using the Cloudera 5.5.2 profile, but
compilation fails:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project nifi-hive-processors: Compilation failure: Compilation failure:
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[26,43]
 error: package 
org.apache.hadoop.hive.ql.io.filters does 
not exist
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[45,43]
 error: package 
org.apache.hadoop.hive.ql.io.filters does 
not exist
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[643,24]
 error: cannot find symbol
[ERROR] symbol:   class BloomFilterIO
[ERROR] location: class TreeWriter
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[645,30]
 error: cannot find symbol
[ERROR] symbol:   class BloomFilterIndex
[ERROR] location: class OrcProto
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[646,30]
 error: cannot find symbol
[ERROR] symbol:   class BloomFilter
[ERROR] location: class OrcProto
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[450,32]
 error: cannot find symbol
[ERROR] symbol:   variable BloomFilterIO
[ERROR] location: class NiFiOrcUtils
[ERROR] 
/home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[200,20]
 error: cannot find symbol
[ERROR] symbol:   variable OrcUtils
[ERROR] location: class OrcFlowFileWriter


Is there any way to get NiFi to work with Hive 1.1.0 and CDH 5.5.2?

Thanks,
Yari



RE: NiFi 1.0.0 errors with DetectDuplicate processor

2016-09-08 Thread Porta Léonard
Thanks Bryan,

Everything worked well using the map cache server.
Four eyes are better than two!

Regards,
Léo

From: Bryan Bende [mailto:bbe...@gmail.com]
Sent: jeudi 8 septembre 2016 15:28
To: users@nifi.apache.org
Subject: Re: NiFi 1.0.0 errors with DetectDuplicate processor

Hello,

I'm not sure if this is the problem, but I noticed you are using the map cache
client with the set cache server.

I think the map cache client needs to be used with the map cache server. Can
you let us know if that works?
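In other words, a sketch of the corrected controller-service pairing (service and property names as given in Léonard's mail; the port is his existing value):

```
DistributedMapCacheServer           Port=4557          (replaces DistributedSetCacheServer)
DistributedMapCacheClientService    Server Hostname=localhost, Server Port=4557
```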

-Bryan



On Thu, Sep 8, 2016 at 9:00 AM, Porta Léonard 
mailto:leonard.po...@kudelskisecurity.com>> 
wrote:
Hello,

Under NiFi 1.0.0 I am trying to configure a DetectDuplicate processor.

I have configured and enabled
DistributedSetCacheServer, with its default configuration (Port=4557)
and
DistributedMapCacheClientService (Server Hostname=localhost, Server Port=4557, 
SSL Context Service=, Comm. Timeout=30 secs)


DetectDuplicate configuration is
Cache Entry Identifier=${my-attribute-key}
FlowFile Description=my text here
Age Off Duration=24 hrs
Distributed Cache Service= DistributedMapCacheClientService
Cache The Entry Identifier=true


Can someone help?

Thanks
Leo

Here is the log from nifi-user.log :
2016-09-08 12:12:26,157 ERROR [Distributed Cache Server Communications Thread: 
09a381b1-0157-1000-902d-067af19e4c19] o.a.n.d.cache.server.AbstractCacheServer 
org.apache.nifi.distributed.cache.server.AbstractCacheServer$1$1@759fdc8a
 unable to communicate with remote peer localhost due to java.io.IOException: 
IllegalRequest
2016-09-08 12:12:26,166 ERROR [Timer-Driven Process Thread-1] 
o.a.n.p.standard.DetectDuplicate 
DetectDuplicate[id=09981881-0157-1000-ac0f-43f500adcf1a] Unable to communicate 
with cache when processing 
StandardFlowFileRecord[uuid=35de4947-92b3-46fb-ada7-0a195600e4f1,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1473336740239-1293, container=default, 
section=269], offset=22578, 
length=1044],offset=0,name=689694820062267,size=1044] due to 
java.io.EOFException: java.io.EOFException
2016-09-08 12:12:26,169 ERROR [Timer-Driven Process Thread-1] 
o.a.n.p.standard.DetectDuplicate
java.io.EOFException: null
at java.io.DataInputStream.readInt(DataInputStream.java:392) 
~[na:1.8.0_101]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.readLengthDelimitedResponse(DistributedMapCacheClientService.java:220)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.access$100(DistributedMapCacheClientService.java:51)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService$4.execute(DistributedMapCacheClientService.java:176)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.withCommsSession(DistributedMapCacheClientService.java:305)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.getAndPutIfAbsent(DistributedMapCacheClientService.java:164)
 ~[na:na]
at sun.reflect.GeneratedMethodAccessor882.invoke(Unknown Source) 
~[na:na]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177)
 ~[na:na]
at com.sun.proxy.$Proxy163.getAndPutIfAbsent(Unknown Source) ~[na:na]
at 
org.apache.nifi.processors.standard.DetectDuplicate.onTrigger(DetectDuplicate.java:182)
 ~[nifi-standard-processors-1.0.0.jar:1.0.0]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_101]
at 
ja

Re: NiFi 1.0.0 errors with DetectDuplicate processor

2016-09-08 Thread Bryan Bende
Hello,

I'm not sure if this is the problem, but I noticed you are using the map
cache client with the set cache server.

I think the map cache client needs to be used with the map cache server.
Can you let us know if that works?

-Bryan



On Thu, Sep 8, 2016 at 9:00 AM, Porta Léonard <
leonard.po...@kudelskisecurity.com> wrote:

> Hello,
>
>
>
> Under NiFi 1.0.0 I am trying to configure a DetectDuplicate processor.
>
>
>
> I have configured and enabled
>
> DistributedSetCacheServer, with its default configuration (Port=4557)
>
> and
>
> DistributedMapCacheClientService (Server Hostname=localhost, Server
> Port=4557, SSL Context Service=, Comm. Timeout=30 secs)
>
>
>
>
>
> DetectDuplicate configuration is
>
> Cache Entry Identifier=${my-attribute-key}
>
> FlowFile Description=my text here
>
> Age Off Duration=24 hrs
>
> Distributed Cache Service= DistributedMapCacheClientService
>
> Cache The Entry Identifier=true
>
>
>
>
>
> Can someone help?
>
>
>
> Thanks
>
> Leo
>
>
>
> Here is the log from nifi-user.log :
>
> 2016-09-08 12:12:26,157 ERROR [Distributed Cache Server Communications
> Thread: 09a381b1-0157-1000-902d-067af19e4c19] 
> o.a.n.d.cache.server.AbstractCacheServer
> org.apache.nifi.distributed.cache.server.AbstractCacheServer$1$1@759fdc8a
> unable to communicate with remote peer localhost due to
> java.io.IOException: IllegalRequest
>
> 2016-09-08 12:12:26,166 ERROR [Timer-Driven Process Thread-1]
> o.a.n.p.standard.DetectDuplicate 
> DetectDuplicate[id=09981881-0157-1000-ac0f-43f500adcf1a]
> Unable to communicate with cache when processing
> StandardFlowFileRecord[uuid=35de4947-92b3-46fb-ada7-0a195600e4f1,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1473336740239-1293,
> container=default, section=269], offset=22578, 
> length=1044],offset=0,name=689694820062267,size=1044]
> due to java.io.EOFException: java.io.EOFException
>
> 2016-09-08 12:12:26,169 ERROR [Timer-Driven Process Thread-1]
> o.a.n.p.standard.DetectDuplicate
>
> java.io.EOFException: null
>
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> ~[na:1.8.0_101]
>
> at org.apache.nifi.distributed.cache.client.
> DistributedMapCacheClientService.readLengthDelimitedResponse(
> DistributedMapCacheClientService.java:220) ~[na:na]
>
> at org.apache.nifi.distributed.cache.client.
> DistributedMapCacheClientService.access$100(DistributedMapCacheClientService.java:51)
> ~[na:na]
>
> at org.apache.nifi.distributed.cache.client.
> DistributedMapCacheClientService$4.execute(DistributedMapCacheClientService.java:176)
> ~[na:na]
>
> at org.apache.nifi.distributed.cache.client.
> DistributedMapCacheClientService.withCommsSession(
> DistributedMapCacheClientService.java:305) ~[na:na]
>
> at org.apache.nifi.distributed.cache.client.
> DistributedMapCacheClientService.getAndPutIfAbsent(
> DistributedMapCacheClientService.java:164) ~[na:na]
>
> at sun.reflect.GeneratedMethodAccessor882.invoke(Unknown Source)
> ~[na:na]
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_101]
>
> at java.lang.reflect.Method.invoke(Method.java:498)
> ~[na:1.8.0_101]
>
> at org.apache.nifi.controller.service.
> StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177)
> ~[na:na]
>
> at com.sun.proxy.$Proxy163.getAndPutIfAbsent(Unknown Source)
> ~[na:na]
>
> at org.apache.nifi.processors.standard.DetectDuplicate.
> onTrigger(DetectDuplicate.java:182) ~[nifi-standard-processors-1.
> 0.0.jar:1.0.0]
>
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> [nifi-api-1.0.0.jar:1.0.0]
>
> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(
> StandardProcessorNode.java:1064) [nifi-framework-core-1.0.0.jar:1.0.0]
>
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.
> call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.
> jar:1.0.0]
>
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.
> call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.
> jar:1.0.0]
>
> at org.apache.nifi.controller.scheduling.
> TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> [nifi-framework-core-1.0.0.jar:1.0.0]
>
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_101]
>
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> [na:1.8.0_101]
>
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> [na:1.8.0_101]
>
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> [na:1.8.0_101]
>
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_101]
>
>   

NiFi 1.0.0 errors with DetectDuplicate processor

2016-09-08 Thread Porta Léonard
Hello,

Under NiFi 1.0.0 I am trying to configure a DetectDuplicate processor.

I have configured and enabled
DistributedSetCacheServer, with its default configuration (Port=4557)
and
DistributedMapCacheClientService (Server Hostname=localhost, Server Port=4557, 
SSL Context Service=, Comm. Timeout=30 secs)


DetectDuplicate configuration is
Cache Entry Identifier=${my-attribute-key}
FlowFile Description=my text here
Age Off Duration=24 hrs
Distributed Cache Service= DistributedMapCacheClientService
Cache The Entry Identifier=true


Can someone help?

Thanks
Leo

Here is the log from nifi-user.log :
2016-09-08 12:12:26,157 ERROR [Distributed Cache Server Communications Thread: 
09a381b1-0157-1000-902d-067af19e4c19] o.a.n.d.cache.server.AbstractCacheServer 
org.apache.nifi.distributed.cache.server.AbstractCacheServer$1$1@759fdc8a 
unable to communicate with remote peer localhost due to java.io.IOException: 
IllegalRequest
2016-09-08 12:12:26,166 ERROR [Timer-Driven Process Thread-1] 
o.a.n.p.standard.DetectDuplicate 
DetectDuplicate[id=09981881-0157-1000-ac0f-43f500adcf1a] Unable to communicate 
with cache when processing 
StandardFlowFileRecord[uuid=35de4947-92b3-46fb-ada7-0a195600e4f1,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1473336740239-1293, container=default, 
section=269], offset=22578, 
length=1044],offset=0,name=689694820062267,size=1044] due to 
java.io.EOFException: java.io.EOFException
2016-09-08 12:12:26,169 ERROR [Timer-Driven Process Thread-1] 
o.a.n.p.standard.DetectDuplicate
java.io.EOFException: null
at java.io.DataInputStream.readInt(DataInputStream.java:392) 
~[na:1.8.0_101]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.readLengthDelimitedResponse(DistributedMapCacheClientService.java:220)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.access$100(DistributedMapCacheClientService.java:51)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService$4.execute(DistributedMapCacheClientService.java:176)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.withCommsSession(DistributedMapCacheClientService.java:305)
 ~[na:na]
at 
org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.getAndPutIfAbsent(DistributedMapCacheClientService.java:164)
 ~[na:na]
at sun.reflect.GeneratedMethodAccessor882.invoke(Unknown Source) 
~[na:na]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177)
 ~[na:na]
at com.sun.proxy.$Proxy163.getAndPutIfAbsent(Unknown Source) ~[na:na]
at 
org.apache.nifi.processors.standard.DetectDuplicate.onTrigger(DetectDuplicate.java:182)
 ~[nifi-standard-processors-1.0.0.jar:1.0.0]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.0.0.jar:1.0.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]




Re: Posting input files to NiFi using REST

2016-09-08 Thread Joe Skora
That's great news!

I hope the rest goes smoothly, but if not let us know!

On Thu, Sep 8, 2016 at 8:30 AM, James McMahon  wrote:

> Worked like a charm once I added the SSLContextService. With the basics in
> place for the HTTP exchange now I can press on and add my NiFi flow logic.
> Thanks again.
>
> On Wed, Sep 7, 2016 at 5:26 PM, Joe Skora  wrote:
>
>> James,
>>
>> I believe you need both.  StandardHttpContextMap caches request
>> information between a pair of HandleHttpRequest and HandleHttpResponse
>> processors, and StandardSSLContextService provides server-side
>> encryption of the HTTPS connection.
>>
>> I don't think you mentioned a HandleHttpResponse processor, do you have
>> one to provide the web service response?  If not, look at
>> "Hello_NiFi_Web_Service" template on the Example Dataflow Templates page.
>>
>> Regards,
>> Joe S
>>
>> On Wed, Sep 7, 2016 at 3:07 PM, James McMahon 
>> wrote:
>>
>>> I tried to POST to HandleHTTPRequest from Python, but it is not working.
>>> I noticed that I've got StandardHttpContextMap configured in this
>>> processor. I am using a secure https NiFi instance; instead of
>>> StandardHttpContextMap, should I be configuring this processor with
>>> StandardSSLContextService? Thank you.
>>>
>>> On Sun, Sep 4, 2016 at 8:22 AM, Matt Burgess 
>>> wrote:
>>>
 James,

 For simple calls that return immediately, ListenHttp probably works
 fine. For more flexible and powerful processing of HTTP requests (and
 responses), you might be better off with HandleHttpRequest and
 HandleHttpResponse. There is an example of this under
 Hello_NiFi_Web_Service [1].

 Regards,
 Matt

 [1] https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates

 On Sun, Sep 4, 2016 at 7:40 AM, James McMahon 
 wrote:
 > I have a series of applications, each of which I need to redesign so
 that
> they post raw data to NiFi. Is there an example showing how to
 construct
 > and execute RESTful calls that push data to NiFi? My assumption is
 that I
 > would establish a ListenHTTP processor in my workflow to receive the
 > incoming content. Is that correct, or should this be handled
 differently? My
 > apps are in java and are simple, so I can readily port them to nodeJS,
 > Python, Groovy, etc. Thanks in advance for your help.

>>>
>>>
>>
>
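
For reference, here is a minimal Python sketch of the client side discussed in this thread: POSTing raw bytes over HTTPS to a NiFi HandleHttpRequest (or ListenHTTP) endpoint with a client certificate. The URL, certificate paths, and attribute names below are placeholders rather than values from this thread, and only the Python standard library is used.

```python
import ssl
import urllib.request

def build_headers(attributes=None):
    # NiFi copies incoming HTTP headers onto the received FlowFile as
    # attributes, so extra metadata can ride along as plain headers.
    headers = {"Content-Type": "application/octet-stream"}
    headers.update(attributes or {})
    return headers

def post_flowfile(url, payload, attributes=None,
                  client_cert=None, client_key=None, ca_file=None):
    """POST raw bytes to an HTTPS listener (URL and paths are placeholders)."""
    ctx = ssl.create_default_context(cafile=ca_file)
    if client_cert:
        # Two-way TLS: present our certificate to the secured NiFi instance.
        ctx.load_cert_chain(client_cert, client_key)
    req = urllib.request.Request(url, data=payload,
                                 headers=build_headers(attributes),
                                 method="POST")
    return urllib.request.urlopen(req, context=ctx)

# Example call (placeholder values):
# resp = post_flowfile("https://nifi.example.com:8443/contentListener",
#                      b"hello nifi", attributes={"filename": "hello.txt"},
#                      client_cert="client.pem", client_key="client.key",
#                      ca_file="ca.pem")
```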


Re: Dynamic property in QueryDatabaseTable

2016-09-08 Thread Matt Burgess
Ravisankar,

The dynamic property needs to have a certain name, in general of the
form initial.maxvalue.{max_value_column}.  So if you have a max value
column called last_updated, you will want to add a dynamic property
called initial.maxvalue.last_updated, and you set the value to
whatever you want the initial value to be. At that point the processor
will only fetch rows where last_update > the value you specified.

Regards,
Matt

On Thu, Sep 8, 2016 at 8:31 AM, Ravisankar Mani  wrote:
> Hi All,
>
> I have used the 'QueryDatabaseTable' processor in my workflow (an
> incremental-update ETL process). It works properly if I first execute the
> job once without setting the max-value column, and then set the max-value
> column afterwards (because the processor doesn't know the max value yet).
> But I need to set the starting max value up front. I saw a dynamic
> property like 'Initial Max Value' in the NiFi guide, but I don't know how
> to set that property on this processor.
> Can you please help me to resolve the problem?
>
>
> Regards,
> Ravisankar
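
To make the behaviour Matt describes concrete, here is a small Python sketch (standard library only, using an in-memory SQLite table) of the incremental-fetch logic: the saved state is seeded with an initial value, the way initial.maxvalue.last_updated would seed it, and each run only fetches rows whose max-value column exceeds that state. The table and column names are illustrative, not NiFi internals.

```python
import sqlite3

def fetch_new_rows(conn, table, max_col, state):
    # state[max_col] plays the role of the processor's saved maximum;
    # it starts out as the "initial max value" and advances on each run.
    cur = conn.execute(
        "SELECT id, {c} FROM {t} WHERE {c} > ? ORDER BY {c}".format(
            c=max_col, t=table),
        (state[max_col],))
    rows = cur.fetchall()
    if rows:
        state[max_col] = rows[-1][1]  # remember the new maximum
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, last_updated INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

state = {"last_updated": 15}              # initial.maxvalue.last_updated = 15
first = fetch_new_rows(conn, "t", "last_updated", state)   # rows after 15
second = fetch_new_rows(conn, "t", "last_updated", state)  # nothing new yet
```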


Dynamic property in QueryDatabaseTable

2016-09-08 Thread Ravisankar Mani
Hi All,

I have used the 'QueryDatabaseTable' processor in my workflow (an
incremental-update ETL process). It works properly if I first execute the
job once without setting the max-value column, and then set the max-value
column afterwards (because the processor doesn't know the max value yet).
But I need to set the starting max value up front. I saw a dynamic property
like 'Initial Max Value' in the NiFi guide, but I don't know how to set
that property on this processor.
Can you please help me to resolve the problem?


Regards,
Ravisankar


Re: Posting input files to NiFi using REST

2016-09-08 Thread James McMahon
Worked like a charm once I added the SSLContextService. With the basics in
place for the HTTP exchange now I can press on and add my NiFi flow logic.
Thanks again.

On Wed, Sep 7, 2016 at 5:26 PM, Joe Skora  wrote:

> James,
>
> I believe you need both.  StandardHttpContextMap caches request
> information between a pair of HandleHttpRequest and HandleHttpResponse
> processors and the StandardSSLContextService provides server side
> encryption of the HTTPS connection.
>
> I don't think you mentioned a HandleHttpResponse processor, do you have
> one to provide the web service response?  If not, look at
> "Hello_NiFi_Web_Service" template on the Example Dataflow Templates
> 
> page.
>
> Regards,
> Joe S
>
> On Wed, Sep 7, 2016 at 3:07 PM, James McMahon 
> wrote:
>
>> I tried to POST to HandleHTTPRequest from Python, but it is not working.
>> I noticed that I've got StandardHttpContextMap configured in this
>> processor. I am using a secure https NiFi instance; instead of
>> StandardHttpContextMap, should I be configuring this processor with
>> StandardSSLContextService? Thank you.
>>
>> On Sun, Sep 4, 2016 at 8:22 AM, Matt Burgess 
>> wrote:
>>
>>> James,
>>>
>>> For simple calls that return immediately, ListenHttp probably works
>>> fine. For more flexible and powerful processing of HTTP requests (and
>>> responses), you might be better off with HandleHttpRequest and
>>> HandleHttpResponse. There is an example of this under
>>> Hello_NiFi_Web_Service [1].
>>>
>>> Regards,
>>> Matt
>>>
>>> [1] https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
>>>
>>> On Sun, Sep 4, 2016 at 7:40 AM, James McMahon 
>>> wrote:
>>> > I have a series of applications, each of which I need to redesign so
>>> that
>>> > they post raw data to NiFi. Is there an example showing how to
>>> construct
>>> > and execute RESTful calls that push data to NiFi? My assumption is
>>> that I
>>> > would establish a ListenHTTP processor in my workflow to receive the
>>> > incoming content. Is that correct, or should this be handled
>>> differently? My
>>> > apps are in java and are simple, so I can readily port them to nodeJS,
>>> > Python, Groovy, etc. Thanks in advance for your help.
>>>
>>
>>
>


Re: Nifi 1.0.0 compatibility with Hive 1.1.0

2016-09-08 Thread Andre
Yari,

Is there any chance you can cherry pick
commit 80224e3e5ed7ee7b09c4985a920a7fa393bff26c and try again?

Post-1.0.0, there have been some changes to streamline compilation using
vendor-provided libraries.

Cheers

On Thu, Sep 8, 2016 at 8:44 PM, Yari Marchetti <
yari.marche...@buongiorno.com> wrote:

> Hello,
> I'd like to use Nifi 1.0.0 with Hive 1.1.0 (on CDH 5.5.2) but after some
> investigation I realised that the hive-jdbc driver included in Nifi is
> incompatible with the Hive version we're using (1.1.0 on CDH 5.5.2) as I'm
> getting the error:
>
> org.apache.hive.jdbc.HiveConnection Error opening session
> org.apache.thrift.TApplicationException: Required field 'client_protocol'
> is unset! Struct:TOpenSessionReq(client_protocol:null,
> configuration:{use:database=unifieddata})
>
> So I just tried to recompile Nifi using the Cloudera profile 5.5.2 but
> compilation is failing:
>
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project nifi-hive-processors: Compilation failure: Compilation failure:
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[26,43] error: package org.apache.hadoop.hive.ql.io.filters does not exist
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[45,43] error: package org.apache.hadoop.hive.ql.io.filters does not exist
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[643,24] error: cannot find symbol
> [ERROR] symbol:   class BloomFilterIO
> [ERROR] location: class TreeWriter
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[645,30] error: cannot find symbol
> [ERROR] symbol:   class BloomFilterIndex
> [ERROR] location: class OrcProto
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[646,30] error: cannot find symbol
> [ERROR] symbol:   class BloomFilter
> [ERROR] location: class OrcProto
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[450,32] error: cannot find symbol
> [ERROR] symbol:   variable BloomFilterIO
> [ERROR] location: class NiFiOrcUtils
> [ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[200,20] error: cannot find symbol
> [ERROR] symbol:   variable OrcUtils
> [ERROR] location: class OrcFlowFileWriter
>
>
> Is there any way to get Nifi to work with Hive 1.1.0 and CDH 5.5.2?
>
> Thanks,
> Yari
>


Nifi 1.0.0 compatibility with Hive 1.1.0

2016-09-08 Thread Yari Marchetti
Hello,
I'd like to use Nifi 1.0.0 with Hive 1.1.0 (on CDH 5.5.2) but after some
investigation I realised that the hive-jdbc driver included in Nifi is
incompatible with the Hive version we're using (1.1.0 on CDH 5.5.2) as I'm
getting the error:

org.apache.hive.jdbc.HiveConnection Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol'
is unset! Struct:TOpenSessionReq(client_protocol:null,
configuration:{use:database=unifieddata})

So I just tried to recompile Nifi using the Cloudera profile 5.5.2 but
compilation is failing:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project nifi-hive-processors: Compilation failure: Compilation failure:
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[26,43] error: package org.apache.hadoop.hive.ql.io.filters does not exist
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[45,43] error: package org.apache.hadoop.hive.ql.io.filters does not exist
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[643,24] error: cannot find symbol
[ERROR] symbol:   class BloomFilterIO
[ERROR] location: class TreeWriter
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[645,30] error: cannot find symbol
[ERROR] symbol:   class BloomFilterIndex
[ERROR] location: class OrcProto
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[646,30] error: cannot find symbol
[ERROR] symbol:   class BloomFilter
[ERROR] location: class OrcProto
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/NiFiOrcUtils.java:[450,32] error: cannot find symbol
[ERROR] symbol:   variable BloomFilterIO
[ERROR] location: class NiFiOrcUtils
[ERROR] /home/matteo/git/nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/OrcFlowFileWriter.java:[200,20] error: cannot find symbol
[ERROR] symbol:   variable OrcUtils
[ERROR] location: class OrcFlowFileWriter


Is there any way to get Nifi to work with Hive 1.1.0 and CDH 5.5.2?

Thanks,
Yari
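
One quick way to check whether a particular Hive jar matches what the build expects is to look for the classes the compiler reports as missing, for example BloomFilterIO from the org.apache.hadoop.hive.ql.io.filters package named in the errors above. Below is a standard-library Python sketch; the jar path in the usage comment is a placeholder.

```python
import zipfile

def jar_contains_class(jar_path, fq_class):
    """Return True if the fully-qualified class name is packaged in the jar."""
    # Class files are stored in jars with '/'-separated package paths.
    entry = fq_class.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Usage (jar path is a placeholder):
# jar_contains_class("hive-exec-1.1.0-cdh5.5.2.jar",
#                    "org.apache.hadoop.hive.ql.io.filters.BloomFilterIO")
```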


Re: Posting input files to NiFi using REST

2016-09-08 Thread James McMahon
Thank you very much Joe. I'll work on configuring my SSLContextService. I
am indeed using Hello_NiFi_Web_Service as my starting point. I do have that
HandleHttpResponse processor in there but haven't yet configured and
enabled that. I will do that too. My initial thought was this: get
something to POST from Python over to the processor in NiFi first, worry
about the response next. Perhaps both are required for it to work; I'm not
sure, being that I'm just now trying this for the first time. I'll give it
another try. Thanks again, Matt and Joe.   Cheers, Jim

On Wed, Sep 7, 2016 at 5:26 PM, Joe Skora  wrote:

> James,
>
> I believe you need both.  StandardHttpContextMap caches request
> information between a pair of HandleHttpRequest and HandleHttpResponse
> processors and the StandardSSLContextService provides server side
> encryption of the HTTPS connection.
>
> I don't think you mentioned a HandleHttpResponse processor, do you have
> one to provide the web service response?  If not, look at
> "Hello_NiFi_Web_Service" template on the Example Dataflow Templates
> 
> page.
>
> Regards,
> Joe S
>
> On Wed, Sep 7, 2016 at 3:07 PM, James McMahon 
> wrote:
>
>> I tried to POST to HandleHTTPRequest from Python, but it is not working.
>> I noticed that I've got StandardHttpContextMap configured in this
>> processor. I am using a secure https NiFi instance; instead of
>> StandardHttpContextMap, should I be configuring this processor with
>> StandardSSLContextService? Thank you.
>>
>> On Sun, Sep 4, 2016 at 8:22 AM, Matt Burgess 
>> wrote:
>>
>>> James,
>>>
>>> For simple calls that return immediately, ListenHttp probably works
>>> fine. For more flexible and powerful processing of HTTP requests (and
>>> responses), you might be better off with HandleHttpRequest and
>>> HandleHttpResponse. There is an example of this under
>>> Hello_NiFi_Web_Service [1].
>>>
>>> Regards,
>>> Matt
>>>
>>> [1] https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
>>>
>>> On Sun, Sep 4, 2016 at 7:40 AM, James McMahon 
>>> wrote:
>>> > I have a series of applications, each of which I need to redesign so
>>> that
>>> > they post raw data to NiFi. Is there an example showing how to
>>> construct
>>> > and execute RESTful calls that push data to NiFi? My assumption is
>>> that I
>>> > would establish a ListenHTTP processor in my workflow to receive the
>>> > incoming content. Is that correct, or should this be handled
>>> differently? My
>>> > apps are in java and are simple, so I can readily port them to nodeJS,
>>> > Python, Groovy, etc. Thanks in advance for your help.
>>>
>>
>>
>