Hi Matt,
Thank you for the input. I updated my config as you suggested and it worked
like a charm. Also, a big thank you for the nice article; I used it as a
reference when I started exploring ExecuteScript.
Thanks
Madhu
On Thu, Mar 24, 2016 at 12:18 AM, Matt Burgess wrote:
> Madhukar,
Andre,
Thanks. Those stacks suggest that they occurred at a time when space
was still full. In that case, I believe that to be the correct behavior. It
sounds like you think there is more it could do in those cases, though.
Can you describe in more detail what you had in mind?
Thanks
Joe
On Wed, Mar 23, 2016 at 3:
Madhukar,
Glad to hear you found a solution, I was just replying when your email came
in.
Although in ExecuteScript you have chosen "python" as the script engine, it
is actually Jython that is being used to interpret the scripts, not your
installed version of Python. The first line (shebang) is
I was able to solve the Python module issues by adding the following lines:
import sys
sys.path.append('/usr/local/lib/python2.7/site-packages')  # Path where my modules are installed.
Now the issue I have is, how do I parse the incoming attributes using this
library correctly and get the new
Hi
I am trying to use the following script to parse http.headers.useragent
with the Python user_agents module using the ExecuteScript processor.
Script:
#!/usr/bin/env python2.7
from user_agents import parse
flowFile = session.get()
if flowFile is not None:
    # parse the User-Agent string from the incoming flow file attribute
    ua = parse(flowFile.getAttribute('http.headers.useragent'))
    # attribute name "browser" is just an example; the parsed browser family is written back
    flowFile = session.putAttribute(flowFile, "browser", ua.browser.family)
    session.transfer(flowFile, REL_SUCCESS)
Hi, you also need the DistributedMapCacheServer controller service
configured.
On Mar 23, 2016 19:59, "Hong Li" wrote:
> I'm using processors PutHBaseCell, PutHBaseJson, and GetHBase against the
> same HBase table. But GetHBase is the only one that is giving me the
> error: "failed to invoke @on
I'm using processors PutHBaseCell, PutHBaseJson, and GetHBase against the
same HBase table. But GetHBase is the only one that is giving me the
error: "failed to invoke @onScheduled Method due to
java.net.ConnectException: Connection Refused". That is, I can write to the
table but I cannot read from
Interesting question, and the only way I can answer it is that it shouldn’t, but
I would need more context.
Oleg
> On Mar 23, 2016, at 6:27 PM, McDermott, Chris Kevin (MSDU -
> STaTS/StorefrontRemote) wrote:
>
> Thanks, Oleg.
>
> FWIW the traceback seems to correspond to this line in PutKafka.ja
Thanks, Oleg.
FWIW the traceback seems to correspond to this line in PutKafka.java
final byte[] value = new byte[(int) flowFile.getSize()];
Can the flow file size be negative?
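For what it's worth, one way that expression can end up negative even if getSize()
itself never returns a negative number is the narrowing cast: for sizes above
Integer.MAX_VALUE the (int) cast wraps around. A quick standalone sketch (not NiFi
code, just illustrating the cast):

// standalone illustration of the narrowing conversion, not NiFi code
public class NegativeCastDemo {
    public static void main(String[] args) {
        long size = 3_000_000_000L;          // e.g. roughly 3 GB of content
        int truncated = (int) size;          // narrowing conversion wraps to a negative int
        System.out.println(truncated);       // prints -1294967296
        byte[] value = new byte[truncated];  // throws java.lang.NegativeArraySizeException
    }
}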
On 3/23/16, 6:24 PM, "Oleg Zhurakousky" wrote:
>Chris
>
>Yes we have (can’t remember the details though). There w
Chris
Yes we have (can’t remember the details though). There was a lot of work around
NiFi Kafka support lately and it somewhat culminated today. So. . .
1. We’ve downgraded back to 0.8.2
https://issues.apache.org/jira/browse/NIFI-1629. You can see details in JIRA
2. We’ve done some refactoring
2016-03-23 21:29:22,383 ERROR [Timer-Driven Process Thread-5]
o.apache.nifi.processors.kafka.PutKafka
PutKafka[id=f66e4092-946e-3338-93d6-7ea3a56bfd20]
PutKafka[id=f66e4092-946e-3338-93d6-7ea3a56bfd20] failed to process session due
to java.lang.NegativeArraySizeException: java.lang.NegativeArra
Russell,
First, thank you for taking the initiative!
Indeed we need to bring that JIRA to closure. Having said that, there is
actually a link in our Contributor Guide on interactive debugging:
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-RunningNiFiinDebugmode
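For reference, the gist of it is to hand the NiFi JVM a standard JDWP debug
argument, usually via conf/bootstrap.conf, along these lines (the property name
and port below are just the usual defaults, so double-check against the guide):

# conf/bootstrap.conf -- let an IDE attach to the running NiFi instance
java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000

With that in place you can attach IntelliJ or Eclipse to port 8000 as a remote
debug configuration.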
I needed to break down and debug a processor I'm working on. I found a
year-old JIRA issue that probably hasn't been taken care of. I'd like
to help.
https://issues.apache.org/jira/browse/NIFI-513
I wrote this quickie on how to set up IntelliJ or Eclipse to accomplish
that. I'm using
The Identifier Attribute property should contain the name of a Flow File
attribute, which in turn contains the ID of the document to be put into
Elasticsearch. Unfortunately it is a required property, so having ES
auto-generate the ID is not yet supported [1].
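As a rough sketch of the intended wiring (documentId is just an example attribute
name): set the ID on each flow file upstream, for instance with UpdateAttribute,
and point Identifier Attribute at that attribute:

ExecuteSQL -> ConvertAvroToJSON -> UpdateAttribute (sets documentId) -> PutElasticsearch (Identifier Attribute = documentId)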
If you don't care what the ID is but need
Hello guys,
I’m trying to define a data flow to export data from MySQL to Elasticsearch. For
this I’m using these processors:
ExecuteSQL -> ConvertAvroToJSON -> PutElasticsearch
But I’m getting errors with the last one. So what is the Identifier Attribute
of the PutElasticsearch processor for? I j
Maybe I am too naive here, but formatting for text-based formats could be done using a template engine.
Matt is right about the user experience, but only if the complexity of this one processor is not too high, I think. Personally, with my user hat on, I don't like dialogs wit
Joe,
Thanks for the reassurance. I found it totally puzzling that the system
took it so hard.
The stacks I could find are:
--- START TRACE 1 ---
2016-03-22 14:00:00,016 ERROR [Provenance Repository Rollover Thread-1]
o.a.n.p.PersistentProvenanceRepository Failed to merge Journal Files
[./repos