Re: Wait/Notify inconsistent behavior

2019-01-04 Thread Luis Carmona
I'm sorry, 

I got messages from nifi-users saying "UNCHECKED", and after reading about it I understood 
the message did not get through. 

Thanks for letting me know. 

LC 


De: "Joe Witt"  
Para: "users"  
Enviados: Viernes, 4 de Enero 2019 23:23:02 
Asunto: Re: Wait/Notify inconsistent behavior 

Please avoid sending more copies of the question. Hopefully someone familiar 
with the processors in question will be available in time. 


Thanks 


On Fri, Jan 4, 2019 at 9:14 PM Luis Carmona <lcarm...@openpartner.cl> wrote: 


Hi everyone, 

I'm seeing strange behavior with the Wait/Notify mechanism. Attached is 
the image of the flow. 
Basically I'm trying to insert two records into Elasticsearch 
simultaneously, and if both go OK, then insert a record into a BPM service. 

For that (in the image): 

- Step 1: Set the attribute fragment.identifier to 1 
- Step 2: Send the flow to Wait state, and, 
for 2a I set the attribute filename to 'caso' (without the 
quotes) just before the POST to ElasticSearch 
for 2b I set the attribute filename to 'doc' (without the 
quotes) just before the other POST to ElasticSearch 
- Step 3: On 3a, once the insert is finished, I'm expecting the Notify to 
send the signal to the Wait state, and the same for 3b. 
- Step 4: HERE is my problem: 
Sometimes the Wait has count.caso = 1 and count.doc = 1, so 
everything goes well on the RouteOnAttribute after step 5. 
Other times it receives only one of the notifications, either 
the 'doc' one or the 'caso' one, so as the other never arrives, 
the first one to arrive stays queued. I've checked and the two 
Elasticsearch insertions complete almost immediately, so that's not the problem. 
- Step 5: This is the expected path unless there was a real failure, but it is 
not what is happening. 

Any help or tip would be much appreciated. 

Regards, 

LC 







Re: Wait/Notify inconsistent behavior

2019-01-08 Thread Luis Carmona


Hi Koji,

I tried some manual tests and it seems to be working now; it doesn't have the 
problem I had before. Today I will try it with a massive flow and that will 
be the final check.

Thanks for your help. Do you know where I can get a sample of processing in 
a loop? I mean: send things to a server, wait for its answer, send to a 
second server accumulating the answers of both, and all of this in a finite loop 
determined by the answers, gathering all the answers into one final JSON. 


Thank you very much.

LC



- Original message -
From: "Koji Kawamura" 
To: "users" 
Sent: Monday, January 7, 2019 22:22:12
Subject: Re: Wait/Notify inconsistent behavior

Hi Luis,

Looking forward to knowing how it goes with a consolidated single Notify instance.
Another benefit of doing so is that, by increasing 'Signal Buffer Count',
the number of updates against the Distributed Cache Service can be
decreased in your case, because multiple FlowFiles share the same
signal id. The Notify processor can merge counter deltas locally and then
update the cache entry only once.
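
For reference, a rough sketch of the property settings implied above (the
property names are from the standard Wait/Notify processors; the release
signal id of ${fragment.identifier} and the counter name of ${filename} are
assumptions based on the flow described, not a tested configuration):

    Notify (a single instance fed by both 3a and 3b):
        Release Signal Identifier : ${fragment.identifier}
        Signal Counter Name       : ${filename}
        Signal Counter Delta      : 1
        Signal Buffer Count       : 10   (lets counter deltas be merged locally)
        Distributed Cache Service : the shared DistributedMapCacheClientService

    Wait:
        Release Signal Identifier : ${fragment.identifier}
        Signal Counter Name       : (blank, so the total across all counters is checked)
        Target Signal Count       : 2
        Distributed Cache Service : the same cache service as Notify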

Thanks,
Koji

On Tue, Jan 8, 2019 at 3:50 AM Luis Carmona  wrote:
>
> Hi Koji,
>
> thanks for taking the time to answer my question
>
> In the Wait processor:
> - Signal CounterName is empty (default).
> - Target Signal Count is set to 2
>
> About the Notify processor, I used two of them because I had previously set 
> ${filename} differently in the preceding UpdateAttribute processors.
>
> I attached the image of both processors, and the template as well.
>
>
> The whole point of my layout is to send things to a server, wait for its 
> answer, send to a second server accumulating the answers of both, and all 
> this in a finite loop determined by the answers, gathering all the answers 
> into one final JSON.
>
> For now I'm stuck on the Wait/Notify issue; thanks for the sample, I'll look 
> into it. Then I will see how to get the loop working.
>
> Thanks a lot,
>
> Regards,
>
> LC
>
>
> - Original message -
> From: "Koji Kawamura" 
> To: "users" 
> Sent: Sunday, January 6, 2019 23:42:56
> Subject: Re: Wait/Notify inconsistent behavior
>
> The reason to put two Notify processors is that I'm using different
>
> Hi Luis,
>
> Just a quick question, how are the "Signal Counter Name" and "Target
> Signal Count" properties for the Wait processor configured?
> If you'd like to wait for the two sub-flow branches to complete, then:
> "Signal Counter Name" should be blank, meaning check total count for
> all counter names
> "Target Signal Count" should be 2.
>
> If those are configured like that, then would you be able to share
> your flow as a template for further investigation?
>
> One more thing, although Notify processor cares about atomicity, due
> to the underlying distributed cache mechanism, concurrent writes to
> the same cache identifier can overwrite existing signal, meaning one
> of the two notifications can be lost.
> To avoid this, using the same Notify instance at 3a and 3b in your
> flow is highly recommended.
> Here is an example flow to do that:
> https://gist.github.com/ijokarumawak/6da4bd914236e941071cad103e1186dd
>
> Thanks,
> Koji
>
> On Sat, Jan 5, 2019 at 11:28 AM Joe Witt  wrote:
> >
> > thanks for letting us know.  the lists can be a bit awkward from a user 
> > experience pov.  no worries
> >
> > On Fri, Jan 4, 2019, 9:26 PM Luis Carmona  >>
> >> I'm sorry,
> >>
> >> got messages from nifi-users saying "UNCHECKED", and reading about 
> >> understood the message did not get through.
> >>
> >> Thanks for letting me know.
> >>
> >> LC
> >>
> >> 
> >> De: "Joe Witt" 
> >> Para: "users" 
> >> Enviados: Viernes, 4 de Enero 2019 23:23:02
> >> Asunto: Re: Wait/Notify inconsistent behavior
> >>
> >> Please avoid sending more copies of the question.  Hopefully someone 
> >> familiar with the processors in question will be available in time.
> >>
> >>
> >> Thanks
> >>
> >>
> >> On Fri, Jan 4, 2019 at 9:14 PM Luis Carmona  
> >> wrote:
> >>>
> >>> Hi everyone,
> >>>
> >>> I'm seeing strange behavior with the Wait/Notify mechanism. Attached is
> >>> the image of the flow.
> >>> Basically I'm trying to insert two records into Elasticsearch
> >>> simultaneously, and if both go OK, then insert a record into a BPM 
> >>> service.

Re: PutElasticsearchHttp can not use Flowfile attribute for ES_URL

2019-02-04 Thread Luis Carmona
Hi Jean, 

I'm not even close to being an expert on NiFi, but I did manage to get a 
scenario similar to the one you describe working. 

I was able to read some data, process it and store the JSON payload in ES. I 
used InvokeHTTP instead of the ES processors. 
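
A rough sketch of the InvokeHTTP side (the es.url and es.index attribute names 
are just placeholders for whatever your upstream processors set, not my exact 
properties): 

    HTTP Method  : POST 
    Remote URL   : ${es.url}/${es.index}/_doc 
    Content-Type : application/json 

Since Remote URL in InvokeHTTP evaluates expression language against the 
incoming FlowFile, the target index can come from the document itself. 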

Hope it helps you. 

Regards, 

LC 



De: "Jean-Sebastien Vachon"  
Para: "users"  
Enviados: Lunes, 4 de Febrero 2019 18:16:23 
Asunto: PutElasticsearchHttp can not use Flowfile attribute for ES_URL 

Hi all, 

I was just finishing modifying my flow to make it more reusable by having my 
source document contain information about where to store the final document 
(some Elasticsearch index). 
Everything was fine until I found out that the PutElasticsearchHttp's 
documentation was saying this... 




Supports Expression Language: true (will be evaluated using variable registry 
only) 




It looks like this restriction appeared around Nifi 1.6 (as per the 
documentation)... is there a reason for such a limitation? 

My current flow was extracting the information from the input JSON document and 
saving the information inside a Flow attribute. 

What can I do about this? I don't like monkey patching.. is there any other way 
to get around this? 

Thanks 



Re: PutElasticsearchHttp can not use Flowfile attribute for ES_URL

2019-02-04 Thread Luis Carmona

You are welcome. 

Good Luck ! 

LC 



De: "Jean-Sebastien Vachon"  
Para: "Luis Carmona" , "users"  
Enviados: Lunes, 4 de Febrero 2019 18:45:34 
Asunto: Re: PutElasticsearchHttp can not use Flowfile attribute for ES_URL 

Hi Luis, 

thanks for the hint... this is indeed a good workaround. So simple that I 
should have thought about it 😉 

Regards 

From: Luis Carmona  
Sent: Monday, February 4, 2019 4:23 PM 
To: users 
Subject: Re: PutElasticsearchHttp can not use Flowfile attribute for ES_URL 
Hi Jean, 

I'm not even close to being an expert on NiFi, but I did manage to get a 
scenario similar to the one you describe working. 

I was able to read some data, process it and store the JSON payload in ES. I 
used InvokeHTTP instead of the ES processors. 

Hope it helps you. 

Regards, 

LC 



De: "Jean-Sebastien Vachon"  
Para: "users"  
Enviados: Lunes, 4 de Febrero 2019 18:16:23 
Asunto: PutElasticsearchHttp can not use Flowfile attribute for ES_URL 

Hi all, 

I was just finishing modifying my flow to make it more reusable by having my 
source document contain information about where to store the final document 
(some Elasticsearch index). 
Everything was fine until I found out that the PutElasticsearchHttp's 
documentation was saying this... 




Supports Expression Language: true (will be evaluated using variable registry 
only) 




It looks like this restriction appeared around Nifi 1.6 (as per the 
documentation)... is there a reason for such a limitation? 

My current flow was extracting the information from the input JSON document and 
saving the information inside a Flow attribute. 

What can I do about this? I don't like monkey patching.. is there any other way 
to get around this? 

Thanks 




Re: How do you use ElasticSearch with NiFi?

2019-02-20 Thread Luis Carmona
Hi everyone, 

I've been using NiFi for the last 6 months interacting with ES 6.x: making 
queries, reading and writing data to it, all of that in production environments. 

I haven't used any special processor dedicated to ES, just the HTTP request 
processor, and everything has worked nicely. In terms of stress, the most has 
been 200 FlowFiles (50 KB each) in one call (SplitRecord) and the queues worked 
perfectly. The only detail is that inserts/updates with painless scripts have to 
use parameters, otherwise they crash, but that is an ES issue. 
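
For example, an update by script goes through fine with parameters, roughly 
like this (the index/type/id here are just placeholders): 

    POST /my-index/_doc/1/_update 
    { 
      "script": { 
        "lang": "painless", 
        "source": "ctx._source.counter += params.delta", 
        "params": { "delta": 1 } 
      } 
    } 

whereas inlining the changing value in "source" on every call is what crashed 
for us. 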

Now I'm trying to access ES through GraphQL, and bulk inserts, but just in a lab 
environment. 

Hope this info helps, and if you want I can keep you posted on the results of 
these last two topics. 

Regards, 

LC 




De: "Mike Thomsen"  
Para: "users"  
Enviados: Miércoles, 20 de Febrero 2019 13:40:02 
Asunto: Re: How do you use ElasticSearch with NiFi? 

I've got a PR for a new bulk ingest processor, so I could easily add batching 
the record ingest to that plus something like your PR. I think it might be 
useful to have some enforcement mechanisms that prevent a request from being 
way too big. Last documentation I saw said about 32MB/payload. What do you 
think about that? 

On Wed, Feb 20, 2019 at 11:22 AM Joe Percivall <jperciv...@apache.org> wrote: 



Hey Mike, 
As a data point, we're ingesting into ES v6 using PutElasticsearchHttp and 
PutElasticsearchHttpRecord. We do almost no querying of anything in ES using 
NiFi. Continued improvement around ingesting into ES would be our core 
use-case. 

One item that frustrated me was the issue around failures in the record 
processor that I put up a PR here[1]. Another example of a potential 
improvement would be to not load the entire request body (and thus all the 
records/FF content) into memory when inserting into ES using those processors. 
Not 100% sure how you would go about doing that but would be an awesome 
improvement. Of course, any other improvements around performance would also be 
welcome. 

[1] https://github.com/apache/nifi/pull/3299 

Cheers, 
Joe 

On Wed, Feb 20, 2019 at 8:08 AM Mike Thomsen <mikerthom...@gmail.com> wrote: 


I'm looking for feedback from ElasticSearch users on how they use and how they 
**want** to use ElasticSearch v5 and newer with NiFi. 

So please respond with some use cases and what you want, what frustrates you, 
etc. so I can prioritize Jira tickets for the ElasticSearch REST API bundle. 

(Note: basic JSON DSL queries are already supported via JsonQueryElasticSearch. 
If you didn't know that, please try it out and drop some feedback on what is 
needed to make it work for your use cases.) 

Thanks, 

Mike 





-- 
Joe Percivall 
linkedin.com/in/Percivall 
e: jperciv...@apache.com 





Telnet login and data capture

2019-04-22 Thread Luis Carmona
Hi, 

has anyone used NiFi to read data from a Telnet connection? What I'm trying to 
do is open the telnet connection, send a login string, and then receive all 
the traffic that comes over that connection from the server into NiFi. 



Any clues ? 


Thanks in advance 


High CPU consumption

2019-10-13 Thread Luis Carmona
HI, 

I've been struggling to reduce my NiFi installation's CPU consumption. Even in an 
idle state, with all processors running but no data flowing, it is beyond 60% CPU 
consumption, with peaks of 200%. 

What I've done so far 
- Read and followed every instruction/post about tuning NIFI I've found 
googling. 
- Verify scheduling is 1s for most consuming processors: http processors, 
wait/notify, jolt, etc. 
- Scratch my head... 

But nothing seems to have a major effect on the issue. 

Can anyone give me some precise directions or tips on how to solve this, 
please? Or is this the regular situation, I mean, is this the minimum NiFi 
consumption? 

The server is configured with 4 CPUs and 8 GB RAM, 4 GB of which are dedicated 
to the heap in bootstrap.conf. 
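
(In bootstrap.conf that is roughly the following, assuming the default argument 
numbering: 

    java.arg.2=-Xms4g 
    java.arg.3=-Xmx4g 
) 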

Thanks in advance. 

LC 


Re: High CPU consumption

2019-10-15 Thread Luis Carmona
HI All, 

minutes ago I got to the origin of the problem, or at least a big part of it. 
It was along the same lines Evan indicated. 

In my case it was a port pointing to a ProcessGroup which happened to be 
disabled. I fixed that, and now CPU consumption is between 60% when no data 
is being processed and 100%-120% when processing. 

The diagram has about 200 processors, by the way, so I don't know if 
those ranges can now be considered normal. Anyway, it operates well now. 

I still have more searching to do in case there is something else I got wrong, 
and I'll share it if it is worthwhile. 

Thank you all for the tips and recommendations. 

LC 



From: "Evan Reynolds"  
To: "users"  
Sent: Tuesday, October 15, 2019 5:40:07 PM 
Subject: Re: High CPU consumption 



I have found two issues that can cause high CPU when idle (high being about 
200% CPU when idle.) I haven’t verified these with 1.9.2, but it doesn’t hurt 
to tell you. 



If you are using ports, make sure each input port is connected. If you have 
one that isn’t connected, that can bring your CPU to 200% and keep it there. 



If you have any processors that are set to run on the primary node of a 
cluster, that can also take your CPU to 200%. I know RouteOnAttribute will do 
that (again, haven’t tested 1.9.2, but it was a problem for me for a bit!) The 
fix – either don’t run it on the primary node, or else set the run schedule to 
5 seconds or something instead of 0. 



To find out if this is the case – well, this is what I did. It worked, and 
wasn’t that hard, though isn’t exactly elegant. 



Back up your flow definition file (flow.xml.gz) 

Stop all your processors and see what your CPU does 

Start half of them and watch your CPU – basically do a binary search. If your 
CPU stays reasonable, then whatever group you started is fine. If not, then 
start stopping things until your CPU becomes reasonable. 

Eventually you’ll find a processor that spikes your CPU when you start it and 
then you can figure out what to do about that processor. Record which processor 
it is and how you altered it to bring CPU down. 

Repeat, as there may be more than one processor spiking CPU. 

Stop NiFi and restore your flow.xml.gz by copying it back in place – since you 
were running around stopping things, this just makes sure everything is 
correctly back to where it should be 



Then use the list of processors and fixes to make changes. 




--- 



Evan Reynolds 

e...@usermind.com 







From: Jon Logan  
Reply-To: "users@nifi.apache.org"  
Date: Sunday, October 13, 2019 at 6:12 PM 
To: "users@nifi.apache.org"  
Subject: Re: High CPU consumption 





That isn't logback, that lists all jars on your classpath, the first of which 
happens to be logback. 





If you send a SIGQUIT (kill -3; you can send it in htop) it will dump the thread stacks 
to stdout (probably the bootstrap log)...but that's just for one instant, and 
may or may not be helpful. 





On Sun, Oct 13, 2019 at 8:58 PM Luis Carmona <lcarm...@openpartner.cl> wrote: 





hi Aldrin, 





thanks a lot, by now I'm trying to learn how to make the profiling you 
mentioned. 





One more question: Is it normal that the parent Java process has very low 
consumption while the child process associated with the logback jar is the one 
that is eating up all the CPU? 


Please take a look at the attached image. 





Thanks, 





LC 






From: "Aldrin Piri" < [ mailto:aldrinp...@gmail.com | aldrinp...@gmail.com ] > 
To: "users" < [ mailto:users@nifi.apache.org | users@nifi.apache.org ] > 
Sent: Sunday, October 13, 2019 9:30:47 PM 
Subject: Re: High CPU consumption 





Luis, please feel free to give us some information on your flow so we can help 
you track down problematic areas as well. 





On Sun, Oct 13, 2019 at 3:56 PM Jon Logan <jmlo...@buffalo.edu> wrote: 




You should put a profiler on it to be sure. 


Just because your processors aren't processing data doesn't mean they are idle 
though -- many have to poll for new data, especially sources -- ex. connecting 
to Kafka, etc, will itself consume some CPU. 





But again, you should attach a profiler before participating in a wild goose 
chase of performance issues. 





On Sun, Oct 13, 2019 at 12:20 PM Luis Carmona <lcarm...@openpartner.cl> wrote: 




HI, 





I've been struggling to reduce my NiFi installation's CPU consumption. Even in an 
idle state, with all processors running but no data flowing, it is beyond 60% CPU 
consumption, with peaks of 200%. 





What I've done so far 


- Read and followed every instruction/post about tuning NIFI I've found 
googling. 


- Verify scheduling is 1s for most consuming processors: http processors, 
wait/notify, jolt, etc. 

Re: High CPU consumption

2019-10-16 Thread Luis Carmona
Sunday, October 13, 2019 at 6:12 PM
>> To: "users@nifi.apache.org" 
>> Subject: Re: High CPU consumption
>>
>> That isn't logback, that lists all jars on your classpath, the first of 
>> which happens to be logback.
>>
>> If you send a SIGQUIT (kill -3; you can send it in htop) it will dump the thread 
>> stacks to stdout (probably the bootstrap log)...but that's just for one 
>> instant, and may or may not be helpful.
>>
>> On Sun, Oct 13, 2019 at 8:58 PM Luis Carmona  wrote:
>>
>> hi Aldrin,
>>
>> thanks a  lot, by now I'm trying to learn how to make the profiling you 
>> mentioned.
>>
>> One more question: Is it normal that the parent Java process has very low 
>> consumption while the child process associated with the logback jar is the one that 
>> is eating up all the CPU?
>> Please take a look at the attached image.
>>
>> Thanks,
>>
>> LC
>>
>> 
>> From: "Aldrin Piri" 
>> To: "users" 
>> Sent: Sunday, October 13, 2019 9:30:47 PM
>> Subject: Re: High CPU consumption
>>
>> Luis, please feel free to give us some information on your flow so we can 
>> help you track down problematic areas as well.
>>
>> On Sun, Oct 13, 2019 at 3:56 PM Jon Logan  wrote:
>>
>> You should put a profiler on it to be sure.
>> Just because your processors aren't processing data doesn't mean they are 
>> idle though -- many have to poll for new data, especially sources -- ex. 
>> connecting to Kafka, etc, will itself consume some CPU.
>>
>> But again, you should attach a profiler before participating in a wild goose 
>> chase of performance issues.
>>
>> On Sun, Oct 13, 2019 at 12:20 PM Luis Carmona  
>> wrote:
>>
>> HI,
>>
>> I've been struggling to reduce my NiFi installation's CPU consumption. Even in idle 
>> state, all processors running but no data flowing, it is beyond 60% CPU 
>> consumption, with peaks of 200%.
>>
>> What I've done so far
>> - Read and followed every instruction/post about tuning NIFI I've found 
>> googling.
>> - Verify scheduling is 1s for most consuming processors: http processors, 
>> wait/notify, jolt, etc.
>> - Scratch my head...
>>
>> But nothing seems to have a major effect on the issue.
>>
>> Can anyone give me some precise directions or tips about how to solve this 
>> please ?
>> Is this the regular situation, I mean, is this the minimum NiFi consumption?
>>
>> The server is configured with 4 CPUs and 8 GB RAM, 4 GB of which are dedicated to the heap 
>> in bootstrap.conf.
>>
>> Thanks in advance.
>>
>> LC
>>
>>


Re: No data provenence after some time of inactivity

2019-11-29 Thread Luis Carmona


Hi Dieter,

I got that problem once, with version 1.10, and it was finally solved once I 
corrected the Max Open Files configuration in the operating system. It should 
be 65536.

I can't assure you it is that, but check it with 'ulimit -n'.
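
If it comes back low (1024 is a common default), one way to raise it is via
/etc/security/limits.conf for the user that runs NiFi (the user name here is an
assumption):

    nifi  soft  nofile  65536
    nifi  hard  nofile  65536

and then log that user in again / restart NiFi so the new limit applies.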

Regards,

LC




- Original message -
From: "Dieter Scholz" 
To: "users" 
Sent: Friday, November 29, 2019 7:31:39
Subject: No data provenence after some time of inactivity

Hello,

I'm currently evaluating Nifi Version 1.10 and I must say I'm quite
impressed.

But there's one issue I'm not able to solve.

I have a test flow with 3 processors (GetFile, ExecuteStreamCommand and
LogAttribute). When I start NiFi everything works as expected. But after
some time of inactivity (1 day), when I restart the flow no data
provenance entries are generated even though the flow produces FlowFiles
with the expected data. After a restart of NiFi data provenance entries
are produced as before.

What can I do?

Thanks for your help.

Regards, Dieter


Re: No data provenence after some time of inactivity

2019-11-29 Thread Luis Carmona
No, no. 

It is a Linux parameter; read about "setting max open files". The command is 
'ulimit -n'. 

LC 



De: "Joe Witt"  
Para: "users"  
Enviados: Viernes, 29 de Noviembre 2019 15:40:57 
Asunto: Re: No data provenence after some time of inactivity 

It is likely the default settings in nifi.properties should be changed for 
provenance. Have they? 

Thanks 

On Fri, Nov 29, 2019 at 1:39 PM Luis Carmona <lcarm...@openpartner.cl> wrote: 



Hi Dieter, 

I got that problem once, with version 1.10, and it was finally solved once I 
corrected the Max Open Files configuration in the operating system. It should 
be 65536. 

I can't assure you it is that, but check it with 'ulimit -n'. 

Regards, 

LC 




- Original message - 
From: "Dieter Scholz" <rd-d...@gmx.net> 
To: "users" <users@nifi.apache.org> 
Sent: Friday, November 29, 2019 7:31:39 
Subject: No data provenence after some time of inactivity 

Hello, 

I'm currently evaluating Nifi Version 1.10 and I must say I'm quite 
impressed. 

But there's one issue I'm not able to solve. 

I have a test flow with 3 processors (GetFile, ExecuteStreamCommand and 
LogAttribute). When I start NiFi everything works as expected. But after 
some time of inactivity (1 day), when I restart the flow no data 
provenance entries are generated even though the flow produces FlowFiles 
with the expected data. After a restart of NiFi data provenance entries 
are produced as before. 

What can I do? 

Thanks for your help. 

Regards, Dieter 






Re: No data provenence after some time of inactivity

2019-12-01 Thread Luis Carmona


You are welcome !

LC

- Original message -
From: "Dieter Scholz" 
To: "users" 
Sent: Sunday, December 1, 2019 5:20:04
Subject: Re: No data provenence after some time of inactivity

Hello,

I think changing the Linux parameter solved my problem.

Thanks for your help.

Regards, Dieter

Am 29.11.2019 um 20:51 schrieb Luis Carmona:
> No, no.
>
> It is a Linux parameter; read about "setting max open files". The command is 
> 'ulimit -n'
>
> LC
>
>
>
> De: "Joe Witt" 
> Para: "users" 
> Enviados: Viernes, 29 de Noviembre 2019 15:40:57
> Asunto: Re: No data provenence after some time of inactivity
>
> It is likely the default settings in nifi.properties should be changed for 
> provenance. Have they?
>
> Thanks
>
> On Fri, Nov 29, 2019 at 1:39 PM Luis Carmona <lcarm...@openpartner.cl> wrote:
>
>
>
> Hi Dieter,
>
> I got that problem once, with version 1.10, and it was finally solved once I 
> corrected the Max Open Files configuration in the operating system. It should 
> be 65536.
>
> I can't assure you it is that, but check it with 'ulimit -n'.
>
> Regards,
>
> LC
>
>
>
>
> - Original message -
> From: "Dieter Scholz" <rd-d...@gmx.net>
> To: "users" <users@nifi.apache.org>
> Sent: Friday, November 29, 2019 7:31:39
> Subject: No data provenence after some time of inactivity
>
> Hello,
>
> I'm currently evaluating Nifi Version 1.10 and I must say I'm quite
> impressed.
>
> But there's one issue I'm not able to solve.
>
> I have a test flow with 3 processors (GetFile, ExecuteStreamCommand and
> LogAttribute). When I start NiFi everything works as expected. But after
> some time of inactivity (1 day), when I restart the flow no data
> provenance entries are generated even though the flow produces FlowFiles
> with the expected data. After a restart of NiFi data provenance entries
> are produced as before.
>
> What can I do?
>
> Thanks for your help.
>
> Regards, Dieter
>
>
>
>
>


POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
Hi everyone,

Hoping everybody is doing OK, wherever you are. I need some help, please.

Has anyone sent a file and parameters to a REST endpoint using
InvokeHTTP with multipart/form-data as the MIME type?

I can't figure out how to include the -F option, speaking in terms
of curl syntax.

I really need this done through NiFi, so any help will be highly
appreciated.

Thanks in advance.

LC



Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
Hi Wesley,

no, I haven't used any processor for running things outside of NiFi itself.

Will give it a try, thanks.

LC



On Mon, 2020-04-27 at 14:26 -0300, Wesley C. Dias de Oliveira wrote:
> Hello, Luis.
> 
> Have you tried to send with ExecuteProcessor?
> 
> 
> 
> Using that way you can invoke curl explicit to run your command.
> 
> On Mon, Apr 27, 2020 at 14:21, Luis Carmona <
> lcarm...@openpartner.cl> wrote:
> > Hi everyone,
> > 
> > Hoping everybody is doing ok, wherever you are, need some help
> > please.
> > 
> > Has anyone sent a file and parameters to a REST endpoint using
> > InvokeHTTP with multipart/form-data as the MIME type?
> > 
> > I can't figure out how to include the -F option, speaking in
> > terms of curl syntax.
> > 
> > I really need this done through NiFi, so any help will be highly
> > appreciated.
> > 
> > Thanks in advance.
> > 
> > LC
> > 
> 
> 



Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
Thank you all,

Wesley and Etienne, is there any documentation about how to
connect a script in JavaScript to NiFi resources (InputStream,
OutputStream, errors, and so on)?


Otto, sure, I can give it a try; I am desperate for this solution. What
you mention means I have to look for a tutorial on adding a custom
processor, right?


Thanks again,

LC




On Mon, 2020-04-27 at 14:52 -0300, Wesley C. Dias de Oliveira wrote:
> Owh!
> 
> Great, Otto!
> 
> Good news!
> 
> On Mon, Apr 27, 2020 at 14:50, Otto Fowler <
> ottobackwa...@gmail.com> wrote:
> > What good timing, I just did : 
> > https://github.com/apache/nifi/pull/4234
> > If you can build and try that would be sweet!  or maybe a review! 
> > 
> > On April 27, 2020 at 13:45:42, Etienne Jouvin (
> > lapinoujou...@gmail.com) wrote:
> > > Hello.
> > > 
> > > I did it with a processor ExecuteGroovyScript.
> > > 
> > > The script body is something like :
> > > 
> > > import org.apache.http.entity.mime.MultipartEntityBuilder
> > > import org.apache.http.entity.ContentType
> > > 
> > > flowFileList = session.get(100)
> > > if(!flowFileList.isEmpty()) {
> > >   flowFileList.each { flowFile -> 
> > > def multipart
> > > String text = flowFile.read().getText("UTF-8")
> > > 
> > > flowFile.write{streamIn, streamOut->
> > >   multipart = MultipartEntityBuilder.create()
> > > //specify multipart entries here
> > > .addTextBody("object", text,
> > > ContentType.APPLICATION_JSON)
> > > .addBinaryBody("content", new
> > > File(flowFile.'document.content.path'),
> > > ContentType.create(flowFile.'document.mime.type'),
> > > flowFile.'document.name')
> > > .build()
> > >   multipart.writeTo(streamOut)
> > > }
> > > //set the `documentum.action.rest.content.type` attribute to
> > > be used as `Content-Type` in InvokeHTTP
> > > flowFile.'document.content.type' =
> > > multipart.getContentType().getValue()
> > > session.transfer(flowFile, REL_SUCCESS)
> > >   }
> > > }
> > > 
> > > 
> > > Attributes are :
> > > document.content.path : content path
> > > document.mime.type : content mime type
> > > document.name : binaire content name
> > > 
> > > Output update attribute
> > > document.content.type : multipart content type.
> > > 
> > > You need some extra librairies :
> > > httpcore-4.4.12.jar
> > > httpmime-4.5.10.jar
> > > 
> > > This will build a multipart as the flowfile content and you can
> > > use it for the call after.
> > > 
> > > 
> > > Etienne
> > > 
> > > 
> > > On Mon, Apr 27, 2020 at 19:21, Luis Carmona <
> > > lcarm...@openpartner.cl> wrote:
> > > > Hi everyone,
> > > > 
> > > > Hoping everybody is doing ok, wherever you are, need some help
> > > > please.
> > > > 
> > > > Does anyone has sent a file and parameters to a REST point
> > > > using
> > > > Invokehhtp with multipart/form-data as mime-type ?
> > > > 
> > > > I can't figure out how to include the -F , speaking
> > > > in terms
> > > > of curl syntax.
> > > > 
> > > > I really need this done throught NIFIso any help will be highly
> > > > apreciated.
> > > > 
> > > > Thanks in advance.
> > > > 
> > > > LC
> > > > 
> 
> 



Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
OK Otto,

got it.

LC



On Mon, 2020-04-27 at 11:05 -0700, Otto Fowler wrote:
> No, Luis,  if the PR is accepted and lands, it will be in the next
> released version of nifi after that.
> 
> If you build nifi yourself, it will be available when you build
> master after it lands. 
> 
> On April 27, 2020 at 13:56:36, Luis Carmona (lcarm...@openpartner.cl)
> wrote:
> > Thank you all, 
> > 
> > Wesley and Etienne, is there any documentation source about how to 
> > connect a script in javascript to nifi resources, InputStream, 
> > OutputStream, Erros, and so on ? 
> > 
> > 
> > Otto, sure I can give it a try, I am desperate for this solution.
> > What 
> > you mention means I have to look for a tutorial about adding a
> > custom 
> > processor right ? 
> > 
> > 
> > Thanks again, 
> > 
> > LC 
> > 
> > 
> > 
> > 



Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
Hi Wesley,

I couldn't use the Execute processor as it doesn't receive an input FlowFile
(or I didn't find out how to connect it), and I need to give the
processor the file that should be sent.

Thanks anyway.

Will try with a script now.

LC




On Mon, 2020-04-27 at 14:26 -0300, Wesley C. Dias de Oliveira wrote:
> Hello, Luis.
> 
> Have you tried to send with ExecuteProcessor?
> 
> 
> 
> Using that way you can invoke curl explicit to run your command.
> 
> On Mon, Apr 27, 2020 at 14:21, Luis Carmona <
> lcarm...@openpartner.cl> wrote:
> > Hi everyone,
> > 
> > Hoping everybody is doing ok, wherever you are, need some help
> > please.
> > 
> > Has anyone sent a file and parameters to a REST endpoint using
> > InvokeHTTP with multipart/form-data as the MIME type?
> > 
> > I can't figure out how to include the -F option, speaking in
> > terms of curl syntax.
> > 
> > I really need this done through NiFi, so any help will be highly
> > appreciated.
> > 
> > Thanks in advance.
> > 
> > LC
> > 
> 
> 



Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Luis Carmona
Hi Otto,

I compiled your version, and it DID WORK!!

How risky is it to use this "version" in a production environment?

Thanks a lot.

LC




On Mon, 2020-04-27 at 11:05 -0700, Otto Fowler wrote:
> No, Luis,  if the PR is accepted and lands, it will be in the next
> released version of nifi after that.
> 
> If you build nifi yourself, it will be available when you build
> master after it lands. 
> 
> On April 27, 2020 at 13:56:36, Luis Carmona (lcarm...@openpartner.cl)
> wrote:
> > Thank you all, 
> > 
> > Wesley and Etienne, is there any documentation source about how to 
> > connect a script in javascript to nifi resources, InputStream, 
> > OutputStream, Erros, and so on ? 
> > 
> > 
> > Otto, sure I can give it a try, I am desperate for this solution.
> > What 
> > you mention means I have to look for a tutorial about adding a
> > custom 
> > processor right ? 
> > 
> > 
> > Thanks again, 
> > 
> > LC 
> > 
> > 
> > 
> > 



ConsumeIMAP certificates issue

2020-05-06 Thread Luis Carmona
Hi guys,

I have a project that needs to receive the mail flow from an IMAP
server.

If I try to read from port 993, get the error:

sun.security.provider.certpath.SunCertPathBuilderException: unable to
find valid certification path to requested target

If I try to read from port 143, get the error:

Unrecognized SSL message, plaintext connection



As my mail server accepts only secure logins, I presume it is complaining
about the corresponding certificate.

The question is how to configure where it should read the
certificate from.
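
So far the only approach I can think of (an assumption on my part, not
something I have verified against ConsumeIMAP) is to import the server's
certificate into the truststore the NiFi JVM uses, something like:

    keytool -importcert -alias my-mail-server -file mailserver.pem \
        -keystore /path/to/truststore.jks -storepass changeit

and then point the JVM at that truststore, but I don't know if that is the
intended way for this processor.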


Thanks in advance.

Regards,

LC






Re: ExecuteSQL not working

2020-05-08 Thread Luis Carmona
Hi Juan Pablo,

I did, but jTDS was the only way to achieve the connection. The
official JDBC driver always issued errors about TLS protocol problems.

After some reading, it seems to be because the SQL Server is too old.

With jTDS I got the connection, and I was able to list the database
tables. But the ExecuteSQL processor is not working.

Regards,

LC




On Fri, 2020-05-08 at 02:27 -0300, Juan Pablo Gardella wrote:
> Did you try using mssql official jdbc driver?
> 
> On Fri, 8 May 2020 at 01:34, Luis Carmona 
> wrote:
> > Hi everyone,
> > 
> > I am trying to execute a query against an MS SQL Server, through the jTDS
> > driver, but I can't figure out why it is giving me errors all the time.
> > 
> > If I leave the processor as it is, setting the controller service
> > obviously, it throws the error in the image saying "empty name".
> > 
> > If I set the processor with Normalize Table/Column Names and Use Avro
> > Logical Types set to true, then it throws the error in the image saying
> > "Index out of range".
> > 
> > The query is as simple as this:
> > 
> > SELECT MAX(ORDEN)  
> > FROM demo_planta2.dbo.ORDEN_VENTA_CAB
> >   WHERE
> >   CODEMPRESA=2
> >   AND
> >   CODTEMPO=1;
> > 
> > Please some tip about what could be wrong in my settings.
> > 
> > Regards,
> > 
> > LC
> > 
> > 



Re: ExecuteSQL not working

2020-05-08 Thread Luis Carmona



Thanks Juan Pablo.

It did work !!
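
(For the record, the working query just needed an alias on the aggregate,
roughly like this; the alias name is arbitrary:

    SELECT MAX(ORDEN) AS MAX_ORDEN
    FROM demo_planta2.dbo.ORDEN_VENTA_CAB
    WHERE CODEMPRESA=2 AND CODTEMPO=1;
)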

Thanks.

LC



On Fri, 2020-05-08 at 13:54 -0300, Juan Pablo Gardella wrote:
> Try again by adding a column alias name to the results.
> 
> On Fri, May 8, 2020, 12:21 PM Luis Carmona 
> wrote:
> > Hi juan Pablo,
> > 
> > I did, but jTDS was the only way achive the connection. With the
> > offical jdbc driver always issued error about TSL protocol
> > problems.
> > 
> > After some reading, it seems to be because the SQL Server is too
> > old.
> > 
> > And with jTDS I got the coneection, and was able to execute
> > Database
> > list Tables. But the processor ExecuteSQL is not working.
> > 
> > Regards,
> > 
> > LC
> > 
> > 
> > 
> > 
> > On Fri, 2020-05-08 at 02:27 -0300, Juan Pablo Gardella wrote:
> > > Did you try using mssql official jdbc driver?
> > > 
> > > On Fri, 8 May 2020 at 01:34, Luis Carmona <
> > lcarm...@openpartner.cl>
> > > wrote:
> > > > Hi everyone,
> > > > 
> > > > I am trying to execute a query to an MS SQL Server, through
> > jTDS
> > > > driver, but can't figure why is it giving me error all the
> > time.
> > > > 
> > > > If I let the processor as it is, setting the controller service
> > > > obviously, throws the error of image saying "empty name".
> > > > 
> > > > If I set the processor with Normaliza Table/Columns and Use
> > Avro
> > > > Types
> > > > to TRUE, then throws the error of the image saying "Index out
> > of
> > > > range"
> > > > 
> > > > Th query is as simple as this:
> > > > 
> > > > SELECT MAX(ORDEN)  
> > > > FROM demo_planta2.dbo.ORDEN_VENTA_CAB
> > > >   WHERE
> > > >   CODEMPRESA=2
> > > >   AND
> > > >   CODTEMPO=1;
> > > > 
> > > > Please some tip about what could be wrong in my settings.
> > > > 
> > > > Regards,
> > > > 
> > > > LC
> > > > 
> > > > 
> > 



Fwd: Nifi 2.0-M3 cannot set attributes

2024-06-05 Thread Luis Carmona


Hi everybody,

As I'm not a Java programmer, the news about Python becoming a first 
class citizen in NiFi 2 was a great thing.


I am creating processors, but there is an issue I cannot figure out 
about handling an exception in Python code and therefore sending the 
FlowFile down the 'failure' path.


Specifically, when I catch an exception it does send the FlowFile to failure, 
but it completely ignores the parameters I set for the content (string) and 
attributes (dict), and every time it just sends the original content plus 
the standard attributes.


I have tried:

- Creating a FlowFileTransform instance and then trying to modify its 
properties to finally use it in the return statement


- Returning a FlowFileTransform just at the point of the return, having 
several return points according to code conditions.


- Creating a variable (dict) that changes during the transformation 
code, and finally assigning the variable items to a new 
FlowFileTransform instance (flowfile, contents, attributes).


But nothing works: it goes to 'failure' but ignores the content I 
try to set. A minimal sketch of the pattern is right below.
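
(Sketch, assuming the FlowFileTransformResult constructor accepts
relationship, contents and attributes keyword arguments, as in the attached
processor; the names here are made up for the example.)

    from nifiapi.flowfiletransform import FlowFileTransform, FlowFileTransformResult
    import json

    class FailureDemo(FlowFileTransform):
        class Java:
            implements = ['org.apache.nifi.python.processor.FlowFileTransform']
        class ProcessorDetails:
            version = '0.0.1'
            description = 'Reduced example: route to failure with custom content/attributes'

        def __init__(self, **kwargs):
            pass

        def transform(self, context, flowfile) -> FlowFileTransformResult:
            try:
                payload = json.loads(flowfile.getContentsAsBytes().decode())
            except Exception as e:
                # Expectation: this content and these attributes reach the FlowFile
                # in the 'failure' queue; observed: they are ignored and the
                # original content passes through unchanged.
                return FlowFileTransformResult(relationship='failure',
                                               contents=str(e),
                                               attributes={'error.reason': str(e)})
            return FlowFileTransformResult(relationship='success',
                                           contents=json.dumps(payload))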



I am using:

- Java 21.0.2

- python 3.10.12

- nifi 2.0.0-M3


I am attaching a reduced version that recreates the behavior.


Can anyone give me a tip about this? Maybe I missed something in the 
documentation. Any idea what could be happening?



Thanks in advance

LC

from nifiapi.flowfiletransform import FlowFileTransform, FlowFileTransformResult
from nifiapi.properties import PropertyDescriptor, StandardValidators
from nifiapi.documentation import use_case, multi_processor_use_case, ProcessorConfiguration

import json
from pydantic import BaseModel, ValidationError


@use_case(
    description="Create request to get Access Token from Authorization System",
    notes="The input for this use case is expected to be a FlowFile whose content is \
           a JSON document, indicating user's id & secret",
    keywords=["auth", "bs"],
    configuration="""
    """
)
class BS_UserToken(FlowFileTransform):
    class Java:
        implements = ['org.apache.nifi.python.processor.FlowFileTransform']

    class ProcessorDetails:
        version = '0.7.a'
        description = """Create User Authorization Token"""
        tags = ["python", "bs", "auth", "zitadel"]

    class UserId(BaseModel):
        client_id: str
        client_secret: str

    PROCESSOR_NAME = 'BS_API_UserToken'
    PROCESSOR_VERSION = '0.7.a'
    LOG_PREFIX_BASE = f'\n{PROCESSOR_NAME}-v{PROCESSOR_VERSION}'

    def __init__(self, jvm=None, **kwargs):
        self.jvm = jvm
        """ ### PROPERTIES START ### """
        self.authServerDomain = PropertyDescriptor(
            name="Auth Server Url",
            description="The Auth server base URL",
            required=True,
            # default_value="http://localhost:8010",
            # allowable_values=["http://localhost:8010"],
            # dependencies=[]
        )
        self.authServerTokenUri = PropertyDescriptor(
            name="Auth Server Token-Endpoint Uri",
            description="The Auth Server token endpoint URI",
            required=True,
            # default_value="/token",
            # allowable_values=["/token"],
            # dependencies=[]
        )
        self.projectId = PropertyDescriptor(
            name="Project Id",
            description="Identification of service to ask for access",
            required=True,
            # default_value="",
            # allowable_values=[],
            # dependencies=[]
        )
        self.property_descriptors = [self.authServerDomain,
                                     self.authServerTokenUri,
                                     self.projectId]
        """ ### PROPERTIES END ### """

    def onScheduled(self, context):
        try:
            # import pydevd_pycharm
            # pydevd_pycharm.settrace('localhost', port=5678, stdoutToServer=True, stderrToServer=True)
            self.LOG_PREFIX_BASE = f'{self.LOG_PREFIX_BASE}-[{self.__dict__["identifier"]}]'
            self.logger.info(f'\n{self.LOG_PREFIX_BASE} Debugger Not Enabled\n')
        except Exception as e:
            self.logger.error(f'\n{self.LOG_PREFIX_BASE} Failed to connect to python debug listener: {e}\n')

    def getPropertyDescriptors(self):
        return self.property_descriptors

    def transform(self, context, flowfile) -> FlowFileTransformResult:
        LOG_PREFIX = f'{self.LOG_PREFIX_BASE} FF:{flowfile.getAttribute("filename")} >>'
        self.logger.info(f'\n\n\n{LOG_PREFIX} START Processing ...')

        if flowfile.getSize():
            ff_content = flowfile.getContentsAsBytes().decode()

            try:
                ff_content_dict = json.loads(ff_content)
                self.logger.info(f'{LOG_PREFIX} ff_content: {ff_content}')
                # self.UserId( **ff_content_dict )

                authServerUrl = context.getProperty(self.aut

Re: Nifi 2.0-M3 cannot set attributes

2024-06-05 Thread Luis Carmona



Thank you for the quick answer David.

I guess the only option for now is to wait...


Regards,


LC





On 2024-06-05 at 17:29, David Handermann wrote:

Hi Luis,

Thanks for reporting this problem and providing the background
details. This is a known issue related to applying FlowFile attributes
with the failure relationship, and it is being tracked in the following
Jira issue:

https://issues.apache.org/jira/browse/NIFI-13324

Regards,
David Handermann

On Wed, Jun 5, 2024 at 4:20 PM Luis Carmona  wrote:


Hi everybody,

As I'm not a Java programmer, the news about Python becoming a first
class citizen in NiFi 2 was a great thing.

I am creating processors, but there is an issue I can not figure out
about controlling an Exception in python code and therefore sending the
flowfile to 'failure' way.

Specifically when I catch an exception it does send the ff to failure,
but completely ignores the paramaters I set for content (string)  and
attributes(dict) and all the time just sends the original content plus
the standard attributes.

I have tried:

- Creating FlowFileTransform instance and then trying to modify its
properties to finally use it in the return statement

- Returning a FlowFileTransform just at the point of the return, having
several return points according to code conditions.

- Creating a variable (dict), that changes during the transformation
lines of code, and finally assigning the variable items to
FlowFileTransform new instance (flowfile, contents, attributes).

But nothing, doesn't work, goes to 'failure' but ignores the content I
try to put.


I am using:

- Java 21.0.2

- python 3.10.12

- nifi 2.0.0-M3


I am attaching a reduced version that recreates the behavior.


Can anyone give me tip about this or maybe I missed something on
documentation, or any idea, what could be happening ?


Thanks in advance

LC



Processor Python Creation

2024-06-07 Thread Luis Carmona

Hi guys,

according to what I understand from the documentation, every time I 
create a Python processor and manage to add it to NiFi, it should be 
started from a new instance of the selected Python interpreter. To be 
more accurate, it should be started from its own new virtual 
environment, created under .../work/..., and that Python should start 
the Python module as a new process.


But there are a couple of things I'm either doing wrong or not 
understanding well from the documentation, because:


1. Every time I modify the Python code of a processor and stop it, all 
the dependencies are re-installed. It doesn't matter whether I just changed code 
or added a new library. Is that OK?


2. When I stop and/or disable the custom processor, the OS process 
doesn't die; it stays running or in some state. The sequence I think 
happens, watching Linux htop, is:


- Java starts a controller (Python)

- The custom Python component starts the Python controller as well (?)

I am going nuts with some strange behavior here, because it is not 
easy to tell whether the process hung, I have a bug, or the 
processor hasn't actually been reloaded and restarted.



Is there any document about the mechanics of this that I can read?


Thanks in advance


LC