Thanks for the reply. I'm afraid this approach doesn't work with
transactions. How can I process all records in the same transaction?
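To be clear about the behavior I'm after: all statements derived from one source flowfile should either commit together or roll back together. A minimal sketch of that semantics in plain Python/sqlite3 (table and column names here are made up for illustration):

```python
import sqlite3

# Rows split from one hypothetical source flowfile.
records = [("alice", 10), ("bob", 20), ("carol", 30)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (name TEXT, amount INTEGER)")

try:
    # "with conn" opens a transaction: it commits only if every
    # statement succeeds, and rolls back if any of them raises.
    with conn:
        for name, amount in records:
            conn.execute("INSERT INTO target VALUES (?, ?)", (name, amount))
except sqlite3.Error:
    pass  # on failure, none of the inserts persist

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 3 - all rows committed atomically
```

This all-or-nothing behavior per flowfile is what I was hoping PutSQL's Fragmented Transactions would give me.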

On Fri, 10 Feb 2023 at 13:05, Chris Sampson <
chris.samp...@naimuri.com> wrote:

> I don't use the database/SQL processors much, but I see questions about
> these on the Apache NiFi Slack channels quite regularly - you might have
> better luck using ExecuteSQLRecord (which can output Avro, JSON, etc. via
> your chosen RecordSetWriter Controller Service) and then feeding that into
> PutDatabaseRecord (which can read Avro, JSON, etc. depending on your
> configured RecordReader).
>
> If you want to change the data in between, consider other Record-based
> processors such as UpdateRecord or QueryRecord.
>
> On Fri, 10 Feb 2023 at 15:57, João Marcos Polo Junior <jpolojun...@gmail.com>
> wrote:
>
>> I'm trying to create a flow (NiFi 1.18) to query a database (ExecuteSQL),
>> transform its records to JSON (ConvertAvroToJson), split them into JSON
>> objects (SplitJson), and then perform the necessary actions on another
>> database (PutSQL). All JSON objects split from the same original
>> flowfile need to be processed in one transaction, so I'm using
>> PutSQL with Fragmented Transactions set to true.
>>
>> First problem: I can't set the Transaction Timeout to more than "30 sec"
>> because my flowfiles (waiting in the upstream queue) never get
>> processed and never reach the failure connection. They stay stuck in
>> the upstream connection and get penalized, but are never processed or
>> routed to failure when the timeout (more than 30 sec) expires.
>>
>>
>>
>> Second problem: I want to combine the Transaction Timeout attribute with
>> the Penalty Time, Yield Time, or maybe Run Schedule, but that's not
>> working either.
>>
>> Is there a solution for these problems? Is there something I have to
>> configure in the DBCPConnectionPool for this to work?
>>
>> Here’s a similar problem in version 1.12:
>> https://issues.apache.org/jira/browse/NIFI-8733
>>
>>
>>
>> Thanks in advance!
>>
>>
>>
>