On Tue, Nov 15, 2011 at 11:02 AM, Achim Nierbeck
<bcanh...@googlemail.com> wrote:
> Hi Claus,
>
> even though I'm fully with you that this sounds reasonable, I would
> like to see this feature/improvement be optional, or at least
> possible to disable, since right now I'm actually relying on the
> complete rollback of all changes when one of the incoming messages
> is "corrupt" :)
>

Yeah, we could add an option to control this, and then in your case
you could enable it.
I wonder what a good option name would be?
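For example, something like this on the endpoint URI (the option name
is only a placeholder until we settle on one, and the entity and bean
names are just examples):

  // inside a RouteBuilder's configure() method;
  // "transacted=true" is a suggested name, nothing is decided yet
  from("jpa:com.example.IncomingMessage?consumer.delay=5000&transacted=true")
      .to("bean:messageHandler");

With the option enabled you would keep the old all-or-nothing
rollback; with it disabled (the proposed default) the consumer would
commit the good messages and only leave the failed row behind.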


> Regards, Achim
>
> 2011/11/15 Claus Ibsen <claus.ib...@gmail.com>
>
>> Hi
>>
>> The JPA consumer is a bit special, as it consumes messages from a
>> database as if they were from a JMS queue.
>> So it's basically a queue-based solution on top of a database table.
>>
>> So I frankly think we should alter the concept of the transaction,
>> as IMHO it does not make sense to consume X rows from a database
>> table and have them all take part in the same transaction. The
>> reason is that the JPA consumer is schedule based, and will pick up
>> whatever rows are currently in the table (basically a SELECT * FROM
>> MY_TABLE).
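>>
>> For example, a route like this polls the table on a schedule and
>> hands each row to a processor (entity and bean names are just
>> examples):
>>
>>   // inside a RouteBuilder's configure() method
>>   from("jpa:com.example.MyMessage?consumer.delay=5000")
>>       .to("bean:myMessageProcessor");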
>>
>> I think the behavior should be changed to:
>> - on first exception, commit the previously processed good messages
>> - break out, and log a WARN that the current message could not be
>> processed
>>
>> Then upon the next poll from the JPA consumer, all the previous
>> good messages have already been committed, and it will only pick up
>> the previously failed row (+ any additional new rows).
>>
>>
>> With the current behavior you can end up in a situation like this:
>> 1) There are 100 rows in the table
>> 2) The JPA consumer picks up a ResultSet with 100 rows, and
>> processes each row one by one.
>> 3) After processing 69 messages, the 70th fails.
>> 4) All 100 rows are rolled back
>> 5) The JPA consumer is scheduled again, and yet again picks up a
>> ResultSet with 100 rows, processing each row one by one.
>> 6) After processing 69 messages (the same 69 messages as in the
>> previous poll), the 70th fails yet again.
>> 7) All 100 rows are rolled back
>> ... and it will repeat itself.
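>>
>> In pseudo-Java the current poll cycle is roughly this (method and
>> variable names are illustrative, not the actual camel-jpa code):
>>
>>   // one transaction around the whole batch
>>   void pollCycle(EntityTransaction transaction, List<Object> resultList) {
>>       transaction.begin();
>>       try {
>>           for (Object row : resultList) {
>>               process(row); // throws on the 70th row
>>           }
>>           transaction.commit();
>>       } catch (Exception e) {
>>           transaction.rollback(); // all 100 rows roll back
>>       }
>>   }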
>>
>> What I propose is to change the behavior to the following (which
>> you could argue was the old behavior, albeit with a bug that caused
>> it to commit regardless, and to not break out when there was an
>> unhandled exception):
>> 1) There are 100 rows in the table
>> 2) The JPA consumer picks up a ResultSet with 100 rows, and
>> processes each row one by one.
>> 3) After processing 69 messages, the 70th fails.
>> 4) The 69 good rows are committed
>> 5) A WARN is logged about the failure of processing the 70th message
>> 6) The JPA consumer is scheduled again, and picks up a new
>> ResultSet with 41 rows (the 31 rows left over plus, in this
>> example, 10 newly arrived rows), processing each row one by one.
>> 7) Processing the 1st message fails, because that message failed on
>> the last poll as well
>> 8) There are no good messages to commit
>> 9) A WARN is logged about the failure of processing the 1st message
>> ... and it will repeat itself.
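>>
>> Sketched in the same pseudo-Java, the proposed poll cycle would be
>> (again, names are illustrative only):
>>
>>   void pollCycle(EntityTransaction transaction, List<Object> resultList) {
>>       transaction.begin();
>>       try {
>>           for (Object row : resultList) {
>>               process(row); // throws on the first bad row
>>           }
>>       } catch (Exception e) {
>>           // break out; the failed row stays in the table and is
>>           // retried on the next poll
>>           LOG.warn("Cannot process row, will retry on next poll", e);
>>       }
>>       transaction.commit(); // commits only the rows that succeeded
>>   }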
>>
>
>
>
> --
> *Achim Nierbeck*
>
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
> Project Lead
> blog <http://notizblog.nierbeck.de/>
>



-- 
Claus Ibsen
-----------------
FuseSource
Email: cib...@fusesource.com
Web: http://fusesource.com
Twitter: davsclaus, fusenews
Blog: http://davsclaus.blogspot.com/
Author of Camel in Action: http://www.manning.com/ibsen/
