+1 That sounds good

Christian

On 15.11.2011 10:26, Claus Ibsen wrote:
Hi

The JPA consumer is a bit special, as it consumes messages from a database as if they came from a JMS queue. So it is basically a queue-based solution on top of a database table.

So I frankly think we should alter the concept of transaction here, as IMHO it does not make sense to consume X rows from a database table and have them all act in the same transaction. The reason is that the JPA consumer is schedule based and will pick up whatever X rows are currently in the table (basically a SELECT * FROM MY_TABLE).

I think the behavior should be changed to:
- on the first exception, commit the previous good messages
- break out, and log a WARN that the current message could not be processed

Then upon the next poll from the JPA consumer, all the previous good messages have been committed, and it will only pick up the previous bad row (plus any additional new rows).

With the current behavior you can end up with a situation like:
1) There are 100 rows in the table.
2) The JPA consumer picks up a ResultSet with 100 rows and processes each row one by one.
3) After processing 69 messages, the 70th fails.
4) All 100 rows are rolled back.
5) The JPA consumer is scheduled again and once more picks up a ResultSet with 100 rows, processing each row one by one.
6) After processing 69 messages (the same 69 messages from the previous poll), the 70th fails yet again.
7) All 100 rows are rolled back.
... and it repeats itself.

What I propose is to change the behavior to the following (which you could argue was the old behavior, albeit with a bug that caused it to commit regardless of the outcome, and not break out when there was an unhandled exception):
1) There are 100 rows in the table.
2) The JPA consumer picks up a ResultSet with 100 rows and processes each row one by one.
3) After processing 69 messages, the 70th fails.
4) The 69 good rows are committed.
5) A WARN is logged about the failure of processing the 70th message.
6) The JPA consumer is scheduled again and picks up a new ResultSet with 41 rows, processing each row one by one.
7) Processing the 1st message fails, because that message failed on the last poll as well.
8) There are no good messages to commit.
9) A WARN is logged about the failure of processing the 1st message.
... and it repeats itself.
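The proposed poll loop could be sketched roughly like this. This is a hypothetical standalone illustration, not actual camel-jpa code; the class `JpaPollSketch`, the `poll` method, and the `Predicate`-based processing step are made-up names standing in for the consumer's per-row processing and commit:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the proposed behavior: process the polled rows one by one;
// on the first failure, commit the rows processed so far, log a WARN,
// and break out so the next poll starts at the bad row.
public class JpaPollSketch {

    /** Returns the rows that were "committed" by this poll. */
    static <T> List<T> poll(List<T> resultSet, Predicate<T> process) {
        List<T> committed = new ArrayList<>();
        for (T row : resultSet) {
            if (!process.test(row)) {
                // first failure: commit the previous good rows and break out
                System.out.println("WARN: failed to process row " + row
                        + "; committing " + committed.size() + " good rows");
                break;
            }
            committed.add(row); // row processed OK, included in the commit
        }
        return committed;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 1; i <= 100; i++) rows.add(i);
        // as in the example above: row 70 fails, so 69 rows commit
        List<Integer> done = poll(rows, r -> r != 70);
        System.out.println("committed=" + done.size()); // committed=69
    }
}
```

On the next poll only the bad row (and any rows added since) would be selected again, so the 69 good rows are never reprocessed, unlike the current roll-everything-back behavior.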
-- Christian Schneider http://www.liquid-reality.de Open Source Architect Talend Application Integration Division http://www.talend.com