Alexei,

Can you describe the semantics of the task in greater detail?
 * What if two jobs on different nodes access and try to put the same key?
(this can be resolved by allowing a job to access only local primary keys)
 * How do you define the lock acquisition order and prevent deadlocks? I
assume that running such a compute task will involve a lot of keys, so it
is quite unlikely that OPTIMISTIC SERIALIZABLE transactions are applicable
here (see the sketch after this list).
 * Currently a started transaction locks the cache topology, so job
failover in case of a node crash will be very hard to implement.
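For reference, a minimal sketch of what a single job could do today with
the current public API: it is routed to the primary node for its key via
affinityRun(), so it only touches local primary keys, and wraps its update
in an OPTIMISTIC SERIALIZABLE transaction, retrying on a conflict. The
cache name "myCache" and the retry limit are just assumptions for
illustration, not part of the proposal.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionOptimisticException;

    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    public class TxJobSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache("myCache");
            int key = 42;

            // Route the closure to the node holding the primary copy of the key.
            ignite.compute().affinityRun("myCache", key, () -> {
                Ignite local = Ignition.localIgnite();
                IgniteCache<Integer, Integer> c = local.cache("myCache");

                // Assumed retry limit; a real task would need a policy here.
                for (int attempt = 0; attempt < 3; attempt++) {
                    try (Transaction tx = local.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                        Integer val = c.get(key);
                        c.put(key, val == null ? 1 : val + 1);
                        tx.commit(); // Throws TransactionOptimisticException on a conflict.
                        break;
                    }
                    catch (TransactionOptimisticException e) {
                        // Another job updated the same key concurrently; retry.
                    }
                }
            });
        }
    }

With many keys per job the chance of such a conflict grows quickly, which
is why I doubt the optimistic approach scales to this use case.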

2017-01-26 23:36 GMT+03:00 Alexei Scherbakov <alexey.scherbak...@gmail.com>:

> Guys,
>
> How difficult would be to support transactional tasks?
>
> Meaning that every job in a task is executed in its own transaction.
>
> In case of a single job failure or a reduce phase failure, all transactions
> started by jobs are rolled back.
>
> Only if all jobs are executed successfully are the corresponding transactions
> committed.
>
> Also it would be very desirable to implement task failover in a similar
> way to how job failover is implemented.
>
> In case of the master's failure, jobs are rolled back and the task is
> restarted on another node.
>
> This should greatly simplify implementing complex business processes.
>
> --
>
> Best regards,
> Alexei Scherbakov
>
