Hi Ilya,

If you have Spring on your classpath you may want to look at Spring Batch.

For our projects we've built something similar -- a custom jobs framework
on top of PostgreSQL.

The idea is that there is a coordinator service (a Tapestry service) that
runs in a thread pool and constantly polls special DB tables for new
records. For every new unit of work it creates an instance of a worker
(using `ObjectLocator.autobuild()`) that's capable of processing the job.
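
In rough outline, the coordinator loop looks like the sketch below. This is
illustrative only: `Job`, `JobDao` and `JobWorker` are hypothetical names,
not real Tapestry (or our) classes; only `ObjectLocator.autobuild()` is the
actual Tapestry IoC call.

```java
import org.apache.tapestry5.ioc.ObjectLocator;

// Illustrative only: Job, JobDao and JobWorker are hypothetical names.
public class JobCoordinator implements Runnable {

    private static final long POLL_INTERVAL_MS = 1000;

    private final ObjectLocator locator;
    private final JobDao jobDao;

    public JobCoordinator(ObjectLocator locator, JobDao jobDao) {
        this.locator = locator;
        this.jobDao = jobDao;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            for (Job job : jobDao.claimNewJobs()) {
                // a fresh, fully injected worker per unit of work
                JobWorker worker = locator.autobuild(JobWorker.class);
                worker.process(job);
            }
            try {
                Thread.sleep(POLL_INTERVAL_MS); // back off before next poll
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```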

The polling can be optimised for performance using row-level locks and DB
indexing.
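
On PostgreSQL 9.5+ one way to do that is FOR UPDATE SKIP LOCKED combined
with a partial index, so concurrent pollers skip rows another node has
already claimed instead of blocking on them. A minimal sketch, assuming a
`job` table with `status` and `created_at` columns (the schema and the
`Job` entity are made up for the example):

```java
import java.util.List;
import javax.persistence.EntityManager;

public class JobDao {

    // Suggested partial index so the poll stays cheap even on a big table:
    //   CREATE INDEX job_new_idx ON job (created_at) WHERE status = 'NEW';

    @SuppressWarnings("unchecked")
    public List<Job> claimNewJobs(EntityManager em, int batchSize) {
        // SKIP LOCKED (PostgreSQL 9.5+): rows already locked by another
        // node are skipped instead of blocked on. The locks are held until
        // the transaction ends, so process (or re-mark) the jobs within it.
        return em.createNativeQuery(
                "SELECT * FROM job"
              + " WHERE status = 'NEW'"
              + " ORDER BY created_at"
              + " LIMIT ?1"
              + " FOR UPDATE SKIP LOCKED", Job.class)
            .setParameter(1, batchSize)
            .getResultList();
    }
}
```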

The coordinator runs in the same JVM as the rest of the app, so there's no
dedicated process.
It integrates with Tapestry's EntityManager, so you can create a job
inside a transaction.
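
Enqueueing a job is then just persisting one more entity in the same
transaction as your business data. Something like this, where
`SignupService`, `User` and `Job` are again hypothetical names, and
`@CommitAfter` is the real Tapestry-JPA annotation:

```java
import javax.persistence.EntityManager;
import org.apache.tapestry5.ioc.annotations.Inject;
import org.apache.tapestry5.jpa.annotations.CommitAfter;

// Hypothetical service: the job row is written in the same transaction
// as the business data, so a rollback never leaves an orphaned job.
public class SignupService {

    @Inject
    private EntityManager entityManager;

    @CommitAfter // Tapestry-JPA: commit (or roll back) both writes together
    public void register(User user) {
        entityManager.persist(user);
        entityManager.persist(new Job("SEND_WELCOME_EMAIL", user.getId()));
    }
}
```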

When running in a cluster every JVM has its own coordinator -- this is how
the jobs get distributed.

But you're saying that row-level locking doesn't work for some of your
use cases. Can you be more concrete here?


On Tue, Jun 27, 2017 at 9:35 PM, Ilya Obshadko <ilya.obsha...@gmail.com>
wrote:

> I’ve recently expanded my Tapestry application to run on multiple hosts.
> While it’s quite OK for the web-facing part (sticky load balancer does most
> of the job), it’s not very straightforward with background jobs.
>
> Some of them can be quite easily distributed using database row-level
> locks, but this doesn’t work for every use case I have.
>
> Are there any suggestions about this? I’d prefer not to have a dedicated
> process running background tasks. Ideally, I want to dynamically distribute
> background jobs between hosts in cluster, based on current load status.
>
>
> --
> Ilya Obshadko
>



-- 
Dmitry Gusev

AnjLab Team
http://anjlab.com
