Let the scheduler run on every node and fire the task, but inside the task
itself check whether to execute your logic or short-circuit to a no-op.
Since the job does not run often, the check should not add noticeable
overhead, and it lets any node fail over and execute the job, as long as
you have a mechanism to lock on something and record the execution result
to avoid simultaneous or double execution.
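As a rough sketch of that idea (names and the in-memory map are my own; in a real deployment the atomic claim would be an INSERT into a shared table with a unique constraint on job name plus run date, or a distributed lock):

```java
import java.time.LocalDate;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Every node's scheduler fires the job, but only the node that wins an
// atomic claim actually runs the logic; the others no-op. The ConcurrentMap
// here stands in for shared state visible to all nodes (e.g. a DB table
// with a unique key on jobName + runDate).
public class OncePerDayJob {

    private final ConcurrentMap<String, String> executionLog;
    private final String nodeId;

    public OncePerDayJob(ConcurrentMap<String, String> executionLog, String nodeId) {
        this.executionLog = executionLog;
        this.nodeId = nodeId;
    }

    /** Returns true if this node won today's claim and ran the logic. */
    public boolean execute(String jobName) {
        String claimKey = jobName + ":" + LocalDate.now();
        // Atomic claim: only the first node to record the key proceeds.
        // With a database this would be an INSERT that either succeeds
        // or fails on the unique constraint.
        if (executionLog.putIfAbsent(claimKey, nodeId) != null) {
            return false; // another node already ran today's job: no-op
        }
        runBusinessLogic();
        return true;
    }

    private void runBusinessLogic() {
        // ... the actual once-a-day work goes here ...
    }
}
```

With two nodes sharing the same log, only the first call for a given day executes; the second sees the existing record and returns immediately.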

On Thu, Jan 24, 2019, 12:37 PM Juri Berlanda <[email protected]> wrote:

> Hi,
>
> I am currently trying to implement scheduled jobs using DeltaSpike's
> Scheduler module, and I really like how little boilerplate I need for
> getting it up and running.
>
> Our application runs on multiple nodes, but the tasks are very
> inexpensive, run only once a day, and I don't need failover - if they
> fail once, and succeed the day after its totally fine. Therefore I'd
> like to avoid setting up Quartz in clustered mode. But I still want the
> Jobs to only run once. So my idea was to restrict the execution of the
> jobs to a single scheduler node.
>
> So my question is: Is it possible to somehow hook into the Scheduler
> module to say something like:
>
> if (isSchedulerNode())
>    startScheduler();
> else
>    doNothing();
>
> It would be perfect if said isSchedulerNode() could be evaluated on
> system startup (e.g. acquire a distributed lock) and would not rely on
> static values (e.g. config files, environment variables, etc.).
>
> I can see how this is a bad idea in general (no load-balancing, no
> failover) and I do have some ideas on how I would implement that. But
> for these jobs I just don't care about any of this, so I'd like to avoid
> having to set up a whole lot of infrastructure around my application
> just to see this working.
>
> Is there a possibility to achieve this without patching
> deltaspike-scheduler-module-impl?
>
> Kind regards,
>
> Juri
>
>
