Hi,
However, even with the Job being persisted in the repository, the call to 
JobManager.getJobById(…) sometimes returns null.
In my case this is most probably because the underlying search index is 
generated asynchronously, see 
https://jackrabbit.apache.org/oak/docs/query/indexing.html#async-indexing.

The Javadoc on the Sling side does not mention anything in this regard, but I 
find it highly confusing that after a job has been scheduled it is not 
accessible via JobManager.getJobById(…) at certain points in time, despite the 
fact that the job is:
- queued,
- active or
- finished (and persisted)

just because the underlying index has not yet been updated.
Is the consumer supposed to retry several times before it can be sure that the 
job is really no longer there, or is the index used for job lookups rather 
supposed to be synchronous?

Thanks,
Konrad

> On 19. Aug 2025, at 10:51, Konrad Windszus <[email protected]> wrote:
> 
> Sorry for the noise, in the meantime I found the option in the queue configuration: 
> https://github.com/apache/sling-org-apache-sling-event-api/blob/5355bc5675b1afc8b26006ea9188d5ef6e25bd7c/src/main/java/org/apache/sling/event/jobs/QueueConfiguration.java#L89
> 
> However, this option is not exposed for the main queue, see 
> https://github.com/apache/sling-org-apache-sling-event/blob/master/src/main/java/org/apache/sling/event/impl/jobs/config/MainQueueConfiguration.java.
> 
> Will try to clarify the documentation a bit in this regard,
> Konrad
> 
>> On 19. Aug 2025, at 10:16, Konrad Windszus <[email protected]> wrote:
>> 
>> Hi,
>> It seems that all Job metadata is immediately removed from the repository 
>> once the job is finished successfully. Is there any way (through dedicated 
>> job properties or a queue configuration) to somehow defer this cleanup and 
>> also keep the completed jobs in the repo for a little while (similar to the 
>> failed ones)?
>> 
>> Thanks in advance,
>> Konrad
> 