Unless your tasks have a very narrow period they shouldn't time out on 
SQLite (remember to commit if you act on the tables within a task). 
W2P_TVSeries uses 500-600 tasks with a "watcher" task coordinating them, 
and it doesn't block.
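For instance, a minimal sketch of a task that commits its own work so 
SQLite can release the write lock (the function and table names here are 
made up for illustration):

    from gluon.scheduler import Scheduler

    def process_rows():
        # hypothetical table: mark pending rows as processed
        db(db.mytable.processed == False).update(processed=True)
        db.commit()  # commit inside the task, don't rely on the worker
        return 'done'

    scheduler = Scheduler(db, dict(process_rows=process_rows))
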
"Upgrading" to Mysql or PostgreSQL will fix the issue if the issue is 
dependant on database locking. It's fairly easy to install both, so it's 
surely worth the effort.
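Switching is just a matter of changing the DAL connection string in 
models/db.py (credentials and database name below are placeholders):

    # db = DAL('sqlite://storage.sqlite')
    db = DAL('mysql://user:password@localhost/mydb')
    # or
    db = DAL('postgres://user:password@localhost/mydb')
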
Anyway, I'm more and more curious to see the logs (and possibly a packed 
app to reproduce the issue with) ^_^

On Thursday, December 13, 2012 9:16:27 PM UTC+1, Mike D wrote:
>
> I have only this one task. I am certainly going to change retry_failed 
> and hopefully that will be a sufficient solution. I actually tried your 
> other solution a while ago, and that "monitoring" task ended up in a 
> TIMEOUT state as well. Sad face. Any idea on that one?
>
> Do you think that switching database providers would help or is it not 
> worth the effort?
>
> Thanks again for your help.
>
> On Thursday, December 13, 2012 11:57:01 AM UTC-8, Niphlod wrote:
>>
>> All my statements were made under the assumption that the scheduler_run 
>> table showed absolutely no trace of the TIMEOUTted task.
>>
>> In any case, running the scheduler on SQLite is "safe" only with 1 or 2 
>> workers and not with a zillion tasks. Concurrency was never a friend of 
>> SQLite. 
>>
>> BTW, at this point enable the scheduler's debug logging: it can produce 
>> quite a long log if you keep it on for a day or two, but if you can't 
>> reproduce the error at will, it's the only hope of seeing what is really 
>> going on.
>>
>> PS: confirmed: TIMEOUTted tasks do get requeued when retry_failed != 0.
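>>
>> For instance, a minimal sketch of queueing a task that way (the task 
>> name and the period/timeout values here are made up for illustration):
>>
>>     scheduler.queue_task('process_rows',
>>                          period=60,        # seconds between runs
>>                          timeout=300,      # seconds before a run is marked TIMEOUT
>>                          repeats=0,        # 0 = repeat forever
>>                          retry_failed=-1)  # -1 = retry FAILED/TIMEOUT runs forever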
>>
>> PS2: don't underestimate the flexibility of the scheduler. Even if 
>> retry_failed were not available, you could always use another task that 
>> checks the scheduler_task table for the "original" task and requeues it :P
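>>
>> Roughly, such a watchdog could look like this (untested sketch; 
>> 'original_task' stands for whatever task_name you queued the original 
>> with):
>>
>>     def watchdog():
>>         st = db.scheduler_task
>>         dead = db((st.task_name == 'original_task') &
>>                   (st.status.belongs(('FAILED', 'TIMEOUT', 'EXPIRED'))))
>>         dead.update(status='QUEUED')  # hand the task back to the workers
>>         db.commit()
>>         return 'checked'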
>>
