I don't find this to be a common scenario, but anyway my idea was to put the 
insertion/update of the task in cron at @reboot, wrap that code in a 
try/except/pass block, and let the maintenance task do the honours of 
checking that the other 3 functions are ok. Maintenance will run with a 
period of, let's say, 40 secs, so no problem there.
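
A rough sketch of what that maintenance task could look like (the function 
names, the status check and the insert parameters below are just placeholders 
for your app, not a definitive implementation):

def maintenance():
    # re-queue any of the looping functions whose task row went missing;
    # 'func1', 'func2', 'func3' are illustrative names for your 3 functions
    for name in ('func1', 'func2', 'func3'):
        active = db(
            (db.scheduler_task.function_name == name) &
            (db.scheduler_task.status.belongs(('QUEUED', 'ASSIGNED', 'RUNNING')))
        ).count()
        if not active:
            # repeats=0 means "repeat forever" for the scheduler
            db.scheduler_task.insert(function_name=name, repeats=0)
    db.commit()
    return 'ok'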

BTW, the scheduler may need to be run outside web2py -K, so there is no 
"without locks" solution; we'd also be forced to add to that call all the 
scheduler_task parameters (except for repeats=0) to maximize flexibility.

Maybe adding a unique column to scheduler_task that gets evaluated by 
default to a uuid would solve the problem... e.g. 
db.scheduler_task.insert(function_name='abcd', repeats=0, ....., 
unique_column='abcd') would close the deal for your requirements. 
You'd have to wrap that in a try/except block, for obvious reasons.
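
Something along these lines, just to sketch the idea (unique_column is 
hypothetical, it doesn't exist in scheduler_task today):

try:
    # unique_column is the proposed unique field, defaulting to a uuid;
    # a second insert with the same value fails at the db level, so the
    # task can never be queued twice
    db.scheduler_task.insert(function_name='abcd',
                             repeats=0,
                             unique_column='abcd')
    db.commit()
except Exception:
    db.rollback()  # already queued, nothing to do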

As for the parameters passed to the scheduler: if you have 
Scheduler(parameters) in your models, they are correctly evaluated.
For greater flexibility you'd have to hack something... but it's not 
reeeeeally difficult. 
For w2p_tvseries I needed to define a Scheduler() in models and also run 
the same Scheduler with other parameters for a cron task executed outside 
web2py, like a "normal cron". 
Check 
https://github.com/niphlod/w2p_tvseries/blob/master/private/w2p_tvseries.py#L40
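
The general shape of that hack, as a sketch (not a verbatim copy of 
w2p_tvseries; the parameter values and the 'cron' group name are just 
examples):

# in models: the "normal" scheduler, picked up by web2py -K myapp
from gluon.scheduler import Scheduler
scheduler = Scheduler(db, heartbeat=3)

# in a script run like a normal cron, e.g.
#   python web2py.py -S myapp -M -R applications/myapp/private/cron_worker.py
# you can build a second Scheduler on the same db with different parameters
# and drive it yourself
cron_scheduler = Scheduler(db, heartbeat=1, group_names=['cron'])
cron_scheduler.loop()  # processes queued tasks just like a -K worker would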


  

On Wednesday, June 27, 2012 8:52:33 PM UTC+2, Michael Toomim wrote:
>
> I'm totally interested in solutions! It's a big problem I need to solve.
>
> The recurring maintenance task does not fix the initialization 
> problem—because now you need to initialize the recurring maintenance task. 
> This results in the same race condition. It does fine with the 40,000 
> records problem. But it's just a lot of complexity we're introducing to 
> solve a simple problem (looping tasks) with a complex solution (scheduler).
>
> I'd still love to find a clean way to do this. Maybe we should extend the 
> scheduler like this:
>   • Add a daemon_tasks parameter when you call it from models 
> "Scheduler(db, daemon_tasks=[func1, func2])"
>   • When the scheduler boots up, it handles locks and everything and ensures 
> there are two tasks that just call these functions
>   • Then it dispatches the worker processes as usual
>
> ...ah, shoot, looking in widget.py, it looks like the code that starts 
> schedulers doesn't have access to the parameters passed to Scheduler() 
> because models haven't been run yet. Hmph.
>
> On Wednesday, June 27, 2012 12:56:52 AM UTC-7, Niphlod wrote:
>>
>> I don't know if continuing to give you fixes and alternative 
>> implementations should be considered harassment at this point; stop me if 
>> you're not interested in those.
>>
>> There is a very biiig problem in your statements: if your vision is 
>>
>> Woohoo, this scheduler will *automatically handle locks*—so I don't need 
>> to worry about stray background processes running in parallel 
>> automatically, and it will *automatically start/stop the processes* with 
>> the web2py server with -K, which makes it much easier to deploy the code!
>>
>> then the scheduler is the right tool for you. It's your app that doesn't 
>> handle locks, because of the initialization code you put into models.
>>
>> At least 2 of your problems (initialization and 40,000 scheduler_run 
>> records) could be fixed by a "recurring maintenance" task that will do 
>> check_daemon() without advisory locks and prune the scheduler_run table.
>>
>> BTW: I'm pretty sure that when you say "scheduler should be terminated 
>> alongside web2py" you're not perfectly grasping how web development in 
>> production works. If you're using "standalone" versions, i.e. not mounted 
>> on a webserver, you can start your instances as web2py -a mypassword & 
>> web2py -K myapp, and I'm pretty sure both will shut down when hitting ctrl+c. 
>>
>
