10-4 - thanks

On Tue, Aug 14, 2012 at 7:48 PM, niphlod <niph...@gmail.com> wrote:

> Nope, that goes way beyond the scheduler's "responsibility". Prune all
> records, prune only completed, prune only failed, requeue timed-out ones,
> prune every day, every hour, etc., etc.... these are implementation details
> that belong to the application.
>
> We thought that, since it is all recorded and timestamped, it's just a matter of:
>
> timelimit = datetime.datetime.utcnow() - datetime.timedelta(days=15)
> db((db.scheduler_task.status == 'COMPLETED') &
> (db.scheduler_task.last_run_time < timelimit)).delete()
>
> that is actually not so hard (scheduler_run records should be pruned away
> automatically because they reference the deleted tasks)
>
> (I like to have a "maintenance" function fired off every now and then with
> these things in it.)
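>
> A minimal sketch of such a maintenance function (the name, the 15-day
> retention and the availability of db with the scheduler tables defined, as
> in the test app, are assumptions, not scheduler API):
>
> def maintenance(days_to_keep=15):
>     # prune COMPLETED tasks older than the retention window; the
>     # scheduler_run records referencing them should go away with them
>     import datetime
>     limit = datetime.datetime.utcnow() - datetime.timedelta(days=days_to_keep)
>     db((db.scheduler_task.status == 'COMPLETED') &
>        (db.scheduler_task.last_run_time < limit)).delete()
>     db.commit()  # commit explicitly in case this runs outside the scheduler
>
> and you can queue it as a recurring task like any other function.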
>
>
>
> 2012/8/15 Yarin <ykess...@gmail.com>
>
>> Niphlod - has there been any discussion about a param for clearing out old
>> records in the runs and tasks tables? Maybe a retain_results or
>> retain_completed value that specifies a period for which records will be
>> kept?
>>
>>
>> On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
>>
>>> Hello everybody, in the last month several changes were committed to the
>>> scheduler in order to improve it.
>>> Table schemas were changed to add some features that some users were
>>> missing.
>>> On the verge of releasing web2py v.2.0.0, and seeing that the scheduler's
>>> potential is often missed by regular web2py users, I created a test app
>>> with two main objectives: documenting the new scheduler and testing its
>>> features.
>>>
>>> The app is available on github (https://github.com/niphlod/w2p_scheduler_tests).
>>> All you need to do is download the trunk version of web2py, download the
>>> app and play with it.
>>>
>>> Current features:
>>> - one-time-only tasks
>>> - recurring tasks
>>> - possibility to schedule functions at a given time
>>> - possibility to schedule recurring tasks with a stop_time
>>> - can operate distributed among machines, given a database reachable by
>>> all workers
>>> - group_names to "divide" tasks among different workers
>>> - group_names can also influence the "percentage" of tasks assigned to
>>> similar workers
>>> - simple integration using modules for "embedded" tasks (i.e. you can
>>> use functions defined in modules directly in your app or have them
>>> processed in the background)
>>> - configurable heartbeat to reduce latency: with sane defaults and not
>>> too many tasks queued, a queued task normally waits no more than 5
>>> seconds before being executed
>>> - option to start it, process all available tasks and then die
>>> automatically
>>> - integrated tracebacks
>>> - monitorable, as state is saved in the db
>>> - integrated app environment if started as web2py.py -K
>>> - stop processes immediately (set them to "KILL")
>>> - stop processes gracefully (set them to "TERMINATE")
>>> - disable processes (set them to "DISABLED")
>>> - functions that don't return results do not generate a scheduler_run
>>> entry
>>> - added a discard_results parameter that skips storing results "no
>>> matter what"
>>> - added a uuid record to tasks to simplify checks for "unique" tasks
>>> - task_name is not required anymore
>>> - you can skip passing the functions to the scheduler instantiation:
>>> functions can be dynamically retrieved in the app's environment (see the
>>> sketch after this list)
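>>>
>>> A quick sketch of queueing with the new features (the function, group
>>> name, app name and values here are made up; field names follow the
>>> scheduler_task table used by the app):
>>>
>>> # in a model file, so the function is visible in the app's environment
>>> import datetime
>>> from gluon.scheduler import Scheduler
>>>
>>> def task_add(a, b):
>>>     return a + b
>>>
>>> mysched = Scheduler(db)  # no tasks dict needed anymore
>>>
>>> # queue task_add as a recurring task: run every 30 seconds, at most 10
>>> # times, stop anyway after one hour, handled only by 'mygroup' workers
>>> # (started with something like: python web2py.py -K myapp:mygroup)
>>> db.scheduler_task.insert(application_name='myapp',
>>>                          function_name='task_add',
>>>                          args='[3, 4]',
>>>                          period=30,
>>>                          repeats=10,
>>>                          stop_time=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
>>>                          group_name='mygroup')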
>>>
>>> So, your mission is:
>>> - test the scheduler with the app and familiarize yourself with it
>>> Secondary mission is:
>>> - report any bug you find here or on github
>>> (https://github.com/niphlod/w2p_scheduler_tests/issues)
>>> - propose new examples to be embedded in the app, or correct the current
>>> docs (English is not my mother tongue)
>>>
>>> Once approved, the docs will probably be embedded in the book
>>> (http://web2py.com/book)
>>>
>>> Feel free to propose features you'd like to see in the scheduler; I have
>>> some time to spend implementing them.
>>>
>>>
>>>
