ok, I think I got it ..... most of it is cumbersome and the fruit of multiple 
iterations (i.e. lots of trial and error). There's no way in hell to put 
up celery docs covering this particular use case in web2py.
Pleeeeeease, read it carefully: it may burn your house to the ground.
@Bruno: maybe the same thing can be applied to rq (I still have problems 
running that one with functions not defined in models, outside the web2py 
environment)
Chosen broker and result backend --> redis (absolutely no time to install 
rabbitmq, sorry)

1st issue: models are executed at every request, so a celery instance 
defined in models keeps instantiating new connections to Redis at every hit. 
Although redis is fast as hell, it just means wasting resources (the redis 
client uses a connection pool by default, but if the object is recycled and 
instantiated at every request the pool doesn't survive). So, the celery 
instance must be defined in a *module* and imported.
1st step, hence --> create modules/w2p_celery.py with

from celery import Celery

# single shared Celery app: one broker/backend connection pool per process
mycelery = Celery('tasks',
                  broker='redis://localhost:6379/0',
                  backend='redis://localhost:6379/0')
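
(just to make the module-vs-model point concrete, and assuming nothing beyond 
plain Python: modules are imported once per process and then cached, so every 
request importing mycelery gets back the very same app, hence the very same 
connection pool)

# sketch: both names end up bound to the one and only app instance
from w2p_celery import mycelery as a
from w2p_celery import mycelery as b
assert a is b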

2nd issue, unresolved: by default the task decorator computes the task name 
from the module the function is defined in, and in web2py models that is the 
restricted execution environment (for newcomers to web2py: __restricted__), 
so the generated names are useless.
Celery's task decorator takes a name parameter, so it's quite easy to work 
around by naming each task explicitly (even so, I'd really like to be able 
to customize that automatically. Suggestions ?)
However, if you put this in models/thecelerymodel.py
from w2p_celery import mycelery
celery = mycelery
@celery.task(name='tasks.gen_url')
def gen_url(x):
    return A(x, _href=URL('rule_the_world'))

@celery.task(name='tasks.add_user')
def add_user():
    try:
        db.auth_user.insert(first_name='John')
        db.commit()
    except:
        db.rollback()

you'll have two tasks registered as 'tasks.gen_url' and 'tasks.add_user'
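
A quick way to double-check (my own sketch, not part of the recipe): the 
Celery app keeps a registry of everything that got decorated, so from a 
`python web2py.py -S yourappname -M` shell you can inspect it

from w2p_celery import mycelery
# filter out celery's own built-in tasks, keep only ours
print(sorted(n for n in mycelery.tasks if not n.startswith('celery.')))
# expected: ['tasks.add_user', 'tasks.gen_url']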

3rd step: f*******k the world. Having something runnable by a celery 
worker. This part was the absolute nightmare of the last two evenings. I 
never really pinned down the problem, but it seems the task "code" is not 
actually shipped around: the worker has to be able to import it on its own 
(or, my nightmares were in fact reality....)
Given that, let's assume that copying the tasks defined before into a 
separate file won't get you killed in action (i.e. yes, I too would like 
celery workers to figure out where to pick the tasks up from, but I just 
couldn't get there)
3rd step --> create a tasks.py in the same folder as web2py.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (I forgot: why is this apparently the only way to fix custom imports ?)
import gluon.widget
from gluon.shell import env
from gluon import current
from celery import Celery

def make_celery(app):
    celery = Celery('tasks',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        # rebuild the web2py environment (models included) on every task run,
        # so db, A, URL and friends are available inside the task body
        abstract = True
        def __call__(self, *args, **kwargs):
            _env = env(a=app, import_models=True)
            globals().update(_env)
            return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery

celery = make_celery('yourappname')  # be sure that you write this correctly

@celery.task(name='tasks.gen_url')
def gen_url(x):
    return A(x, _href=URL(x, 'rule_the_world'))

@celery.task(name='tasks.add_user')
def add_user():   #yes, for the love of your apps, wrap all db operations!
    try:
        db.auth_user.insert(first_name='miao')
        db.commit()
    except:
        db.rollback()
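
Side note, a sketch of what the ContextTask trick relies on (nothing here is 
required): gluon.shell.env() hands back the same dictionary of names you'd 
get in a `python web2py.py -S yourappname -M` shell, so after 
globals().update(_env) the task body sees db, A, URL and friends as if it 
were running inside web2py

# run from the web2py folder; 'yourappname' stands for your actual app
import gluon.widget
from gluon.shell import env
_env = env(a='yourappname', import_models=True)
print('URL' in _env, 'db' in _env)   # both True, provided your models define db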

now, this **seems** to work without issues, e.g. in a controller you can do 
the following without harm

def myfunction():
    result = add_user.delay()   # `async` is a reserved word in recent Pythons, avoid it
    return dict(result=result)

and start a celery worker, cd'ing into the folder that holds web2py.py and 
running
$ celery -A tasks worker
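
Once the worker has picked the task up you can fetch the outcome back from 
the redis backend. Something along these lines should do (a sketch: the 
check_result action and passing the id in request.args are my own choices, 
not part of the recipe; the id is the .id of the object returned by .delay())

def check_result():
    from celery.result import AsyncResult
    from w2p_celery import mycelery
    res = AsyncResult(request.args(0), app=mycelery)
    return dict(status=res.status,
                value=res.result if res.ready() else None)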

Again, this is not the recommended way nor the "documented somewhere" one. 
It is just the first **iteration** that didn't end in exceptions all over 
the place.
If someone wants to pick this up, go ask (solem) and get it 
corrected/revised; I'll be glad to have spent 6 hours for the web2py 
community :P

@PS on the robust-widely-adopted argument: celery vs web2py's scheduler is 
like comparing web2py to web.py. They have VERY DIFFERENT goals in mind, and 
celery is by far the most complete task queue solution out there (even 
compared with other programming languages' task queues). With that in mind 
(i.e. even Niphlod, when in need of a gigantic solution for out-of-band 
processes, chooses celery), celery broke backward compatibility a few times. 
Web2py's scheduler got new features at every iteration and only broke code 
back when it had no API whatsoever to retrieve task results (it has changed 
the db schema just once, to accommodate all engines' reserved keywords)
