On Tuesday, February 6, 2018 at 12:10:18 PM UTC-8, Andrea Fae' wrote:
>
> I created it like in the book. It's a model. I'm scheduling from the database
> administrative interface. Two days ago it worked perfectly! I don't know!
>
>
I've never tried that. The production system I run the scheduler on
I created it like in the book. It's a model. I'm scheduling from the database
administrative interface. Two days ago it worked perfectly! I don't know!
On 06 Feb 2018 9:05 PM, "Dave S" wrote:
On Tuesday, February 6, 2018 at 7:58:15 AM UTC-8, Andrea Fae' wrote:
>
> Hello, days ago I started to use the scheduler and all was working very well.
> I could see db.scheduler_run.task_id rows in the db with all the results
> and console output.
>
> Now the scheduler works very well but I don't see
Hmm, SQLite locking then?
On Monday, June 15, 2015 at 6:13:04 PM UTC+2, Michael Gheith wrote:
I am querying a remote PostgreSQL database, and then inserting certain
values into my local SQLite database, that's all...
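Niphlod's SQLite-locking hypothesis is easy to reproduce with nothing but the stdlib: when the app and the scheduler workers share one SQLite file, a writer that arrives while another connection holds a write transaction gets "database is locked". A minimal sketch (the table name is only illustrative, not the real scheduler schema):

```python
import os
import sqlite3
import tempfile

# One SQLite file shared by "the app" and "the scheduler worker".
path = os.path.join(tempfile.mkdtemp(), "storage.sqlite")
app = sqlite3.connect(path, timeout=0.1, isolation_level=None)
worker = sqlite3.connect(path, timeout=0.1, isolation_level=None)
app.execute("CREATE TABLE t (id INTEGER)")  # stand-in table

app.execute("BEGIN IMMEDIATE")          # the app opens a write transaction...
app.execute("INSERT INTO t VALUES (1)")

try:
    worker.execute("BEGIN IMMEDIATE")   # ...so the worker's write times out
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

app.execute("COMMIT")                   # releasing the lock unblocks writers
print(locked)                           # True
```

Under load this shows up as tasks that appear to run but never record their results; giving the scheduler its own database file (or a server database) avoids the contention.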
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list
It seems that the function that hits the timeout leaves zombie processes
behind. What are you doing in that function?
On Friday, June 12, 2015 at 11:56:09 PM UTC+2, Michael Gheith wrote:
Hmmm. Looking in the database, I have 21 TIMEOUTs, which corresponds to
the 21 workers, all with the same
Inspect the log to see why another worker gets started. Each worker started
can account for AT MOST two processes: the worker itself and the process that
actually executes the task.
On Friday, June 12, 2015 at 10:36:35 PM UTC+2, Michael Gheith wrote:
I have an application that uses the
Hmmm. Looking in the database, I have 21 TIMEOUTs, which corresponds to the
21 workers, all with the same process ID of 10978... a red herring?
$ pstree -p 10978
python(10978)─┬─python(1719)
├─python(2063)
├─python(2977)
├─python(5383)
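The zombie behaviour Niphlod describes can be seen with a short, Linux-only stdlib sketch (`sleep` stands in for a task's subprocess; nothing here is web2py code): a child that exits while its parent never calls wait() stays in the process table in state `Z`, exactly the kind of entry that hangs off the worker in a pstree listing like the one above.

```python
import subprocess
import time

child = subprocess.Popen(["sleep", "0.1"])   # stand-in for a task subprocess
time.sleep(0.5)                              # child has exited by now...
with open("/proc/%d/stat" % child.pid) as f:
    # /proc/<pid>/stat is "pid (comm) state ..."; grab the state field
    state = f.read().rsplit(")", 1)[1].split()[0]
print(state)                                 # "Z": exited but not yet reaped
child.wait()                                 # reaping removes the zombie
```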
On Thursday, January 16, 2014 5:13:59 AM UTC+1, Jayadevan M wrote:
A couple of questions about web2py scheduler -
1) I have configured web2py with nginx and uwsgi on CentOS. These
services will automatically restart if the server reboots. How can I ensure
that the web2py scheduler also
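One way to make the workers survive a reboot (a sketch under assumptions not stated in the thread: systemd on a newer CentOS, web2py unpacked in /opt/web2py, an app named myapp) is a dedicated service unit that runs web2py's `-K` scheduler mode:

```ini
# /etc/systemd/system/web2py-scheduler.service (hypothetical name and paths)
[Unit]
Description=web2py scheduler workers
After=network.target

[Service]
# -K starts scheduler workers for the listed application
ExecStart=/usr/bin/python /opt/web2py/web2py.py -K myapp
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable web2py-scheduler` so it starts at boot; on init systems without systemd, a cron `@reboot` line running the same command is a rough equivalent.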
On Wed, Oct 2, 2013 at 4:13 PM, Niphlod niph...@gmail.com wrote:
On Wednesday, October 2, 2013 3:15:16 PM UTC+2, Marin Pranjić wrote:
Hi,
I have a task queue that runs in the background. I want to switch to
scheduler because I need more workers.
Queued tasks are long running
On Friday, October 4, 2013 11:31:33 AM UTC+2, Marin Pranjić wrote:
On Wed, Oct 2, 2013 at 4:13 PM, Niphlod nip...@gmail.com wrote:
Dear All,
I need help sending an SMS with an attachment, but I haven't succeeded. Please
help me.
*Pankaj Pathak*
*Software Developer *
*Shrideva Technomech Pvt. Ltd.*
On Fri, Oct 4, 2013 at 9:24 PM, Niphlod niph...@gmail.com wrote:
On Friday, October 4, 2013 11:31:33 AM UTC+2, Marin Pranjić wrote:
Hi Niphlod,
If I could add to the questions (I'm having some success with the scheduler
but there are a few gaps in my understanding):
What process removes the rows from the scheduler_worker table?
Does it make any difference whether you kill a worker by updating its status or
by just Ctrl-C (or closing
On Thursday, October 3, 2013 8:19:48 AM UTC+2, Andrew W wrote:
On Wednesday, October 2, 2013 3:15:16 PM UTC+2, Marin Pranjić wrote:
Hi,
I have a task queue that runs in the background. I want to switch to
scheduler because I need more workers.
Queued tasks are long running (video transcoding, 10-20 minutes each).
1. If I terminate (Ctrl+C)
Seems perfectly legit in the snippet; maybe there's something else in your
complete code. Can you inspect the errors returned by queue_task (i.e.
print result_ID and see what it contains)?
On Thursday, March 7, 2013 11:34:23 AM UTC+1, Tim Richardson wrote:
I have an app using the
I believe I just stumbled on the same problem last night. I was upgrading
an app with an old scheduler (1.99.2) to current (2.4.2). I temporarily
solved it by continuing to insert the db record manually, as I needed to
finish this quickly and had no time to troubleshoot... I'll be able to do
On Thursday, 7 March 2013 21:49:12 UTC+11, Niphlod wrote:
Seems perfectly legit in the snippet; maybe there's something else in your
complete code. Can you inspect the errors returned by queue_task (i.e.
print result_ID and see what it contains)?
My bad. Under the old way
pargs and pvars are automatically json-encoded, but they need to be real
Python objects. If you want to pass strings that are already encoded, use
args='...' and vars='...', but the whole point of queue_task is to avoid
that repetition (e.g. json.dumps(this), json.dumps(that)).
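The double-encoding trap Niphlod describes can be shown with the stdlib alone (plain json here, standing in for the encoding queue_task performs): pass a real Python object and one round of encoding survives the round-trip; pass a pre-encoded string and the worker decodes a string instead of a list.

```python
import json

task_args = [42, "video.mp4"]

# queue_task-style handling: encode a real Python object once.
stored = json.dumps(task_args)
assert json.loads(stored) == task_args        # round-trips to the same list

# The mistake from this thread: handing over an already-encoded string
# gets it encoded a second time, so decoding yields a str, not a list.
double = json.dumps(json.dumps(task_args))
decoded = json.loads(double)
print(type(decoded).__name__)                 # "str", not "list"
```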
On Thursday, March 7, 2013
Right, my vars dicts seemed a bit odd when I looked at them; now I know why
;) The same thing bit me, apparently. Thanks for reporting back!
Regards,
Ales