In your scenario, do you have 10 workers spread across multiple machines, i.e. 
5 workers on one machine and 5 on another?  We can easily run 10 workers on a 
single machine, but we run into issues when we have them split across machines.

On Thursday, January 26, 2017 at 2:43:19 PM UTC-5, Niphlod wrote:
>
> I think I posted the relevant number of queries issued to the backend for 
> a given number of workers, but I use the scheduler daily on an mssql db 
> and it can easily handle at least 10 workers (with the default heartbeat). 
> Locking kicks in maybe once or twice a day, which means 1 or 2 out of 28800 
> occasions, which is a pretty damn low number :P
> Of course the backend *should* be able to sustain the concurrency, but even 
> on a minimal server with very low specs, 6 or 7 workers should pose no 
> threat at all. 
> For 5 workers, all that is needed is a backend able to handle 240 
> transactions per minute!
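>
> As a back-of-the-envelope check (a sketch assuming the default 3-second 
> heartbeat and a handful of queries per heartbeat - both assumptions, not 
> measured on your setup):
>
>     # rough arithmetic behind the numbers above
>     heartbeat = 3                                  # seconds, assumed default
>     beats_per_day = 24 * 60 * 60 // heartbeat      # 28800 -> the "occasions"
>     beats_per_minute = 60 // heartbeat             # 20 per worker
>     workers = 5
>     queries_per_beat = 2.4                         # assumed: a few queries per heartbeat
>     print(beats_per_day)                                   # 28800
>     print(workers * beats_per_minute * queries_per_beat)   # ~240 per minute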
>
> On Thursday, January 26, 2017 at 7:47:51 PM UTC+1, Dave S wrote:
>>
>> On Thursday, January 26, 2017 at 9:45:20 AM UTC-8, Jason Solack wrote:
>>>
>>> Using MSSQL; the code itself is in gluon/scheduler.py - this happens 
>>> with no interaction from the app.
>>>
>>>
>> How do you instantiate the Scheduler?
>>
>> Is the mssql engine on the same machine as any of the web2py nodes?  Are 
>> there non-web2py connections to it?
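>>
>> For reference, a typical instantiation sits in a model file and looks 
>> something like the sketch below (the file name, heartbeat and group name 
>> are placeholders, not taken from your setup):
>>
>>     # models/scheduler.py (hypothetical file name) -- minimal sketch
>>     from gluon.scheduler import Scheduler
>>     # 'db' is the app's DAL connection defined earlier in the models
>>     scheduler = Scheduler(db,
>>                           heartbeat=3,           # the default, in seconds
>>                           group_names=['main'])  # placeholder group name
>>
>> with workers started on each node with something like: python web2py.py -K appname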
>>
>> /dps
>>  
>>
>>> On Thursday, January 26, 2017 at 12:03:41 PM UTC-5, Dave S wrote:
>>>>
>>>>
>>>>
>>>> On Thursday, January 26, 2017 at 8:44:25 AM UTC-8, Jason Solack wrote:
>>>>>
>>>>> So the issue is: we run 6 workers on one machine and it works.  If we 
>>>>> run 3 workers on each of 2 machines we get deadlocks.  That is no 
>>>>> exaggeration - 6 records in our worker table and we're getting deadlocks.
>>>>>
>>>>>
>>>> Which DB are you using?  Can you show your relevant code?
>>>>
>>>> /dps
>>>>  
>>>>
>>>>> On Wednesday, January 25, 2017 at 3:05:37 AM UTC-5, Niphlod wrote:
>>>>>>
>>>>>> you *should* have a different db for each environment (see the sketch 
>>>>>> below). Every scheduler tied to the same db will process incoming tasks, 
>>>>>> no matter which app actually pushes them.
>>>>>> This is good if you want a single scheduler (which can be composed of 
>>>>>> several workers) serving many apps, but *generally* you don't want to 
>>>>>> *merge* prod and beta apps.
>>>>>>
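>>>>>> As a sketch of what "a different db for each environment" can look like 
>>>>>> (the URIs and names below are made-up placeholders): point each 
>>>>>> environment's scheduler at its own database, e.g.
>>>>>>
>>>>>>     # in the prod app's models (DAL is already available in the web2py environment)
>>>>>>     from gluon.scheduler import Scheduler
>>>>>>     db_sched = DAL('mssql://user:pass@host/prod_scheduler')  # placeholder URI
>>>>>>     scheduler = Scheduler(db_sched, heartbeat=3)
>>>>>>
>>>>>>     # in the beta app's models: same code, different database
>>>>>>     # db_sched = DAL('mssql://user:pass@host/beta_scheduler')
>>>>>>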
>>>>>> The is_ticker bit is fine: only one of the workers tied to a db is 
>>>>>> eligible to be the ticker, which is the one process that manages 
>>>>>> assigning tasks (to itself AND to other available workers).
>>>>>> Locking, once in a while, can happen and is self-healed. Continuous 
>>>>>> locking is not good: either you have too many workers tied to the db OR 
>>>>>> your db can't handle concurrency at the rate it needs to. 
>>>>>> SQLite can handle at most 2 or 3 workers. All the other "solid" 
>>>>>> backends can manage up to 10, 15 at most.
>>>>>> If you wanna go higher, you need to turn to the redis-backed 
>>>>>> scheduler (rough usage sketched below).
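>>>>>>
>>>>>> If it helps, the redis-backed scheduler lives in gluon.contrib and is 
>>>>>> used roughly like this (from memory; connection details are placeholders, 
>>>>>> check gluon/contrib/redis_scheduler.py for the exact options):
>>>>>>
>>>>>>     from gluon.contrib.redis_utils import RConn
>>>>>>     from gluon.contrib.redis_scheduler import RScheduler
>>>>>>     rconn = RConn()  # assumed to default to localhost:6379
>>>>>>     scheduler = RScheduler(db, rconn, group_names=['main'])  # placeholder group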
>>>>>>
>>>>>> On Tuesday, January 24, 2017 at 10:59:31 PM UTC+1, Jason Solack wrote:
>>>>>>>
>>>>>>> Hello all, 
>>>>>>>
>>>>>>> I'm having a recurring issue with the scheduler.  We are currently 
>>>>>>> running multiple environments (production, beta) with several nodes in 
>>>>>>> each environment.  If we have scheduler services running on all nodes in 
>>>>>>> each environment we get a lot of deadlock errors.  If we drop each 
>>>>>>> environment down to one node we get no deadlock errors.  I am also 
>>>>>>> noticing that the "is_ticker" field in the worker table is set for only 
>>>>>>> one worker across all the workers (spanning environments).  Is that the 
>>>>>>> expected behavior?  I don't see any documentation about the ticker 
>>>>>>> field, so I'm not sure what to expect from it.
>>>>>>>
>>>>>>> Also, are there any best practices for running the scheduler in an 
>>>>>>> environment like the one I've described?  
>>>>>>>
>>>>>>> Thanks in advance
>>>>>>>
>>>>>>> Jason
>>>>>>>
>>>>>>
