[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-11-02 Thread Niphlod
Ahem... until "it makes sense", any modification (and any discussion about 
it) doesn't really help anyone ^_^.
The fact that you have 4 workers and a congestion problem hints that your db 
is on the lower side of the specs needed for a normal server. These kinds of 
issues start to show with 40-50 workers and the database on a USB drive, not 
on a production server (even one built 10 years ago).

On Tuesday, November 1, 2016 at 2:42:32 PM UTC+1, Erwn Ltmann wrote:
>
> Hi Niphlod,
>
> you are right: I added an extra database select to get the list of dead 
> workers.
>
> Usually I have four workers, for example. They are static and shouldn't 
> terminate often. In that case the database is queried only once to get the 
> list of dead workers, and that list is almost always empty, so there is 
> nothing to do. The inner part of my condition is reached so rarely that I 
> can ignore it in my runtime estimate. In the original code the dead-worker 
> check is always issued twice (the update and the delete). My suggestion 
> brings that down to 1 statement instead of 2 in the common case.
>
> Anyway, running the workers with my suggested extra condition eliminated 
> the deadlock cases. It works very well thanks to the extra condition. I am 
> happy :)
>
> Thx a lot.
> Erwn
>
> On Monday, October 31, 2016 at 3:02:42 PM UTC+1, Niphlod wrote:
>>
>> sorry, but it doesn't really make sense. 
>> You're executing the same command twice (the call enclosed in len() and 
>> the actual .delete() call), which is the counter-argument to relaxing a 
>> pressured database environment. 
>>
>> On Monday, October 31, 2016 at 2:04:24 PM UTC+1, Erwn Ltmann wrote:
>>>
>>> Hi,
>>>
>>> thank you for your reply.
>>>
>>> @Pierre: MariaDB (in my case) handles deadlocks automatically too. Good 
>>> to know, I don't have to worry about that.
>>>
>>> @Niphlod: I tried to beef up my database host. No effect. Another approach 
>>> is to prevent such situations from arising in the first place. I did that 
>>> with an extra condition in your send_heartbeat worker function:
>>>
>>> if len(db.executesql(dead_workers_name)) > 0:
>>>     db(
>>>         (st.assigned_worker_name.belongs(dead_workers_name)) &
>>>         (st.status == RUNNING)
>>>     ).update(assigned_worker_name='', status=QUEUED)
>>>     dead_workers.delete()

>>>
>>>
>>>
>>> Erwn
>>>
>>



[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-11-01 Thread Erwn Ltmann
Hi Niphlod,

you are right: I added an extra database select to get the list of dead 
workers.

Usually I have four workers, for example. They are static and shouldn't 
terminate often. In that case the database is queried only once to get the 
list of dead workers, and that list is almost always empty, so there is 
nothing to do. The inner part of my condition is reached so rarely that I 
can ignore it in my runtime estimate. In the original code the dead-worker 
check is always issued twice (the update and the delete). My suggestion 
brings that down to 1 statement instead of 2 in the common case.
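
To make the counting concrete, here is the guarded step again as a sketch, 
with comments mapping each statement to the 1-versus-2 argument (db, st, 
dead_workers, dead_workers_name, RUNNING and QUEUED are assumed to be the 
names already in scope in send_heartbeat; this only restates the snippet 
quoted further down, it is not new scheduler code):

    dead = db.executesql(dead_workers_name)   # 1 read per heartbeat cycle
    if dead:                                  # rarely true with four long-lived workers
        db(
            (st.assigned_worker_name.belongs(dead_workers_name)) &
            (st.status == RUNNING)
        ).update(assigned_worker_name='', status=QUEUED)   # write no. 1, only when needed
        dead_workers.delete()                               # write no. 2, only when needed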

Anyway, running the workers with my suggested extra condition eliminated 
the deadlock cases. It works very well thanks to the extra condition. I am 
happy :)

Thx a lot.
Erwn

On Monday, October 31, 2016 at 3:02:42 PM UTC+1, Niphlod wrote:
>
> sorry, but it doesn't really make sense. 
> You're executing the same command twice (the call enclosed in len() and 
> the actual .delete() call), which is the counter-argument to relaxing a 
> pressured database environment. 
>
> On Monday, October 31, 2016 at 2:04:24 PM UTC+1, Erwn Ltmann wrote:
>>
>> Hi,
>>
>> thank you for your reply.
>>
>> @Pierre: MariaDB (in my case) handles deadlocks automatically too. Good 
>> to know, I don't have to worry about that.
>>
>> @Niphlod: I tried to beef up my database host. No effect. Another approach 
>> is to prevent such situations from arising in the first place. I did that 
>> with an extra condition in your send_heartbeat worker function:
>>
>> if len(db.executesql(dead_workers_name)) > 0:
>>     db(
>>         (st.assigned_worker_name.belongs(dead_workers_name)) &
>>         (st.status == RUNNING)
>>     ).update(assigned_worker_name='', status=QUEUED)
>>     dead_workers.delete()
>>>
>>
>>
>>
>> Erwn
>>
>



[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-10-31 Thread Niphlod
sorry, but it doesn't really make sense. 
You're executing the same command twice (the call enclosed in len() and the 
actual .delete() call), which is the counter-argument to relaxing a 
pressured database environment. 
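
As an illustration of the trade-off being debated, a minimal sketch that 
keeps the guard but issues the dead-workers query only once, reusing the 
result in memory (dead_workers is assumed to be the DAL Set and sw, st, 
RUNNING, QUEUED the names already in scope in send_heartbeat; this is not 
the actual gluon/scheduler.py code):

    # fetch the dead worker names once, then reuse the in-memory list
    names = [r.worker_name for r in dead_workers.select(sw.worker_name)]
    if names:                                  # empty in the common case: nothing else runs
        db(
            (st.assigned_worker_name.belongs(names)) &
            (st.status == RUNNING)
        ).update(assigned_worker_name='', status=QUEUED)
        dead_workers.delete()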

On Monday, October 31, 2016 at 2:04:24 PM UTC+1, Erwn Ltmann wrote:
>
> Hi,
>
> thank you for your reply.
>
> @Pierre: MariaDB (in my case) handles deadlocks automatically too. Good 
> to know, I don't have to worry about that.
>
> @Niphlod: I tried to beef up my database host. No effect. Another approach 
> is to prevent such situations from arising in the first place. I did that 
> with an extra condition in your send_heartbeat worker function:
>
> if len(db.executesql(dead_workers_name)) > 0:
>     db(
>         (st.assigned_worker_name.belongs(dead_workers_name)) &
>         (st.status == RUNNING)
>     ).update(assigned_worker_name='', status=QUEUED)
>     dead_workers.delete()
>>
>
>
>
> Erwn
>



[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-10-31 Thread Erwn Ltmann
Hi,

thank you for your reply.

@Pierre: MariaDB (in my case) handles deadlocks automatically too. Good to 
know, I don't have to worry about that.

@Niphlod: I tried to beef up my database host. No effect. Another approach 
is to prevent such situations from arising in the first place. I did that 
with an extra condition in your send_heartbeat worker function:

if len(db.executesql(dead_workers_name)) > 0:
    db(
        (st.assigned_worker_name.belongs(dead_workers_name)) &
        (st.status == RUNNING)
    ).update(assigned_worker_name='', status=QUEUED)
    dead_workers.delete()
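
For context, the guard relies on dead_workers_name being the raw SQL of the 
nested select the scheduler already builds. A standalone, heavily simplified 
sketch of that assumption (the table definitions and the 9-second expiration 
window are made up for illustration; this is not a copy of gluon/scheduler.py):

    import datetime
    from pydal import DAL, Field   # inside a web2py app these come from gluon

    db = DAL('sqlite:memory')
    sw = db.define_table('scheduler_worker',
                         Field('worker_name'),
                         Field('last_heartbeat', 'datetime'),
                         Field('status'))
    st = db.define_table('scheduler_task',
                         Field('assigned_worker_name'),
                         Field('status'))
    ACTIVE, RUNNING, QUEUED = 'ACTIVE', 'RUNNING', 'QUEUED'   # scheduler status strings

    expiration = datetime.datetime.utcnow() - datetime.timedelta(seconds=9)
    dead_workers = db((sw.last_heartbeat < expiration) & (sw.status == ACTIVE))
    dead_workers_name = dead_workers._select(sw.worker_name)   # raw SQL, not executed yet

    # The guard above runs this SELECT once via db.executesql(dead_workers_name);
    # only when it returns rows do the UPDATE and DELETE hit the database.
    if len(db.executesql(dead_workers_name)) > 0:
        db((st.assigned_worker_name.belongs(dead_workers_name)) &
           (st.status == RUNNING)).update(assigned_worker_name='', status=QUEUED)
        dead_workers.delete()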



Erwn



[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-10-27 Thread Niphlod
the only thing you can do is either beef up the database instance (fewer 
deadlocks because queries execute faster) or lower the db pressure (fewer 
workers, higher heartbeat).
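
A minimal sketch of those two knobs in a web2py app; scheduler.py in the 
models folder, db being the app's DAL connection, and heartbeat=10 just an 
example value larger than the default:

    # models/scheduler.py (sketch): a higher heartbeat means fewer scheduler
    # queries per second, which lowers the pressure on the backend.
    from gluon.scheduler import Scheduler

    scheduler = Scheduler(db, heartbeat=10)

Fewer worker processes help in the same way: workers are the processes 
started with something like "python web2py.py -K myapp", so launching fewer 
of them means fewer concurrent heartbeats against the scheduler tables.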



[web2py] Re: DEADLOCKs between two or more scheduler worker

2016-10-27 Thread Pierre
I have had deadlocks too, but PostgreSQL knows how to resolve them, so I 
don't need to worry about it.

take a look here:

https://www.postgresql.org/docs/9.1/static/explicit-locking.html

/*---excerpt--*/

13.3.3. Deadlocks 

The use of explicit locking can increase the likelihood of *deadlocks*, 
wherein two (or more) transactions each hold locks that the other wants. 
For example, if transaction 1 acquires an exclusive lock on table A and 
then tries to acquire an exclusive lock on table B, while transaction 2 has 
already exclusive-locked table B and now wants an exclusive lock on table 
A, then neither one can proceed. *PostgreSQL automatically detects deadlock 
situations and resolves them by aborting one of the transactions involved, 
allowing the other(s) to complete*. (Exactly which transaction will be 
aborted is difficult to predict and should not be relied upon.)


/*---*/
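
Since the aborted transaction just errors out on the client side, the 
application can roll back and retry it. A minimal, generic sketch with 
web2py's DAL (my own illustration, not from the scheduler or the PostgreSQL 
docs; run_with_retry and its arguments are made-up names):

    import time

    def run_with_retry(db, work, attempts=3, delay=0.5):
        """Run work() in a transaction, retrying if the backend aborts it
        (e.g. when it is chosen as the deadlock victim)."""
        for _ in range(attempts):
            try:
                work()            # the DAL statements that may deadlock
                db.commit()
                return True
            except Exception:     # driver-specific error raised for the aborted victim
                db.rollback()     # discard the aborted transaction before retrying
                time.sleep(delay) # brief back-off so the competing transaction can finish
        return False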

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.