[web2py] Re: Could this problem in production be related to web2py?

2019-06-07 Thread Tim Nyborg
You're exactly right - I'll probably wind up with two instances: one 
RAM-only for caching, and a persistent one for sessions.

[web2py] Re: Could this problem in production be related to web2py?

2019-06-07 Thread Lisandro
I'm not exactly sure how many sessions my app is handling, but these numbers 
will give you an idea:

 - My websites receive about 500k visits (sessions) on an average day.
 - The server handles about 2.5 million requests on an average day.
 - I use RedisSession(session_expiry=36000), that is, sessions handled by 
Redis expire after 10 hours.
 - I also use Redis to cache the final HTML of public pages for 5 minutes.
 - My Redis instance uses about 12 GB of RAM.
 - My Redis instance consumes only about 8% of CPU (that is, 8% of one 
single CPU; note that Redis is single-threaded).
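For a rough sense of scale, those daily figures translate to per-second rates as follows (a back-of-the-envelope sketch that assumes uniform traffic, which real sites never have):

```python
# Rough per-second rates derived from the daily figures above
SECONDS_PER_DAY = 86_400

visits_per_day = 500_000
requests_per_day = 2_500_000

visits_per_sec = visits_per_day / SECONDS_PER_DAY      # ~5.8 visits/s
requests_per_sec = requests_per_day / SECONDS_PER_DAY  # ~28.9 requests/s

# With session_expiry=36000 (10 hours), a uniform-traffic estimate of
# how many sessions are resident in Redis at any given moment:
concurrent_sessions = visits_per_day * 36_000 / SECONDS_PER_DAY

print(f"~{visits_per_sec:.1f} visits/s, ~{requests_per_sec:.1f} requests/s")
print(f"~{concurrent_sessions:,.0f} sessions resident in Redis")
```

Peaks will of course be several times the average, but it gives an idea of why the instance needs 12 GB.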


When you say "I'd want to ensure disk-persistence for them (but not for 
cached things like search results)", how do you plan to achieve that? I'm 
no expert, but I think the disk-persistence option in Redis is global to 
the instance. If you want persistence for sessions but not for other cached 
things, I think you will need two different instances of Redis.
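For anyone going that route, a sketch of what the two instances' configurations might look like (ports and values are hypothetical examples; the directives themselves are standard redis.conf options):

```
# sessions.conf - persistent instance for sessions (e.g. port 6379)
port 6379
appendonly yes
appendfsync everysec

# cache.conf - RAM-only instance for cached HTML (e.g. port 6380)
port 6380
save ""
appendonly no
maxmemory 2gb
maxmemory-policy allkeys-lru
```

Each instance is started with its own file (redis-server /path/to/sessions.conf), so sessions get durability while the cache stays purely in memory.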


[web2py] Re: Could this problem in production be related to web2py?

2019-06-07 Thread Tim Nyborg
Thanks for this.  Let me know if you find a resolution to the 'saving to 
disk' latency issue.  Redis sessions would be an improvement, but I'd want 
to ensure disk-persistence for them (but not for cached things like search 
results).  How many sessions are you storing, and how much RAM does it 
consume?

[web2py] Re: Could this problem in production be related to web2py?

2019-06-06 Thread Lisandro
If you're going to add Redis, let me add a couple of comments about my own 
experience:

 - Using Redis to store sessions (not only to cache) was a huge improvement 
in my case. I have public websites, some of them with a lot of traffic, so my 
app handles many sessions. I was using the database for handling sessions, 
but when I switched to Redis, the performance improvement was considerable.

 - Do some tests with the "with_lock" argument available in RedisCache and 
RedisSession (from gluon.contrib). In my specific case, with_lock=False 
works better, but of course this depends on each specific scenario.

 - A piece of advice: choose proper values for the "maxmemory" and 
"maxmemory-policy" options in the Redis configuration. The first sets the 
maximum amount of memory that Redis is allowed to use, and 
"maxmemory-policy" lets you choose how Redis evicts keys when it hits 
maxmemory: https://redis.io/topics/lru-cache.
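As a concrete illustration of that last point, in redis.conf (the values here are hypothetical; size the cap to your own RAM budget):

```
maxmemory 12gb
maxmemory-policy allkeys-lru
```

With allkeys-lru, any key can be evicted at the cap; if sessions share the instance with cached data, volatile-lru (which only evicts keys that have a TTL set) may be the safer choice.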


[web2py] Re: Could this problem in production be related to web2py?

2019-06-06 Thread Tim Nyborg
This is really good to know.  I have a similar architecture to yours, and am 
planning to add Redis to the stack soon.  Knowing what issues to be on the 
lookout for is very helpful.

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.

[web2py] Re: Could this problem in production be related to web2py?

2019-05-24 Thread Leonel Câmara
Thanks a lot for explaining what was happening.


[web2py] Re: Could this problem in production be related to web2py?

2019-05-24 Thread Lisandro
I've found the root cause of the issue: the culprit was Redis.

This is what was happening: Redis has a persistence option which stores the 
DB to disk at certain intervals. The configuration I had was the one that 
comes by default with Redis, which stores the DB every 15 minutes if at 
least 1 key changed, every 5 minutes if at least 10 keys changed, and every 
60 seconds if at least 10000 keys changed. My Redis instance was saving the 
DB to disk every minute, and the saving process was taking about 70 seconds. 
Apparently, during that time, many of the requests were hanging. What I did 
was simply disable the saving process (I can do that in my case because I 
don't need persistence).
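For reference, that default snapshot schedule corresponds to these redis.conf lines, and disabling it is a one-liner:

```
# Defaults: snapshot after 900s if 1 key changed, 300s if 10 changed,
# 60s if 10000 changed
save 900 1
save 300 10
save 60 10000

# Disable RDB snapshotting entirely:
save ""
```

The same change can be applied at runtime, without a restart, via redis-cli: CONFIG SET save "".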

I'm not sure why this happens. I know that Redis is single-threaded, but 
its documentation states that background saving runs in a separate child 
process that Redis forks. So I'm not sure how the process of saving the DB 
to disk makes the other Redis operations hang. But this is what was 
happening, and I can confirm that, after disabling the DB saving process, 
my application's response times have decreased to expected values; no more 
timeouts :)

I will continue to investigate this issue with Redis in the proper forum. 
I hope this helps anyone facing the same issue.

Thanks for the help!


[web2py] Re: Could this problem in production be related to web2py?

2019-05-13 Thread Lisandro
After doing a lot of reading about uWSGI, I've discovered that "uWSGI cores 
are not CPU cores" (this was confirmed by the unbit developers, the ones 
who wrote and maintain uWSGI). This makes me think that the issue I'm 
experiencing is due to a misconfiguration of uWSGI. But as I'm a developer 
and not a sysadmin, it's hard for me to figure out exactly which uWSGI 
options I should tweak.

I know this is outside the scope of this group, but I'll post my uWSGI app 
configuration anyway, in case someone still wants to help:

[uwsgi]
pythonpath = /var/www/medios/
mount = /=wsgihandler:application
master = true
workers = 40
cpu-affinity = 3
lazy-apps = true
harakiri = 60
reload-mercy = 8
max-requests = 4000
no-orphans = true
vacuum = true
buffer-size = 32768
disable-logging = true
ignore-sigpipe = true
ignore-write-errors = true
listen = 65535
disable-write-exception = true


Just as a reminder, this is running on a machine with 16 CPUs.
Maybe I should *enable-threads*, set the *processes* option, and maybe 
tweak *cpu-affinity*.
My application uses Redis for caching, so I think I can enable threads 
safely.
What do you think?
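For what it's worth, the tweaks being considered might look like this in the [uwsgi] section above (a sketch only; the worker and thread counts are guesses to be load-tested, not recommendations):

```
[uwsgi]
# ... existing options from above, plus:
enable-threads = true
# 'processes' is an alias of 'workers'; e.g. 2 per CPU on a 16-CPU box
processes = 32
threads = 2
# pin each worker to a single core instead of a 3-core group
cpu-affinity = 1
```

With Redis handling caching there is no in-process shared cache to protect, which is why enabling threads should be safe here.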


El jueves, 9 de mayo de 2019, 21:10:57 (UTC-3), Lisandro escribió:
>
> I've checked my app's code once again and I can confirm that it doesn't 
> create threads. It only uses subprocess.cal() within functions that are 
> called in the scheduler environment, I understand that's the proper way to 
> do it because those calls don't run in uwsgi environment.
>
> In the other hand, I can't disable the master process, I use "lazy-apps" 
> and "touch-chain-reload" options of uwsgi in order to achieve graceful 
> reloading, because acordingly to the documentation about graceful 
> reloading 
> 
> :
> *"All of the described techniques assume a modern (>= 1.4) uWSGI release 
> with the master process enabled."*
>
> Graceful reloading allows me to update my app's code and reload uwsgi 
> workers smoothly, without downtime or errors. What can I do if I can't 
> disable master process?
>
> You mentioned the original problem seems to be a locking problem due to 
> threads. If my app doesn't open threads, where else could be the cause of 
> the issue? 
>
> The weirdest thing for me is that the timeouts are always on core 0. I 
> mean, uWSGI runs between 30 and 45 workers over 16 cores; isn't it too 
> much of a coincidence that the requests that hang correspond to a few 
> workers always assigned to core 0?
>
>
> El jueves, 9 de mayo de 2019, 17:10:19 (UTC-3), Leonel Câmara escribió:
>>
>> Yes I meant stuff exactly like that.
>>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/web2py/b7bf5665-4a2e-4ebd-a147-9f82d2318820%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Lisandro
I've checked my app's code once again and I can confirm that it doesn't 
create threads. It only uses subprocess.call() within functions that are 
called in the scheduler environment; I understand that's the proper way to 
do it because those calls don't run in the uWSGI environment.
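
The scheduler pattern I mean is roughly this (a sketch; `task_backup` and 
the command it runs are made-up placeholders):

```python
import subprocess

def task_backup():
    # This function is registered as a web2py scheduler task, so it runs
    # in the scheduler's worker process, not inside a uWSGI worker.
    # Blocking on the child process therefore never ties up a web request.
    rc = subprocess.call(["echo", "running backup"])
    return rc == 0
```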

On the other hand, I can't disable the master process: I use the 
"lazy-apps" and "touch-chain-reload" options of uWSGI to achieve graceful 
reloading, and according to the documentation on graceful reloading:
*"All of the described techniques assume a modern (>= 1.4) uWSGI release 
with the master process enabled."*

Graceful reloading allows me to update my app's code and reload uwsgi 
workers smoothly, without downtime or errors. What can I do if I can't 
disable master process?
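
For reference, the graceful-reload setup I'm describing is roughly this (a 
sketch; the path of the touch file is just an example):

```ini
[uwsgi]
master = true
lazy-apps = true
; reload the workers one at a time, waiting for each to finish its
; in-flight requests, whenever this file is touched, e.g.:
;   touch /var/www/medios/reload.txt
touch-chain-reload = /var/www/medios/reload.txt
```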

You mentioned the original problem seems to be a locking problem due to 
threads. If my app doesn't open threads, where else could be the cause of 
the issue? 

The weirdest thing for me is that the timeouts are always on core 0. I 
mean, uWSGI runs between 30 and 45 workers over 16 cores; isn't it too much 
of a coincidence that the requests that hang correspond to a few workers 
always assigned to core 0?


El jueves, 9 de mayo de 2019, 17:10:19 (UTC-3), Leonel Câmara escribió:
>
> Yes I meant stuff exactly like that.
>



[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Leonel Câmara
Yes I meant stuff exactly like that.



[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Lisandro
Hi Leonel, thank you very much for your time.

uWSGI docs confirm what you suggest:
*"The emperor should generally not be run with --master, unless master 
features like advanced logging are specifically needed."*

Allow me to ask one last question: what do you mean by "create any thread 
in your application"? Do you mean using subprocess.call() or something like 
that? 
If that's the case, I think I've taken care of it: I only use subprocess 
within the scheduler environment, not in my controller functions. 
Is that what you meant?

El jueves, 9 de mayo de 2019, 15:25:36 (UTC-3), Leonel Câmara escribió:
>
> Seems like a locking problem due to threads. Do you create any thread in 
> your application? If so you need to remove master=true from your uwsgi .ini 
> config.
>



[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Leonel Câmara
Seems like a locking problem due to threads. Do you create any thread in 
your application? If so you need to remove master=true from your uwsgi .ini 
config.
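
A quick way to check is to list the live threads inside a worker (a generic 
diagnostic sketch, not uWSGI-specific — anything beyond the main thread 
means the application, or a library it imports, spawns threads):

```python
import threading

def extra_threads():
    # Returns the names of all live threads except the main one.
    # In a forking server, any entry here is a thread the application
    # (or an imported library) created.
    return [t.name for t in threading.enumerate()
            if t is not threading.main_thread()]

print(extra_threads())
```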
