Re: [web2py] How to perform a DB migration from outside the app where DB is defined?

2023-11-29 Thread Lisandro
Ok great! Thanks for the advice.
I already made the changes, so I no longer depend on making a 
request with migrate=True.
I tested it and it works: from my main application I connect to the 
secondary app's database and use .executesql() to make the changes I need 
before upgrading the app's code.
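For reference, the idea can be sketched like this (a self-contained toy using the stdlib sqlite3 module as a stand-in for pydal's db.executesql(); the table and column names are made up — in production the same statements would be issued via db.executesql() against the secondary app's database):

```python
import sqlite3

# Stand-in for the secondary app's database connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT)")

# The "migration": add the column the new models/db.py expects.
# In web2py this would be db.executesql("ALTER TABLE article ADD COLUMN ...")
conn.execute("ALTER TABLE article ADD COLUMN subtitle TEXT")
conn.commit()

# Verify the schema now matches the new model definitions.
cols = [row[1] for row in conn.execute("PRAGMA table_info(article)")]
print(cols)  # ['id', 'title', 'subtitle']
```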

One more question: should I run the app once with fake_migrate=True? 
After manually performing the database migration and updating the app's 
code, everything works great. The migration involved creating and deleting 
some table fields; those changes were made using .executesql(). After that, 
I updated the app's code, including models/db.py, which already reflects 
those changes in its definitions. At this point, although the application 
works fine, the files inside the /databases/ folder are not up to date. 
So, should I run the app once with fake_migrate=True? Or do I not need it 
anymore, since I'm going to perform database migrations manually from now on?

Thanks in advance!
Best regards,
Lisandro


On Tuesday, November 28, 2023 at 10:06:52 AM UTC-3, Carlos Correia wrote:

> At 11:34 on 28/11/23, Lisandro wrote:
>
> Hey there!  
> I have several web2py apps running, and one of them acts as the main one, 
> taking care of maintenance and upgrading the apps when needed. Sometimes 
> the upgrade requires a database migration.
>
> The whole process is controlled by the main app, which takes the app that 
> needs to be upgraded and does this:
> 1) Puts the app in maintenance mode (that is, returns HTTP 503 to any 
> request that doesn't come from localhost)
> 2) Upgrades the app's code.
> 3) Sets migrate=True
> 4) Uses the requests module to perform a GET so the migration is done in 
> the app's database.
> 5) Sets migrate=False
> 6) Reactivates the app to serve any request.
>
> The problem I'm facing with this technique is that some migrations take a 
> lot of time (sometimes because there are a lot of changes, other times 
> because it's a small change in a really big database). In these cases, 
> using requests to perform a GET isn't ideal because it hits the timeout (a 
> normal timeout set for any HTTP request).
>
> Is there any way to perform the database migration from outside the app 
> where the database is defined that doesn't involve using http requests?
>
> Thanks in advance!
> Warm regards,
> Lisandro
>
> -- 
> Resources:
> - http://web2py.com
> - http://web2py.com/book (Documentation)
> - http://github.com/web2py/web2py (Source code)
> - https://code.google.com/p/web2py/issues/list (Report Issues)
> --- 
> You received this message because you are subscribed to the Google Groups 
> "web2py-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to web2py+un...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/web2py/33dd2781-ad2d-4f21-b458-ff54ee665723n%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/web2py/33dd2781-ad2d-4f21-b458-ff54ee665723n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
> Hi,
>
>
> With complex databases I recommend using a SQL script or executing the 
> required SQL commands from web2py via db.executesql(), particularly when 
> there are also data changes involved.
>
> Regards,
>
> Carlos Correia
> =
> MEMÓRIA PERSISTENTE
> GSM:  917 157 146 (Signal)
> e-mail: ge...@memoriapersistente.pt
> URL: http://www.memoriapersistente.pt
>
>



[web2py] How to perform a DB migration from outside the app where DB is defined?

2023-11-28 Thread Lisandro
Hey there! 
I have several web2py apps running, and one of them acts as the main one, 
taking care of maintenance and upgrading the apps when needed. Sometimes 
the upgrade requires a database migration.

The whole process is controlled by the main app, which takes the app that 
needs to be upgraded and does this:
1) Puts the app in maintenance mode (that is, returns HTTP 503 to any 
request that doesn't come from localhost)
2) Upgrades the app's code.
3) Sets migrate=True
4) Uses the requests module to perform a GET so the migration is done in 
the app's database.
5) Sets migrate=False
6) Reactivates the app to serve any request.

The problem I'm facing with this technique is that some migrations take a 
lot of time (sometimes because there are a lot of changes, other times 
because it's a small change in a really big database). In these cases, 
using requests to perform a GET isn't ideal because it hits the timeout (a 
normal timeout set for any HTTP request).

Is there any way to perform the database migration from outside the app 
where the database is defined that doesn't involve using http requests?

Thanks in advance!
Warm regards,
Lisandro



[web2py] Error in gluon parse_all_vars(): TypeError: write() argument must be str, not bytes

2023-06-23 Thread Lisandro
I'm running web2py Version 2.24.1-stable+timestamp.2023.03.23.05.07.17 with 
Python 3.9 on Linux RHEL9. I have a web2py application that exposes a 
website to the public; it successfully handles about 50 requests per second 
:)

Every day I see a few errors with this traceback:
Traceback (most recent call last):
File "/var/www/medios/gluon/restricted.py", line 219, in restricted
exec(ccode, environment)
File "applications/lmdiario/compiled/controllers.default.index.py", line 9, in 
<module>
File "/var/www/medios/applications/lmdiario/modules/globales.py", line 2300, 
in get_publicidades_response
layout = request.vars.layout or ''
File "/var/www/medios/gluon/globals.py", line 325, in vars
self.parse_all_vars()
File "/var/www/medios/gluon/globals.py", line 296, in parse_all_vars
for key, value in iteritems(self.post_vars):
File "/var/www/medios/gluon/globals.py", line 317, in post_vars
self.parse_post_vars()
File "/var/www/medios/gluon/globals.py", line 253, in parse_post_vars
dpost = cgi.FieldStorage(fp=body, environ=env, headers=headers, 
keep_blank_values=1)
File "/usr/lib64/python3.9/cgi.py", line 482, in __init__
self.read_single()
File "/usr/lib64/python3.9/cgi.py", line 675, in read_single
self.read_binary()
File "/usr/lib64/python3.9/cgi.py", line 697, in read_binary
self.file.write(data)
TypeError: write() argument must be str, not bytes


I believe the error is related to this Python bug 
<https://bugs.python.org/issue2>: apparently *the error occurs when 
receiving requests with Content-Length but without Content-Disposition 
headers*. 

The errors are produced by random clients sending random POST requests. 
Since the Python bug is still open, is there any way I can avoid these 
ticket errors? Using nginx to add the missing header? Changing the web2py 
source code? What could I do?
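One possible mitigation (an assumption, not a verified fix for the exact requests those clients send) is a small WSGI wrapper in front of the web2py application that rejects multipart bodies containing no Content-Disposition part header, before gluon's cgi-based parsing can trip over them:

```python
import io

def multipart_guard(app):
    """Hypothetical WSGI middleware: return 400 for multipart POST bodies
    that contain no Content-Disposition header in any part, instead of
    letting cgi.FieldStorage crash on them downstream."""
    def wrapped(environ, start_response):
        ctype = environ.get("CONTENT_TYPE", "")
        try:
            clen = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            clen = 0
        if ctype.startswith("multipart/form-data") and clen > 0:
            body = environ["wsgi.input"].read(clen)
            environ["wsgi.input"] = io.BytesIO(body)  # replay for the app
            if b"content-disposition" not in body.lower():
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain")])
                return [b"malformed multipart request"]
        return app(environ, start_response)
    return wrapped

# Quick check with a fake downstream app and a malformed request:
def fake_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

captured = {}
def start_response(status, headers):
    captured["status"] = status

environ = {
    "REQUEST_METHOD": "POST",
    "CONTENT_TYPE": "multipart/form-data; boundary=x",
    "CONTENT_LENGTH": "9",
    "wsgi.input": io.BytesIO(b"no parts!"),
}
result = multipart_guard(fake_app)(environ, start_response)
print(captured["status"])  # 400 Bad Request
```

Note the wrapper buffers the whole body in memory, so it would need a size cap in production; alternatively the same check could be done in nginx before the request ever reaches web2py.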

Thanks in advance!
Warm regards,
Lisandro




Re: [web2py] Scheduler error on last stable web2py version 2.24.1

2023-06-22 Thread Lisandro
I've posted an issue <https://github.com/web2py/web2py/issues/2468> and 
I've already proposed a small fix and made a pull request :)

On Wednesday, June 21, 2023 at 11:25:45 AM UTC-3, Lisandro wrote:

> (sorry, I've deleted my previous message because I have more detailed 
> information).
>
> Thank you for pointing that out. I had seen that issue but I missed that 
> I had to use Scheduler(..., use_spawn=True), so I made that change and, 
> apparently, the scheduler is stable (it hasn't crashed so far). However, 
> since that change, all the tasks that belong to a different app than the 
> one the scheduler was run from fail with this traceback:
> Traceback (most recent call last):
>   File "/home/limon/medios/gluon/scheduler.py", line 494, in executor
>     functions = current._scheduler.tasks
> AttributeError: '_thread._local' object has no attribute '_scheduler'
>
>
> Let me explain a bit more: I have one web2py instance with several 
> applications, each one with its own database. One of these apps is the main 
> one and the scheduler connects to its database.
> ...web2py/applications/main/...
> ...web2py/applications/app1/...
> ...web2py/applications/app2/...
>
>
> I run three scheduler workers for the main app with this command:
> /opt/virtualenvs/py39/bin/python /home/limon/web2py/web2py.py -K 
> main,main,main
>
>
> In applications/main/models/scheduler.py I instantiate the Scheduler like 
> this:
> scheduler = Scheduler(db, max_empty_runs=0, heartbeat=5, use_spawn=True)
>
>
> And then, from several parts of my application I queue tasks. Some of 
> these tasks are defined in the "main" application, but some others are 
> defined in "app1" or "app2". 
> Well, since the change to Scheduler(..., use_spawn=True), all the tasks 
> within the "main" application run ok, but all the other ones fail with 
> the traceback I showed before. 
> Notice that I run the three scheduler workers for the "main" application; 
> I'm not sure if that has something to do with the issue. But I can confirm 
> all this setup was working smoothly before use_spawn=True.
>
> What could be happening?
> Any help will be much appreciated.
> I'll keep investigating and post here if I find something.
>
> Thanks!
> On Wednesday, June 21, 2023 at 07:08:37 UTC-3, Massimiliano wrote:
>
>> There was an issue but should be fixed now.
>>
>> https://github.com/web2py/web2py/issues/1999
>>
>>
>> On Monday, June 19, 2023 at 20:57, Lisandro wrote:
>>
>>> I've recently upgraded to web2py Version 
>>> 2.24.1-stable+timestamp.2023.03.23.05.07.17
>>> It's running on python 3.9.14, Rocky Linux RHEL9, using PostgreSQL 15.2 
>>> for database.
>>>
>>> Since I did the upgrade, the scheduler fails from time to time with this 
>>> traceback:
>>>
>>> ERROR:web2py.scheduler.main#1531711:error storing result
>>> Traceback (most recent call last):
>>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>>> wrapped_report_task
>>> self.report_task(task, task_report)
>>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
>>> db(sr.id == task.run_id).update(
>>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, 
>>> in update
>>> ret = db._adapter.update(table, self.query, row.op_values())
>>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>>> 586, in update
>>> raise e
>>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>>> 581, in update
>>> self.execute(sql)
>>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>>> line 69, in wrap
>>> return f(*args, **kwargs)
>>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>>> 468, in execute
>>> rv = self.cursor.execute(command, *args[1:], **kwargs)
>>> psycopg2.OperationalError: server closed the connection unexpectedly
>>> This probably means the server terminated abnormally
>>> before or while processing the request.
>>> Traceback (most recent call last):
>>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>>> wrapped_report_task
>>> self.report_task(task, task_report)
>>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in

Re: [web2py] Scheduler error on last stable web2py version 2.24.1

2023-06-21 Thread Lisandro
(sorry, I've deleted my previous message because I have more detailed 
information).

Thank you for pointing that out. I had seen that issue but I missed that I 
had to use Scheduler(..., use_spawn=True), so I made that change and, 
apparently, the scheduler is stable (it hasn't crashed so far). However, 
since that change, all the tasks that belong to a different app than the 
one the scheduler was run from fail with this traceback:
Traceback (most recent call last):
  File "/home/limon/medios/gluon/scheduler.py", line 494, in executor
    functions = current._scheduler.tasks
AttributeError: '_thread._local' object has no attribute '_scheduler'


Let me explain a bit more: I have one web2py instance with several 
applications, each one with its own database. One of these apps is the main 
one and the scheduler connects to its database.
...web2py/applications/main/...
...web2py/applications/app1/...
...web2py/applications/app2/...


I run three scheduler workers for the main app with this command:
/opt/virtualenvs/py39/bin/python /home/limon/web2py/web2py.py -K 
main,main,main


In applications/main/models/scheduler.py I instantiate the Scheduler like 
this:
scheduler = Scheduler(db, max_empty_runs=0, heartbeat=5, use_spawn=True)


And then, from several parts of my application I queue tasks. Some of these 
tasks are defined in the "main" application, but some others are defined in 
"app1" or "app2". 
Well, since the change to Scheduler(..., use_spawn=True), all the tasks 
within the "main" application run ok, but all the other ones fail with the 
traceback I showed before. 
Notice that I run the three scheduler workers for the "main" application; 
I'm not sure if that has something to do with the issue. But I can confirm 
all this setup was working smoothly before use_spawn=True.

What could be happening?
Any help will be much appreciated.
I'll keep investigating and post here if I find something.

Thanks!
On Wednesday, June 21, 2023 at 07:08:37 UTC-3, Massimiliano wrote:

> There was an issue but should be fixed now.
>
> https://github.com/web2py/web2py/issues/1999
>
>
> On Monday, June 19, 2023 at 20:57, Lisandro wrote:
>
>> I've recently upgraded to web2py Version 
>> 2.24.1-stable+timestamp.2023.03.23.05.07.17
>> It's running on python 3.9.14, Rocky Linux RHEL9, using PostgreSQL 15.2 
>> for database.
>>
>> Since I did the upgrade, the scheduler fails from time to time with this 
>> traceback:
>>
>> ERROR:web2py.scheduler.main#1531711:error storing result
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>> wrapped_report_task
>> self.report_task(task, task_report)
>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
>> db(sr.id == task.run_id).update(
>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, 
>> in update
>> ret = db._adapter.update(table, self.query, row.op_values())
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 586, in update
>> raise e
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 581, in update
>> self.execute(sql)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>> line 69, in wrap
>> return f(*args, **kwargs)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 468, in execute
>> rv = self.cursor.execute(command, *args[1:], **kwargs)
>> psycopg2.OperationalError: server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>> wrapped_report_task
>> self.report_task(task, task_report)
>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
>> db(sr.id == task.run_id).update(
>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, 
>> in update
>> ret = db._adapter.update(table, self.query, row.op_values())
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 586, in update
>> raise e
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 581, in update
>> self.execute(sql)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>> line 69, in 

Re: [web2py] Scheduler error on last stable web2py version 2.24.1

2023-06-21 Thread Lisandro
Hey there! Thanks for pointing that out. 
I had already seen that issue but I missed that I had to instantiate 
Scheduler() with use_spawn=True. I've made that change, but now I have this 
error:

Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 491, in executor
    functions = current._scheduler.tasks
AttributeError: '_thread._local' object has no attribute '_scheduler'

I think this happens because I store the scheduler in the "current" object 
after instantiating it, and then use it from within some of my scheduled 
tasks. Should I achieve this in a different way?
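That AttributeError is consistent with how thread-locals behave: an attribute set on a threading.local in one thread simply does not exist in a thread (or spawned process) that never set it itself. A minimal illustration in plain Python (gluon's `current` object is built on the same mechanism, as far as I can tell):

```python
import threading

current = threading.local()  # same mechanism as gluon's `current`
current.scheduler = "configured in the main thread"

seen = {}

def worker():
    # In a spawned thread the attribute was never set, so it is missing.
    seen["has_attr"] = hasattr(current, "scheduler")

t = threading.Thread(target=worker)
t.start()
t.join()

print(hasattr(current, "scheduler"))  # True in the main thread
print(seen["has_attr"])               # False in the worker
```

So anything stashed on `current` in the model files would have to be re-set inside whatever thread or process actually executes the task, rather than relied upon to carry over.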

On Wednesday, June 21, 2023 at 07:08:37 UTC-3, Massimiliano wrote:

> There was an issue but should be fixed now.
>
> https://github.com/web2py/web2py/issues/1999
>
>
> On Monday, June 19, 2023 at 20:57, Lisandro wrote:
>
>> I've recently upgraded to web2py Version 
>> 2.24.1-stable+timestamp.2023.03.23.05.07.17
>> It's running on python 3.9.14, Rocky Linux RHEL9, using PostgreSQL 15.2 
>> for database.
>>
>> Since I did the upgrade, the scheduler fails from time to time with this 
>> traceback:
>>
>> ERROR:web2py.scheduler.main#1531711:error storing result
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>> wrapped_report_task
>> self.report_task(task, task_report)
>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
>> db(sr.id == task.run_id).update(
>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, 
>> in update
>> ret = db._adapter.update(table, self.query, row.op_values())
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 586, in update
>> raise e
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 581, in update
>> self.execute(sql)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>> line 69, in wrap
>> return f(*args, **kwargs)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 468, in execute
>> rv = self.cursor.execute(command, *args[1:], **kwargs)
>> psycopg2.OperationalError: server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/scheduler.py", line 1077, in 
>> wrapped_report_task
>> self.report_task(task, task_report)
>>   File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
>> db(sr.id == task.run_id).update(
>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, 
>> in update
>> ret = db._adapter.update(table, self.query, row.op_values())
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 586, in update
>> raise e
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 581, in update
>> self.execute(sql)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>> line 69, in wrap
>> return f(*args, **kwargs)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
>> 468, in execute
>> rv = self.cursor.execute(command, *args[1:], **kwargs)
>> psycopg2.OperationalError: server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> During handling of the above exception, another exception occurred:
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/shell.py", line 321, in run
>> exec(python_code, _env)
>>>   File "<string>", line 1, in <module>
>>   File "/var/www/medios/gluon/scheduler.py", line 949, in loop
>> self.wrapped_report_task(task, self.execute(task))
>>   File "/var/www/medios/gluon/scheduler.py", line 1082, in 
>> wrapped_report_task
>> db.rollback()
>>   File "/var/www/medios/gluon/packages/dal/pydal/base.py", line 825, in 
>> rollback
>> self._adapter.rollback()
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
>> line 57, in wrap
>> return f(*args, **kwargs)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base

[web2py] Scheduler error on last stable web2py version 2.24.1

2023-06-19 Thread Lisandro
I've recently upgraded to web2py Version 
2.24.1-stable+timestamp.2023.03.23.05.07.17
It's running on python 3.9.14, Rocky Linux RHEL9, using PostgreSQL 15.2 for 
database.

Since I did the upgrade, the scheduler fails from time to time with this 
traceback:

ERROR:web2py.scheduler.main#1531711:error storing result
Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 1077, in 
wrapped_report_task
self.report_task(task, task_report)
  File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
db(sr.id == task.run_id).update(
  File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, in 
update
ret = db._adapter.update(table, self.query, row.op_values())
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
586, in update
raise e
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
581, in update
self.execute(sql)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
line 69, in wrap
return f(*args, **kwargs)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
468, in execute
rv = self.cursor.execute(command, *args[1:], **kwargs)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 1077, in 
wrapped_report_task
self.report_task(task, task_report)
  File "/var/www/medios/gluon/scheduler.py", line 1101, in report_task
db(sr.id == task.run_id).update(
  File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2789, in 
update
ret = db._adapter.update(table, self.query, row.op_values())
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
586, in update
raise e
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
581, in update
self.execute(sql)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
line 69, in wrap
return f(*args, **kwargs)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
468, in execute
rv = self.cursor.execute(command, *args[1:], **kwargs)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/var/www/medios/gluon/shell.py", line 321, in run
exec(python_code, _env)
  File "<string>", line 1, in <module>
  File "/var/www/medios/gluon/scheduler.py", line 949, in loop
self.wrapped_report_task(task, self.execute(task))
  File "/var/www/medios/gluon/scheduler.py", line 1082, in 
wrapped_report_task
db.rollback()
  File "/var/www/medios/gluon/packages/dal/pydal/base.py", line 825, in 
rollback
self._adapter.rollback()
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/__init__.py", 
line 57, in wrap
return f(*args, **kwargs)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 
1012, in rollback
return self.connection.rollback()
psycopg2.InterfaceError: connection already closed


I've checked the PostgreSQL logs but there is no error or apparent problem 
at the time the scheduler fails. The database instance hosts several 
databases and no errors are logged; everything runs smoothly. It's just the 
scheduler that reports this error (and after it, it doesn't run anymore). 
Where else should I look?

Any help will be much appreciated.
Warm regards,
Lisandro



[web2py] Re: Error with redis and web2py 2.24.1: Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float first.

2023-05-16 Thread Lisandro
I've done some more investigation into this error. 
Apparently, the error "redis.exceptions.ConnectionError: Connection closed 
by server" is related to the fact that the Python app doesn't exchange 
data with Redis for a certain time, so the connection is closed 
automatically. When the app tries to talk to Redis again, the connection 
no longer works and you get "Connection closed by server". To resolve 
this, you can pass *health_check_interval*.

More on this error:
https://github.com/redis/redis-py/issues/1186
https://github.com/redis/redis-py/issues/1232
https://stackoverflow.com/questions/70647697/handle-redis-connection-reset

In my web2py app I'm connecting to Redis using RConn, provided by 
gluon.contrib.redis_utils. I've checked the source code of RConn: it 
instantiates a StrictRedis connection with the provided parameters. Since 
right now I'm not providing any parameters besides host/port, I think the 
problem will be solved by providing the *health_check_interval* and 
*socket_timeout* arguments. I'll make those changes in production and see 
what happens, but I'm very confident :)
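Something along these lines is what I have in mind (the host value is a placeholder, and it's an assumption that RConn forwards extra keyword arguments unchanged to the redis-py client — worth double-checking in your web2py version):

```python
# Hypothetical connection settings for a long-lived Redis client.
# health_check_interval makes redis-py ping the server before reusing a
# connection that has been idle longer than the given number of seconds;
# socket_timeout keeps a dead socket from hanging the request.
REDIS_KWARGS = dict(
    host="127.0.0.1",            # placeholder: the Redis server's LAN address
    port=6379,
    socket_timeout=5,            # fail fast instead of blocking forever
    health_check_interval=30,    # re-validate idle connections
)

# Inside web2py it would be used roughly as:
#   from gluon.contrib.redis_utils import RConn
#   rconn = RConn(**REDIS_KWARGS)
print(REDIS_KWARGS["health_check_interval"])  # 30
```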

On Wednesday, May 10, 2023 at 4:18:25 PM UTC-3, Lisandro wrote:

> I see what you mean. I'll investigate why the socket was closed, I presume 
> it could be a network problem, because the Redis server is separated from 
> the one where the app resides, they communicate through the local network. 
> Considering the servers have plenty of available resources, the app serves 
> around 60 requests per second and the error only happens 3 or 4 times per 
> day, I will look into the network.
>
> Thank you for your time!
> Best regards,
> Lisandro
>
On Tuesday, May 9, 2023 at 11:22:06 PM UTC-3, snide...@gmail.com wrote:
>
>> On Monday, May 8, 2023 at 6:36:13 AM UTC-7 Lisandro wrote:
>>
>> Hey there! 
>> I recently updated to Web2py Version 
>> 2.24.1-stable+timestamp.2023.03.23.05.07.17
>> It uses python 3.9.14, running in production serving around 60 requests 
>> per second, using resources efficiently and running really smoothly :D
>>
>> Since the update, I'm seeing this error sporadically: 
>> *redis.exceptions.DataError: 
>> Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float 
>> first.*
>> This is the traceback:
>>
>> Traceback (most recent call last):
>> File "applications/eldia/compiled/models.db.py", line 113, in 
>> File "/var/www/medios/gluon/globals.py", line 979, in connect
>> row = table(record_id, unique_key=unique_key)
>>
>>  [...]
>>
>>
>> File 
>> "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py",
>>  
>> line 318, in read_response
>> raw = self._buffer.readline()
>> File 
>> "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py",
>>  
>> line 249, in readline
>> self._read_from_socket()
>> File 
>> "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py",
>>  
>> line 195, in _read_from_socket
>> raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
>> redis.exceptions.ConnectionError: Connection closed by server.
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>> File "/var/www/medios/gluon/main.py", line 439, in wsgibase
>> serve_controller(request, response, session)
>> File "/var/www/medios/gluon/main.py", line 173, in serve_controller
>> run_models_in(environment)
>> File "/var/www/medios/gluon/compileapp.py", line 563, in run_models_in
>> restricted(ccode, environment, layer=model)
>> File "/var/www/medios/gluon/restricted.py", line 219, in restricted
>> exec(ccode, environment)
>> File "applications/eldia/compiled/models.db.py", line 116, in 
>> File "applications/eldia/compiled/models.db.py", line 47, in raise_error
>> gluon.http.HTTP: 500 INTERNAL SERVER ERROR
>>
>>
>> tripped while already falling, methinks
>>  
>>
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>> File "/var/www/medios/gluon/main.py", line 455, in wsgibase
>> session._try_store_in_db(request, response)
>> File "/var/www/medios/gluon/globals.py", line 1254, in _try_store_in_db
>> record_id = table.insert(**dd)
>> File "/var/www/medios/gluon/contrib/redis_session.py", line 167, in 
>>

[web2py] Re: Error with redis and web2py 2.24.1: Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float first.

2023-05-10 Thread Lisandro
I see what you mean. I'll investigate why the socket was closed; I presume 
it could be a network problem, because the Redis server is separate from 
the one where the app resides, and they communicate through the local 
network. Considering that the servers have plenty of available resources, 
the app serves around 60 requests per second, and the error only happens 3 
or 4 times per day, I will look into the network.

Thank you for your time!
Best regards,
Lisandro

On Tuesday, May 9, 2023 at 11:22:06 PM UTC-3, snide...@gmail.com wrote:

> On Monday, May 8, 2023 at 6:36:13 AM UTC-7 Lisandro wrote:
>
> Hey there! 
> I recently updated to Web2py Version 
> 2.24.1-stable+timestamp.2023.03.23.05.07.17
> It uses python 3.9.14, running in production serving around 60 requests 
> per second, using resources efficiently and running really smoothly :D
>
> Since the update, I'm seeing this error sporadically: 
> *redis.exceptions.DataError: 
> Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float 
> first.*
> This is the traceback:
>
> Traceback (most recent call last):
>   File "applications/eldia/compiled/models.db.py", line 113, in <module>
>   File "/var/www/medios/gluon/globals.py", line 979, in connect
>     row = table(record_id, unique_key=unique_key)
>
>  [...]
>
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 318, in read_response
>     raw = self._buffer.readline()
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 249, in readline
>     self._read_from_socket()
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 195, in _read_from_socket
>     raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
> redis.exceptions.ConnectionError: Connection closed by server.
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/var/www/medios/gluon/main.py", line 439, in wsgibase
>     serve_controller(request, response, session)
>   File "/var/www/medios/gluon/main.py", line 173, in serve_controller
>     run_models_in(environment)
>   File "/var/www/medios/gluon/compileapp.py", line 563, in run_models_in
>     restricted(ccode, environment, layer=model)
>   File "/var/www/medios/gluon/restricted.py", line 219, in restricted
>     exec(ccode, environment)
>   File "applications/eldia/compiled/models.db.py", line 116, in <module>
>   File "applications/eldia/compiled/models.db.py", line 47, in raise_error
> gluon.http.HTTP: 500 INTERNAL SERVER ERROR
>
>
> tripped while already falling, methinks
>  
>
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/var/www/medios/gluon/main.py", line 455, in wsgibase
>     session._try_store_in_db(request, response)
>   File "/var/www/medios/gluon/globals.py", line 1254, in _try_store_in_db
>     record_id = table.insert(**dd)
>   File "/var/www/medios/gluon/contrib/redis_session.py", line 167, in insert
>     pipe.execute()
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 2078, in execute
>     return conn.retry.call_with_retry(
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
>     return do()
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 2079, in <lambda>
>     lambda: execute(conn, stack, raise_on_error),
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1922, in _execute_transaction
>     all_cmds = connection.pack_commands(
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 895, in pack_commands
>     for chunk in self.pack_command(*cmd):
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 858, in pack_command
>     for arg in map(self.encoder.encode, args):
>   File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 108, in encode
>     raise DataError(
> redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float first.
>
>
> This looks like it involves one of these being None:
>
> dd = dict(locked=0,
>   client_ip=response.session_client,
>   modified_datetime=request.now.isoformat(),
>  
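If one of the fields in that `dd` dict really is None, a defensive workaround (purely an assumption about a possible mitigation, not the actual web2py fix) would be to coerce None values before the hash is written, since redis-py only accepts bytes, str, int or float:

```python
def redis_safe(d):
    """Replace None values with '' so redis-py's encoder accepts them.

    redis-py raises DataError for any value that is not bytes, str,
    int or float; None is the usual offender.
    """
    return {k: ("" if v is None else v) for k, v in d.items()}

# Hypothetical session dict with a missing client IP, like the one above.
dd = {"locked": 0, "client_ip": None, "modified_datetime": "2023-05-08T10:00:00"}
clean = redis_safe(dd)
print(clean["client_ip"])  # -> '' instead of None
```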

[web2py] Error with redis and web2py 2.24.1: Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float first.

2023-05-08 Thread Lisandro
Hey there! 
I recently updated to Web2py Version 
2.24.1-stable+timestamp.2023.03.23.05.07.17
It uses python 3.9.14, running in production serving around 60 requests per 
second, using resources efficiently and running really smoothly :D

Since the update, I'm seeing this error sporadically: 
*redis.exceptions.DataError: 
Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float 
first.*
This is the traceback:

Traceback (most recent call last):
  File "applications/eldia/compiled/models.db.py", line 113, in <module>
  File "/var/www/medios/gluon/globals.py", line 979, in connect
    row = table(record_id, unique_key=unique_key)
  File "/var/www/medios/gluon/contrib/redis_session.py", line 134, in __call__
    row = q.select()
  File "/var/www/medios/gluon/contrib/redis_session.py", line 206, in select
    rtn = {to_native(k): v for k, v in self.db.r_server.hgetall(key).items()}
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/commands/core.py", line 4776, in hgetall
    return self.execute_command("HGETALL", name)
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1238, in execute_command
    return conn.retry.call_with_retry(
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/retry.py", line 49, in call_with_retry
    fail(error)
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1242, in <lambda>
    lambda error: self._disconnect_raise(conn, error),
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1228, in _disconnect_raise
    raise error
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1239, in <lambda>
    lambda: self._send_command_parse_response(
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1215, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1254, in parse_response
    response = connection.read_response()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 824, in read_response
    response = self._parser.read_response(disable_decoding=disable_decoding)
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 318, in read_response
    raw = self._buffer.readline()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 249, in readline
    self._read_from_socket()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 195, in _read_from_socket
    raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
redis.exceptions.ConnectionError: Connection closed by server.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/www/medios/gluon/main.py", line 439, in wsgibase
    serve_controller(request, response, session)
  File "/var/www/medios/gluon/main.py", line 173, in serve_controller
    run_models_in(environment)
  File "/var/www/medios/gluon/compileapp.py", line 563, in run_models_in
    restricted(ccode, environment, layer=model)
  File "/var/www/medios/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "applications/eldia/compiled/models.db.py", line 116, in <module>
  File "applications/eldia/compiled/models.db.py", line 47, in raise_error
gluon.http.HTTP: 500 INTERNAL SERVER ERROR

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/www/medios/gluon/main.py", line 455, in wsgibase
    session._try_store_in_db(request, response)
  File "/var/www/medios/gluon/globals.py", line 1254, in _try_store_in_db
    record_id = table.insert(**dd)
  File "/var/www/medios/gluon/contrib/redis_session.py", line 167, in insert
    pipe.execute()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 2078, in execute
    return conn.retry.call_with_retry(
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 2079, in <lambda>
    lambda: execute(conn, stack, raise_on_error),
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/client.py", line 1922, in _execute_transaction
    all_cmds = connection.pack_commands(
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 895, in pack_commands
    for chunk in self.pack_command(*cmd):
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 858, in pack_command
    for arg in map(self.encoder.encode, args):
  File "/opt/virtualenvs/medios_py39/lib/python3.9/site-packages/redis/connection.py", line 108, in encode
    raise DataError(
redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a bytes, string, int or float first.

[web2py] Re: xmlrpc python3 error

2023-03-20 Thread Lisandro
Just in case it helps someone, I was able to change my code to make it 
work. It's not a complete solution: I just switched to using args instead 
of vars.
I'll explain: in the client I was calling the service with some query 
string arguments, like this:

from gluon.contrib.simplejsonrpc import ServerProxy
ws = ServerProxy('http://mysite.com/ws/call/jsonrpc?token=ce3e1298-b7b3-40f6-83f2-f78c4360db8d')

... and in the function exposing the service I was using the token to 
identify the client:

from gluon.tools import Service
service = Service()

def call():
    if not db(db.clients.token == request.vars.token).count():
        raise HTTP(403)
    return service()


So I just had to make this change in the client:

ws = ServerProxy('http://mysite.com/ws/call/jsonrpc/ce3e1298-b7b3-40f6-83f2-f78c4360db8d')


... and this change in the service:

def call():
    if not db(db.clients.token == request.args(1)).count():
        raise HTTP(403)
    return service()

I hope it helps someone.
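Outside web2py, the difference between the two URL shapes can be shown with the standard library alone; the token value is the one from the example URLs, and the path-index arithmetic mirrors web2py's controller/function/args mapping:

```python
from urllib.parse import urlparse, parse_qs

url_vars = "http://mysite.com/ws/call/jsonrpc?token=ce3e1298-b7b3-40f6-83f2-f78c4360db8d"
url_args = "http://mysite.com/ws/call/jsonrpc/ce3e1298-b7b3-40f6-83f2-f78c4360db8d"

# Query-string style: the token shows up in request.vars.
token_from_vars = parse_qs(urlparse(url_vars).query)["token"][0]

# Path style: the token is a path segment after the function name,
# which web2py exposes as request.args(1) in this layout.
token_from_args = urlparse(url_args).path.split("/")[4]

print(token_from_vars == token_from_args)  # -> True
```

The path style avoids the request.vars access that triggered the cgi bug, which is why the workaround helped.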
Best regards,
Lisandro
On Sunday, March 19, 2023 at 7:56:20 PM UTC-3, Lisandro wrote:

> Hey there! 
> I'm experiencing the same error after migrating to python3 and web2py 
> version 2.23.1-stable+timestamp.2023.01.31.08.01.46
> The traceback is the same, the error is originated when I try to access 
> request.vars:
>
> Traceback (most recent call last):
> File "/home/limon/medios/gluon/restricted.py", line 219, in restricted
> exec(ccode, environment)
> File "/home/limon/medios/applications/webmedios/controllers/ws.py", line 
> 1423, in 
> File "/home/limon/medios/gluon/globals.py", line 430, in 
> self._caller = lambda f: f()
> File "/home/limon/medios/applications/webmedios/controllers/ws.py", line 
> 13, in call
> if not request.vars.token or not db(db.sitios.token == request.vars.token
> ).count():
> File "/home/limon/medios/gluon/globals.py", line 325, in vars
> self.parse_all_vars()
> File "/home/limon/medios/gluon/globals.py", line 296, in parse_all_vars
>
> for key, value in iteritems(self.post_vars):
> File "/home/limon/medios/gluon/globals.py", line 317, in post_vars
> self.parse_post_vars()
> File "/home/limon/medios/gluon/globals.py", line 253, in parse_post_vars
> dpost = cgi.FieldStorage(fp=body, environ=env, headers=headers, 
> keep_blank_values=1)
> File "/usr/lib64/python3.6/cgi.py", line 569, in __init__
> self.read_single()
> File "/usr/lib64/python3.6/cgi.py", line 761, in read_single
> self.read_binary()
> File "/usr/lib64/python3.6/cgi.py", line 783, in read_binary
> self.file.write(data)
> TypeError: write() argument must be str, not bytes
>
> I've found that this was related to a python bug 
> <https://bugs.python.org/issue2> that is still opened, but at the 
> same time I've seen that web2py received a workaround fix 
> <https://github.com/web2py/web2py/pull/2309/commits/5490480906cf48360629baa68b55e517ff3621b6>
> . 
> I'm running the last web2py stable version. And just in case I checked the 
> source code and it is running with the workaround fix. But the error still 
> remains.
> What should I try? I'm a bit lost :/
>
> On Monday, October 7, 2019 at 6:08:12 PM UTC-3, Mark wrote:
>
>> I submitted the bug report to the github. 
>>
>> Thank you very much.
>>
>>
>> On Sunday, October 6, 2019 at 5:29:31 PM UTC-4, Dave S wrote:
>>>
>>>
>>>
>>> On Friday, September 27, 2019 at 6:39:00 AM UTC-7, Mark wrote:
>>>>
>>>> I am using either Rocket or Azure, and get the same error.
>>>>
>>>> Yes, there is a ticket, which I didn't realize before:
>>>>
>>>> Traceback (most recent call last):
>>>>   File "R:\web2py\gluon\restricted.py", line 219, in restricted
>>>> exec(ccode, environment)
>>>>   File "R:\web2py\applications\myapp\models\db.py", line 321, in 
>>>> 
>>>> '')
>>>>   File "R:\web2py\gluon\tools.py", line 884, in __init__
>>>> self.request_vars = request and request.vars or current.request.vars
>>>>   File "R:\web2py\gluon\globals.py", line 316, in vars
>>>> self.parse_all_vars()
>>>>   File "R:\web2py\gluon\globals.py", line 287, in parse_all_vars
>>>> for key, value in iteritems(self.post_vars):
>>>>   File "R:\web2py\gluon\globals.py", line 308, in post_vars
>>>> self.parse_post_vars()
>>>>   File "R:\web2py\gluon\globals.py

[web2py] Re: xmlrpc python3 error

2023-03-19 Thread Lisandro
Hey there! 
I'm experiencing the same error after migrating to python3 and web2py 
version 2.23.1-stable+timestamp.2023.01.31.08.01.46
The traceback is the same; the error originates when I try to access 
request.vars:

Traceback (most recent call last):
  File "/home/limon/medios/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "/home/limon/medios/applications/webmedios/controllers/ws.py", line 1423, in <module>
  File "/home/limon/medios/gluon/globals.py", line 430, in
    self._caller = lambda f: f()
  File "/home/limon/medios/applications/webmedios/controllers/ws.py", line 13, in call
    if not request.vars.token or not db(db.sitios.token == request.vars.token).count():
  File "/home/limon/medios/gluon/globals.py", line 325, in vars
    self.parse_all_vars()
  File "/home/limon/medios/gluon/globals.py", line 296, in parse_all_vars
    for key, value in iteritems(self.post_vars):
  File "/home/limon/medios/gluon/globals.py", line 317, in post_vars
    self.parse_post_vars()
  File "/home/limon/medios/gluon/globals.py", line 253, in parse_post_vars
    dpost = cgi.FieldStorage(fp=body, environ=env, headers=headers, keep_blank_values=1)
  File "/usr/lib64/python3.6/cgi.py", line 569, in __init__
    self.read_single()
  File "/usr/lib64/python3.6/cgi.py", line 761, in read_single
    self.read_binary()
  File "/usr/lib64/python3.6/cgi.py", line 783, in read_binary
    self.file.write(data)
TypeError: write() argument must be str, not bytes
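The underlying failure can be reproduced without web2py or cgi: it is a bytes payload being written into a text-mode buffer, which Python refuses:

```python
import io

buf = io.StringIO()          # text-mode buffer, like the tempfile cgi opens
try:
    buf.write(b"raw bytes")  # what read_binary() effectively does here
except TypeError as e:
    error = str(e)           # the same class of error as in the traceback

print(error)
```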

I've found that this was related to a Python bug that is still open, but at 
the same time I've seen that web2py received a workaround fix. 
I'm running the latest stable web2py version, and just in case I checked 
the source code and confirmed it includes the workaround. But the error 
still remains.
What should I try? I'm a bit lost :/

On Monday, October 7, 2019 at 6:08:12 PM UTC-3, Mark wrote:

> I submitted the bug report to the github. 
>
> Thank you very much.
>
>
> On Sunday, October 6, 2019 at 5:29:31 PM UTC-4, Dave S wrote:
>>
>>
>>
>> On Friday, September 27, 2019 at 6:39:00 AM UTC-7, Mark wrote:
>>>
>>> I am using either Rocket or Azure, and get the same error.
>>>
>>> Yes, there is a ticket, which I didn't realize before:
>>>
>>> Traceback (most recent call last):
>>>   File "R:\web2py\gluon\restricted.py", line 219, in restricted
>>> exec(ccode, environment)
>>>   File "R:\web2py\applications\myapp\models\db.py", line 321, in 
>>> '')
>>>   File "R:\web2py\gluon\tools.py", line 884, in __init__
>>> self.request_vars = request and request.vars or current.request.vars
>>>   File "R:\web2py\gluon\globals.py", line 316, in vars
>>> self.parse_all_vars()
>>>   File "R:\web2py\gluon\globals.py", line 287, in parse_all_vars
>>> for key, value in iteritems(self.post_vars):
>>>   File "R:\web2py\gluon\globals.py", line 308, in post_vars
>>> self.parse_post_vars()
>>>   File "R:\web2py\gluon\globals.py", line 244, in parse_post_vars
>>> dpost = cgi.FieldStorage(fp=body, environ=env, keep_blank_values=1)
>>>   File "c:\python37\lib\cgi.py", line 491, in __init__
>>> self.read_single()
>>>   File "c:\python37\lib\cgi.py", line 682, in read_single
>>> self.read_binary()
>>>   File "c:\python37\lib\cgi.py", line 704, in read_binary
>>> self.file.write(data)
>>>   File "c:\python37\lib\tempfile.py", line 481, in func_wrapper
>>> return func(*args, **kwargs)
>>> TypeError: write() argument must be str, not bytes
>>>
>>>
>>> Thanks!
>>>
>>
>> This looks like a place where something got missed in the Py3 work.  I 
>> suspect in cgi.py, maybe because testing used a uwsgi setup.  But I'm not 
>> ready to go into to it at this time.   Try filing a bug report at  <
>> https://github.com/web2py>
>>
>> Also, make sure we know what the front-end and middle-ware parts of the 
>> configuration are.
>>
>> /dps
>>
>>  
>>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/web2py/3fcfb97d-139b-491d-b799-b7e43c8e8500n%40googlegroups.com.


[web2py] Re: Memory leak using Redis for sessions

2022-08-06 Thread Lisandro
It's not a memory leak. You have to run sessions2trash.py periodically, 
even if you're using Redis and setting an expiration time for every 
session:
https://groups.google.com/g/web2py/c/IFjr-VQoyAE/m/VoihkT1NAgAJ

On Monday, July 25, 2022 at 6:23:37 PM UTC-3, Lisandro wrote:

> I'm running web2py in production and I use a Redis server to store a 
> couple of millons sessions, but I'm facing a memory problem that I haven't 
> been able to fix.
>
> I use session_expiry=172800 (two days). The application load is very 
> stable (it handles about 60 requests per second). I thought that after a 
> few weeks running I would know how much memory I need for Redis. However 
> the memory used by Redis keeps increasing indefinitely.
>
> I don't use sessions for anything more than a very small percentage of 
> users that can login and do some administrative tasks. The vast majority of 
> users can't login. 
>
> Just in case, I checked and every key in redis has an expiration time:
> $ redis-cli info keyspace
> # Keyspace
> db0:keys=1549547,expires=1548249,avg_ttl=89380135
>
> I've also checked a few session keys and I saw that a session ocuppies 
> about 250 bytes. However, the memory used by Redis grows slowly and 
> constantly: in a whole year it reached the 24 gigabytes of RAM that the 
> server has, which is insane, right? 
>
> I had Redis configured to limit the amount of RAM it can use:
> maxmemory 20gb
> maxmemory-policy volatile-lru
>
> However, as I commented before, after a whole year it reached that limit, 
> and my app started throwing this error:
>
> Traceback (most recent call last): File "/var/www/medios/gluon/main.py", 
> line 462, in wsgibase session._try_store_in_db(request, response) File 
> "/var/www/medios/gluon/globals.py", line 1226, in _try_store_in_db 
> record_id = table.insert(**dd) File 
> "/var/www/medios/gluon/contrib/redis_session.py", line 138, in insert newid 
> = str(self.db.r_server.incr(self.serial)) File 
> "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", 
> line 651, in incr return self.execute_command('INCRBY', name, amount) File 
> "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", 
> line 394, in execute_command return self.parse_response(connection, 
> command_name, **options) File 
> "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", 
> line 404, in parse_response response = connection.read_response() File 
> "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/connection.py",
>  
> line 316, in read_response raise response ResponseError: OOM command not 
> allowed when used memory > 'maxmemory'.
>
>
> I thought that error was impossible giving that Redis has a maxmemory 
> limit and it is instructed to evict keys when the limit is reached. However 
> I realised that these types of scenarios (lot of keys being written and 
> also lot of keys being deleted) can lead to memory fragmentation. And redis 
> has a defragmentation option, so I made some changes.
>
> I reduced the maxmemory Redis limit and activated the auto defragmentation:
> maxmemory 1gb
> maxmemory-policy volatile-lru
> activedefrag yes
>
> Redis auto defragmentation works like a charm: when it wasn't active I 
> could see that the mem_fragmentation_ratio was slowly increasing. After 
> activating it, it stayed in a stable and optimal value of 1.05.
>
> After a week running (remember all the sessions expire in two days) Redis 
> was using about 600mb of RAM. But the usage kept growing and reached the 
> maxmemory limit a few days later. 
> At that point, I could verify that Redis started evicting keys to make 
> space (that was expected accordingly to the configuration). 
> However a few days later my apps again started to throw the error with the 
> exact same traceback I posted before :/
>
> What could be happening? I'm pretty sure that I don't need more than 1 or 
> 2gb of RAM for handling the sessions with Redis. So why does it crash? 
> Could it be a memory leak in gluon/contrib/redis_session.py adapter?
>
> One thing: I've never run sessions2trash.py
> But if I understand the documentation 
> <http://web2py.com/books/default/chapter/29/13?search=cache#Sessions-in-Redis>
>  
> right, I don't need to run it as I set an expiration time for every session.
>
> Let me know what you think, any help will be much appreciated.
> Thanks!
> Warm regards,
> Lisandro
>



[web2py] Re: Should I run sessions2trash.py even if I use Redis for sessions and set an expiration time?

2022-08-06 Thread Lisandro
I'll try to answer my own question, in case it helps someone else.
Yes, you should run sessions2trash periodically, even if you're using Redis 
and you're setting an expiration time for every session.

As the documentation states: *"... when session_expiry is set [...] you 
should occasionally run sessions2trash.py just to clean the key holding the 
set of all the sessions previously issued..."*

In my deployment I have around 1000 applications running in one web2py 
instance. All of them set session_expiry to 2 days. The load is very 
stable: about 2 million sessions in memory, using about 1gb of memory. 
However, while the number of sessions stays stable over time, memory usage 
increases slowly. I've found that running sessions2trash brings memory 
usage back down to 1gb. And yes, you have to run it once for every 
installed app. I've inspected the source code of sessions2trash.py and I 
can't tell exactly what else it deletes (besides the expired sessions). 

It would be nice to have a script that only takes care of cleaning "the 
other stuff", so it doesn't take too much time to run. Consider that 
sessions2trash.py iterates over all the stored sessions; if you're using 
Redis and setting an expiration time for every session, it doesn't make 
sense to iterate over the whole dataset just to clean "that other stuff".

Hope it helps someone :)
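The "key holding the set of all the sessions" is an index that outlives the expiring session hashes. Stripped of web2py and Redis specifics (all names here are illustrative, not the real key names), the cleanup is just a set difference:

```python
def prune_session_index(index, live_keys):
    """Split the session-id index into ids whose hash still exists
    and ids whose hash already expired (the 'other stuff' to clean)."""
    stale = index - live_keys
    return index & live_keys, stale

index = {"sess:1", "sess:2", "sess:3"}   # index set tracking every issued id
live = {"sess:2"}                        # hashes whose TTL has not yet hit
kept, removed = prune_session_index(index, live)
print(sorted(removed))  # -> ['sess:1', 'sess:3']
```

Against a real Redis this would be SMEMBERS on the index, EXISTS per id, and SREM for the stale ones; the sketch only shows why the index grows even though the sessions themselves expire.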

On Tuesday, July 26, 2022 at 8:40:04 AM UTC-3, Lisandro wrote:

> Hey there! 
> I'm using Redis to store sessions. Every session has an expiration time of 
> two days. Should I still run sessions2trash.py from time to time?
> The answer is not clear to me. The documentation says:
>
>
> *If session_expiry is not set, sessions will be handled as usual, you'd 
> need to cleanup sessions as usual once a while.*
> *However, when session_expiry is set will delete automatically sessions 
> after n seconds (e.g. if set to 3600, session will expire exactly one hour 
> later having been updated the last time), you should occasionally run 
> sessions2trash.py just to clean the key holding the set of all the sessions 
> previously issued.*
>
> If the answer is affirmative:
>  - How frequently should I run sessions2trash.py? 
>  - If my web2py instance runs several applications, lets say, a thousand 
> apps, will I have to run sessions2trash.py a thousand times, one time for 
> each app?
>
> Thanks in advance!
> Warm regards,
> Lisandro 
>



[web2py] Should I run sessions2trash.py even if I use Redis for sessions and set an expiration time?

2022-07-26 Thread Lisandro
Hey there! 
I'm using Redis to store sessions. Every session has an expiration time of 
two days. Should I still run sessions2trash.py from time to time?
The answer is not clear to me. The documentation says:


*If session_expiry is not set, sessions will be handled as usual, you'd 
need to cleanup sessions as usual once a while.*
*However, when session_expiry is set will delete automatically sessions 
after n seconds (e.g. if set to 3600, session will expire exactly one hour 
later having been updated the last time), you should occasionally run 
sessions2trash.py just to clean the key holding the set of all the sessions 
previously issued.*

If the answer is affirmative:
 - How frequently should I run sessions2trash.py? 
 - If my web2py instance runs several applications, lets say, a thousand 
apps, will I have to run sessions2trash.py a thousand times, one time for 
each app?

Thanks in advance!
Warm regards,
Lisandro 



[web2py] Memory leak using Redis for sessions

2022-07-25 Thread Lisandro
I'm running web2py in production and I use a Redis server to store a couple 
of million sessions, but I'm facing a memory problem that I haven't been 
able to fix.

I use session_expiry=172800 (two days). The application load is very stable 
(it handles about 60 requests per second). I thought that after a few weeks 
running I would know how much memory I need for Redis. However the memory 
used by Redis keeps increasing indefinitely.

I don't use sessions for anything more than a very small percentage of 
users that can login and do some administrative tasks. The vast majority of 
users can't login. 

Just in case, I checked and every key in redis has an expiration time:
$ redis-cli info keyspace
# Keyspace
db0:keys=1549547,expires=1548249,avg_ttl=89380135
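That keyspace line can also be checked programmatically; a small assumed helper (stdlib only) parses it and counts the keys that carry no TTL:

```python
def parse_keyspace(line):
    """Parse a redis INFO keyspace line like
    'db0:keys=...,expires=...,avg_ttl=...' into a dict of ints."""
    db, stats = line.split(":", 1)
    fields = dict(part.split("=") for part in stats.split(","))
    return db, {k: int(v) for k, v in fields.items()}

db, stats = parse_keyspace("db0:keys=1549547,expires=1548249,avg_ttl=89380135")
# Keys that will never expire on their own:
print(stats["keys"] - stats["expires"])  # -> 1298
```

So in the numbers above, roughly 1300 keys have no expiration set; those can only be removed by an explicit cleanup.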

I've also checked a few session keys and I saw that a session occupies 
about 250 bytes. However, the memory used by Redis grows slowly and 
constantly: in a whole year it reached the 24 gigabytes of RAM that the 
server has, which is insane, right? 

I had Redis configured to limit the amount of RAM it can use:
maxmemory 20gb
maxmemory-policy volatile-lru

However, as I commented before, after a whole year it reached that limit, 
and my app started throwing this error:

Traceback (most recent call last):
  File "/var/www/medios/gluon/main.py", line 462, in wsgibase
    session._try_store_in_db(request, response)
  File "/var/www/medios/gluon/globals.py", line 1226, in _try_store_in_db
    record_id = table.insert(**dd)
  File "/var/www/medios/gluon/contrib/redis_session.py", line 138, in insert
    newid = str(self.db.r_server.incr(self.serial))
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 651, in incr
    return self.execute_command('INCRBY', name, amount)
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 394, in execute_command
    return self.parse_response(connection, command_name, **options)
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 404, in parse_response
    response = connection.read_response()
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/connection.py", line 316, in read_response
    raise response
ResponseError: OOM command not allowed when used memory > 'maxmemory'.


I thought that error was impossible given that Redis has a maxmemory limit 
and is instructed to evict keys when the limit is reached. However, I 
realised that these types of scenarios (lots of keys being written and also 
lots of keys being deleted) can lead to memory fragmentation. And Redis has 
a defragmentation option, so I made some changes.

I reduced the maxmemory Redis limit and activated the auto defragmentation:
maxmemory 1gb
maxmemory-policy volatile-lru
activedefrag yes

Redis auto defragmentation works like a charm: when it wasn't active I 
could see that the mem_fragmentation_ratio was slowly increasing. After 
activating it, it stayed in a stable and optimal value of 1.05.
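For reference, mem_fragmentation_ratio is just resident memory (RSS) divided by Redis's logical memory usage; a one-line sketch using the two INFO fields (the byte values below are illustrative, not from the post):

```python
def fragmentation_ratio(used_memory, used_memory_rss):
    """mem_fragmentation_ratio as Redis reports it: RSS over logical usage.
    Values near 1.0 mean the allocator is wasting almost nothing."""
    return used_memory_rss / used_memory

ratio = fragmentation_ratio(used_memory=600_000_000, used_memory_rss=630_000_000)
print(round(ratio, 2))  # -> 1.05
```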

After a week running (remember, all the sessions expire in two days) Redis 
was using about 600mb of RAM. But the usage kept growing and reached the 
maxmemory limit a few days later. 
At that point, I could verify that Redis started evicting keys to make 
space (that was expected according to the configuration). 
However, a few days later my apps again started to throw the error with the 
exact same traceback I posted before :/

What could be happening? I'm pretty sure that I don't need more than 1 or 
2gb of RAM for handling the sessions with Redis. So why does it crash? 
Could it be a memory leak in gluon/contrib/redis_session.py adapter?

One thing: I've never run sessions2trash.py
But if I understand the documentation 
<http://web2py.com/books/default/chapter/29/13?search=cache#Sessions-in-Redis> 
right, I don't need to run it as I set an expiration time for every session.

Let me know what you think, any help will be much appreciated.
Thanks!
Warm regards,
Lisandro



[web2py] Issue with redis sessions

2022-07-04 Thread Lisandro
Hey there! 
I'm using web2py in production to serve websites. I have two Redis 
instances: one for cache and one for sessions. Each Redis instance runs in 
a VPS with 24gb of RAM. A few days ago, suddenly, the Redis instance 
rejected all write attempts; this is the traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/main.py", line 462, in wsgibase
    session._try_store_in_db(request, response)
  File "/var/www/medios/gluon/globals.py", line 1226, in _try_store_in_db
    record_id = table.insert(**dd)
  File "/var/www/medios/gluon/contrib/redis_session.py", line 138, in insert
    newid = str(self.db.r_server.incr(self.serial))
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 651, in incr
    return self.execute_command('INCRBY', name, amount)
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 394, in execute_command
    return self.parse_response(connection, command_name, **options)
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/client.py", line 404, in parse_response
    response = connection.read_response()
  File "/var/www/medios/venv_medios/lib/python2.7/site-packages/redis/connection.py", line 316, in read_response
    raise response
ResponseError: OOM command not allowed when used memory > 'maxmemory'.


But it doesn't make sense, as the server has plenty of RAM available, 
checked with command "free -m":
total: 23940
used: 1617
free: 21784
shared: 96
buff/cache: 538
available: 22018


Also, Redis memory usage is at normal levels, checked with command 
"redis-cli info memory":
used_memory_human:1.26G
used_memory_rss_human:1.33G
total_system_memory_human:23.38G
maxmemory_human:20.00G
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.03
mem_fragmentation_ratio:1.05
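Since INFO looked healthy when checked after the fact, the OOM was probably a transient spike in used memory. A small monitor that periodically samples `redis-cli info memory` and logs the headroom could catch it next time; here is a sketch of the parsing half (the field names are real INFO fields, but the sample values below are illustrative, not taken from this server):

```python
def parse_info_memory(info_text):
    """Parse the 'key:value' lines produced by `redis-cli info memory`."""
    fields = {}
    for line in info_text.splitlines():
        if ':' in line and not line.startswith('#'):
            key, _, value = line.partition(':')
            fields[key] = value.strip()
    return fields

# Illustrative sample, roughly matching the human-readable numbers above.
SAMPLE = """\
used_memory:1353711616
maxmemory:21474836480
maxmemory_policy:allkeys-lru
"""

info = parse_info_memory(SAMPLE)
headroom = int(info['maxmemory']) - int(info['used_memory'])
```

With redis-py at hand, the same numbers are available directly from the dict returned by the INFO command, so no text parsing is needed.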


I don't use sessions for anything out of the ordinary. Most requests 
don't modify the session and call session.forget(response). I only use 
sessions to handle a few logged-in users. This is how my app connects to 
session in models/db.py:

sessiondb = RedisSession(
    session_expiry=172800,  # two days
    redis_conn=RConn(
        host=REDIS_SESSION_HOST,
        port=REDIS_SESSION_PORT,
        password=REDIS_SESSION_PASSWORD,
        application=request.application))
session.connect(request, response, db=sessiondb)


I'm using this web2py version:
Version 2.17.1-stable+timestamp.2018.08.06.01.02.56
I know it's old but it's been running so smoothly :) serving ~60 rps.

I use virtualenv and this is what I see when I run 'pip freeze | grep 
"redis"':
redis==2.8.0


What could be the reason for this issue? 
Thanks in advance!
Warm regards,
Lisandro.



[web2py] Re: Redis: what's the proper way to clear all the cache keys of a given app?

2020-02-29 Thread Lisandro
I have a theory about what's going on.

The Redis cache implementation in web2py stores keys in "buckets" (sets), 
and the names of those sets are themselves stored in cache *with no 
expiration time*, according to the code documentation 
<https://github.com/web2py/web2py/blob/master/gluon/contrib/redis_cache.py#L76>: 
"all buckets are indexed in a fixed key that never expires"

From what I can see, this is how web2py stores keys and buckets for a given 
appname:

127.0.0.1:6379> SMEMBERS w2p:appname:___cache_set
 1) "w2p:appname:___cache_set:26319660"
 2) "w2p:appname:___cache_set:26377475"
 3) "w2p:appname:___cache_set:26336873"
 4) "w2p:appname:___cache_set:26318136"


Every key set can be accessed with the same command, for example:

127.0.0.1:6379> SMEMBERS w2p:appname:___cache_set:26319660
 1) "w2p:cache:appname:url_logo_movil"
 2) "w2p:cache:appname:menu1"
 3) "w2p:cache:appname:url_imagen_default"
 4) "w2p:cache:appname:CT"
 5) "w2p:cache:appname:url_imagen_logo_newsletter"



On the other hand:

   - My Redis instance is configured to use no more than 20 GB of RAM, with 
   an eviction policy of "*allkeys-lru*", that is, it evicts the least 
   recently used keys. 
   - The *limit of 20gb has been reached* a few times in the past.


According to the Redis docs 
<https://redis.io/topics/lru-cache#eviction-policies>, one alternative to 
allkeys-lru is "*volatile-lru*", which only evicts keys that *have an 
expiration time*. So I deduce that *my configuration (allkeys-lru) evicts 
keys whether or not they have an expiration time*. And considering that 
web2py stores the bucket names in keys that never expire, *my theory is 
that some of the keys storing the bucket names were evicted*. That would 
explain why my code misses keys that are in fact still cached: the search 
starts from the bucket names.
That's my best guess.

Anyway, I'll try to adapt the code of my app that clears the cache of an 
application: I think I'll go for subprocess with a command like:

$ redis-cli --scan --pattern w2p:cache:appname:* | xargs redis-cli del


Any comment or suggestion will be much appreciated.
Warm regards,
Lisandro.

On Saturday, 29 February 2020 at 20:19:29 (UTC-3), Lisandro wrote:
>
> I'm having some trouble with Redis and I thought some of you could 
> enlighten me :)
>
> I have a web2py instance running with several apps. 
> I have a Redis instance working as a cache, with maxmemory set to 20gb and 
> LRU eviction policy.
> Every application connects to the same Redis instance.
>
> One of those apps is the main one, from which I perform some administrative 
> tasks like:
>  - getting a list of the cache keys of a specific app
>  - clearing all the cache keys of a specific app.
>
> In order to get the list of the keys cached by a specific app I use this 
> custom code:
>
>
> def get_cache_keys(application):
>     import re
>
>     result = []
>     regex = ':*'
>     prefix = 'w2p:%s' % application
>     cache_set = 'w2p:%s:___cache_set' % application
>     r = re.compile(regex)
>     buckets = redis_conn.smembers(cache_set)
>     if buckets:
>         keys = redis_conn.sunion(buckets)
>         for key in keys:
>             if r.match(str(key).replace(prefix, '', 1)):
>                 result.append(key)
>     return result
>
> On the other hand, if I need to clear all the cached keys of a specific 
> app, I get the key list (using the above function) and then iterate over 
> those keys calling redis_conn.delete(key)
> I thought this was ok, but today I found a problem: *a key that was 
> generated by an app isn't returned by the code above*. 
>
> To be sure, I connected to Redis from the terminal and in fact I was able 
> to get the key and its value, but running the code above the key isn't 
> listed. I've checked the TTL of the key and it is still valid. 
> It's not the only case: I've found several cases.
>
> For example, I have a web2py app called "transmedia".
> From redis-cli, I want to know the SETs (buckets) that web2py created to 
> store the keys of that specific app, so I run:
>
> 127.0.0.1:6379> SMEMBERS w2p:transmedia:___cache_set
>  1) "w2p:transmedia:___cache_set:26383112"
>  2) "w2p:transmedia:___cache_set:26384550"
>  3) "w2p:transmedia:___cache_set:26383115"
>  4) "w2p:transmedia:___cache_set:26383117"
>  5) "w2p:transmedia:___cache_set:26383436"
>  6) "w2p:transmedia:___cache_set:26383118"
>  7) "w2p:transmedia:___cache_set:26383113"
>  8) "w2p:transmedia:___cache_set:26383111"

[web2py] Redis: what's the proper way to clear all the cache keys of a given app?

2020-02-29 Thread Lisandro
I'm having some trouble with Redis and I thought some of you could 
enlighten me :)

I have a web2py instance running with several apps. 
I have a Redis instance working as a cache, with maxmemory set to 20gb and 
LRU eviction policy.
Every application connects to the same Redis instance.

One of those apps is the main one, from which I perform some administrative 
tasks like:
 - getting a list of the cache keys of a specific app
 - clearing all the cache keys of a specific app.

In order to get the list of the keys cached by a specific app I use this 
custom code:


def get_cache_keys(application):
    import re

    result = []
    regex = ':*'
    prefix = 'w2p:%s' % application
    cache_set = 'w2p:%s:___cache_set' % application
    r = re.compile(regex)
    buckets = redis_conn.smembers(cache_set)
    if buckets:
        keys = redis_conn.sunion(buckets)
        for key in keys:
            if r.match(str(key).replace(prefix, '', 1)):
                result.append(key)
    return result

On the other hand, if I need to clear all the cached keys of a specific 
app, I get the key list (using the above function) and then iterate over 
those keys calling redis_conn.delete(key)
I thought this was ok, but today I found a problem: *a key that was 
generated by an app isn't returned by the code above*. 

To be sure, I connected to Redis from the terminal and in fact I was able 
to get the key and its value, but running the code above the key isn't 
listed. I've checked the TTL of the key and it is still valid. 
It's not the only case: I've found several cases.

For example, I have a web2py app called "transmedia".
From redis-cli, I want to know the SETs (buckets) that web2py created to 
store the keys of that specific app, so I run:

127.0.0.1:6379> SMEMBERS w2p:transmedia:___cache_set
 1) "w2p:transmedia:___cache_set:26383112"
 2) "w2p:transmedia:___cache_set:26384550"
 3) "w2p:transmedia:___cache_set:26383115"
 4) "w2p:transmedia:___cache_set:26383117"
 5) "w2p:transmedia:___cache_set:26383436"
 6) "w2p:transmedia:___cache_set:26383118"
 7) "w2p:transmedia:___cache_set:26383113"
 8) "w2p:transmedia:___cache_set:26383111"
 9) "w2p:transmedia:___cache_set:26383495"
10) "w2p:transmedia:___cache_set:26383440"
11) "w2p:transmedia:___cache_set:26383170"
12) "w2p:transmedia:___cache_set:26383116"


Then I checked all those SETs to get all the keys stored by the app:

127.0.0.1:6379> SMEMBERS w2p:transmedia:___cache_set:26384550
 1) "w2p:cache:transmedia:url_logo_movil"
 2) "w2p:cache:transmedia:menu1"
 3) "w2p:cache:transmedia:url_imagen_default"
 4) "w2p:cache:transmedia:CT"
 5) "w2p:cache:transmedia:url_imagen_logo_newsletter"
 6) "w2p:cache:transmedia:html_head"
 7) "w2p:cache:transmedia:menu0"
 8) "w2p:cache:transmedia:TEMPLATE"
 9) "w2p:cache:transmedia:url_fondo_personalizado"
10) "w2p:cache:transmedia:url_favicon"
11) "w2p:cache:transmedia:C"
12) "w2p:cache:transmedia:CONFIG"
13) "w2p:cache:transmedia:url_logo_grande"
14) "w2p:cache:transmedia:html_body"
15) "w2p:cache:transmedia:url_logo"

There we can see *15 keys*. 
All the other sets are "(empty list or set)".

But here is the weird part: the application "transmedia" stored a key that 
I don't see in the list: 
*w2p:cache:transmedia:url_imagen_publicidad_newsletter*
Checking from redis-cli I can see that the key is still there and its TTL 
is still valid:

127.0.0.1:6379> GET w2p:cache:transmedia:url_imagen_publicidad_newsletter
"\x80\x02U\x00."
127.0.0.1:6379> TTL w2p:cache:transmedia:url_imagen_publicidad_newsletter
(integer) 18711


I can confirm this discrepancy if I perform a SCAN like this:

$ redis-cli --scan --pattern w2p:cache:transmedia:*
w2p:cache:transmedia:url_logo_movil
w2p:cache:transmedia:url_favicon
w2p:cache:transmedia:url_fondo_personalizado
w2p:cache:transmedia:TEMPLATE
w2p:cache:transmedia:menu1
w2p:cache:transmedia:html_head
w2p:cache:transmedia:url_imagen_logo_newsletter
w2p:cache:transmedia:html_body
w2p:cache:transmedia:url_logo
w2p:cache:transmedia:CT
w2p:cache:transmedia:url_logo_grande
w2p:cache:transmedia:CONFIG
w2p:cache:transmedia:C
w2p:cache:transmedia:url_imagen_publicidad_newsletter
w2p:cache:transmedia:url_imagen_default
w2p:cache:transmedia:menu0


Notice the output now includes the *16 keys*, not 15.
So why isn't the key listed when scanning all the SETS (buckets) that 
web2py created for the app? Is there something wrong with my code?

My main concern is about *clearing all the keys of a given app*. 
Right now I can't trust that my code will delete all the keys, because the 
function that gets the key list doesn't always include all the keys that 
the app stored. 

So another question: *is there a better way to delete all the keys cached 
by an app?* 
I thought I could use redis-cli from the commandline like this:

$ redis-cli --scan --pattern w2p:cache:transmedia:* | xargs redis-cli del


This would work, but in order to run it from within my main app I would 
have to use subprocess.
What do you think?

[web2py] Re: Is it possible to change the auto-generated key used to cache an specific URL?

2019-10-19 Thread Lisandro
I've opened an issue because I think it could be a bug:
https://github.com/web2py/web2py/issues/2266

I'll try to suggest a fix. 

On Friday, 18 October 2019 at 21:16:17 (UTC-3), Lisandro wrote:
>
> When using @cache.action in controller functions, the key used for storing 
> the content is auto-generated based on the request URL. To be more 
> specific, the key is generated based on *current.request.env.path_info* and 
> *current.response.view:*
>
> https://github.com/web2py/web2py/blob/1ce316609a7a70c42dbd586c4a264193608880ba/gluon/cache.py#L614
>
> However, I'm seeing this issue.
> Let's say we have a website that has articles split into several 
> categories. 
> We also have a controller function that exposes the articles given a 
> category ID passed as first argument in the URL:
>
> @cache.action(cache_model=cache.redis, session=False, vars=False, public=True)
> def category():
>     cat = db.categories(request.args(0))
>     articles = cat.articles.select()
>     return response.render(dict(cat=cat, articles=articles))
>
>
> This works as expected, however I've noticed that a different key is 
> generated for these two URLs:
> /default/category/10
> /default/category/10/
>
> Notice one of the URLs has a trailing slash. 
> I'm using Redis for caching, and I've checked the stored keys and they are 
> different. 
> *The issue here is that both URLs produce the exact same content, but the 
> content is cached twice with different keys.*
>
>
> In my case, the problem is even worse, because I add a slug to the URL 
> with the name of the category, like this:
> /default/category/10/technology
>
> In this case, the slug is added just to make the URL prettier, so it 
> doesn't really matter what is provided in the second argument. All these 
> URLs produce the exact same content:
> /default/category/10
> /default/category/10/
> /default/category/10/technology
> /default/category/10/anything-at-all
>
> On a public website, a bot could send all kinds of random requests, 
> causing the server to use a lot of RAM caching several copies of the same 
> content.
>
> So I'm wondering, wouldn't it be nice to be able to specify the key? Or at 
> least be able to say which of the args should be considered when creating 
> the key? 
>



[web2py] Is it possible to change the auto-generated key used to cache an specific URL?

2019-10-18 Thread Lisandro
When using @cache.action in controller functions, the key used for storing 
the content is auto-generated based on the request URL. To be more 
specific, the key is generated based on *current.request.env.path_info* and 
*current.response.view:*
https://github.com/web2py/web2py/blob/1ce316609a7a70c42dbd586c4a264193608880ba/gluon/cache.py#L614

However, I'm seeing this issue.
Let's say we have a website that has articles split into several 
categories. 
We also have a controller function that exposes the articles given a 
category ID passed as first argument in the URL:

@cache.action(cache_model=cache.redis, session=False, vars=False, public=True)
def category():
    cat = db.categories(request.args(0))
    articles = cat.articles.select()
    return response.render(dict(cat=cat, articles=articles))


This works as expected, however I've noticed that a different key is 
generated for these two URLs:
/default/category/10
/default/category/10/

Notice one of the URLs has a trailing slash. 
I'm using Redis for caching, and I've checked the stored keys and they are 
different. 
*The issue here is that both URLs produce the exact same content, but the 
content is cached twice with different keys.*


In my case, the problem is even worse, because I add a slug to the URL with 
the name of the category, like this:
/default/category/10/technology

In this case, the slug is added just to make the URL prettier, so it 
doesn't really matter what is provided in the second argument. All these 
URLs produce the exact same content:
/default/category/10
/default/category/10/
/default/category/10/technology
/default/category/10/anything-at-all

On a public website, a bot could send all kinds of random requests, causing 
the server to use a lot of RAM caching several copies of the same content.

So I'm wondering, wouldn't it be nice to be able to specify the key? Or at 
least be able to say which of the args should be considered when creating 
the key? 
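Until the key can be customized, one workaround is to canonicalize the URL before @cache.action computes its key, e.g. by redirecting any non-canonical request in the controller. The canonicalization itself is a pure function; the rule below (keep controller, function and the first arg, drop the trailing slash and slug) is my assumption about what identifies the content:

```python
def canonical_path(path_info):
    """Collapse /default/category/10/, /default/category/10/technology,
    etc. to /default/category/10 so they share a single cache key."""
    parts = [p for p in path_info.split('/') if p]
    # keep controller, function and the first argument (the category id)
    return '/' + '/'.join(parts[:3])
```

A controller could then compare request.env.path_info with its canonical form and issue a permanent redirect when they differ, so a bot cannot multiply cache entries.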



[web2py] Re: Error trying to go back to python 2 after testing python 3: TypeError: translate() takes exactly one argument (2 given)

2019-10-11 Thread Lisandro
Well, in case someone else is facing the same issue, I'll post how I solved 
it.

First, let me add that I had the same issue using the json library instead 
of pickle: once the file (json or pkl) was written using py3, I was able 
to read it with both py3 and py2, but py3 read it as *str* while py2 read 
it as *unicode*. 
And in the case of py2, unicode wasn't working, so I ended up doing a small 
fix in my routes.py file:

map = pickle.load(open('domains_apps.pkl', 'rb'))

# -- TEMP FIX ---
new_map = {}
for key in list(map):
    new_map[str(key)] = str(map[key])
map = new_map
# ---

routers = dict(
    BASE=dict(
        default_controller='default',
        default_function='index',
        domains=map,
        map_static=True,
        exclusive_domain=True,
    )
)


Basically, I create a new dictionary whose keys and values are explicitly 
converted with str().
It's not an elegant solution, but it seems to work for py2 and py3.
Anyway, I'll only keep the fix until I'm sure I don't need to go back to 
py2.

Thanks for the help!


On Friday, 11 October 2019 at 8:58:40 (UTC-3), Lisandro wrote:
>
> > Also in your json example you are getting unicode both in py2 and py3 
> except py3 does not put the u'' in front of unicode strings because they 
> are default.
>
> I've used type() to inspect the keys and values of the dictionary in py2 
> and py3:
>
> *py2*
> >>> map = pickle.load(open(path, 'rb'))
> >>> first_key = list(map.keys())[0]
> >>> print(type(first_key))
> <type 'unicode'>
> >>> print(type(map[first_key]))
> <type 'unicode'>
>
>
> *py3*
> >>> map = pickle.load(open(path, 'rb'))
> >>> first_key = list(map.keys())[0]
> >>> print(type(first_key))
> <class 'str'>
> >>> print(type(map[first_key]))
> <class 'str'>
>
> Notice in python3 it says "class" not "type" and it is str, while on 
> python2 it says unicode :/
>
> Anyway, I don't mean to bother you with this issue, which is unrelated to 
> web2py. My goal is to be able to read/write the dict to a .pkl file with 
> py2 and py3, so I'll keep testing and I'll post the solution here if I 
> find it :)
>
>
>
> On Friday, 11 October 2019 at 2:52:35 (UTC-3), Massimo Di Pierro wrote:
>>
>> I am puzzled by this too. The error is not in web2py code. The error is 
>> in the string.py module.
>> Also in your json example you are getting unicode both in py2 and py3 
>> except py3 does not put the u'' in front of unicode strings because they 
>> are default.
>>
>> On Thursday, 10 October 2019 18:05:55 UTC-7, Lisandro wrote:
>>>
>>> I've found the issue, it's not web2py related, sorry about that.
>>>
>>> My web2py instance has several applications running, each one is 
>>> attached to a domain.
>> I store the map of domains:apps in a dictionary that I save to a .pkl 
>>> file.
>>> Then my routes.py reads that file and loads the map of domains:apps
>>>
>>> I write the .pkl file like this:
>>>
>>> with open('map.pkl', 'wb') as file:
>>>     pickle.dump(dictionary_map, file, protocol=2)
>>>
>>> Notice I use protocol=2 because I want to be able to read/write the file 
>>> with python 2 and 3.
>>>
>>> In my routes.py I read the file like this:
>>>
>>> map = pickle.load(open('domains_apps.pkl', 'rb'))
>>>
>>> routers = dict(
>>>     BASE=dict(
>>>         default_controller='default',
>>>         default_function='index',
>>>         domains=map,
>>>         map_static=True,
>>>         exclusive_domain=True,
>>>     )
>>> )
>>>
>>>
>>>
>> However, after writing the .pkl file with python 3 and returning to 
>>> python 2, my applications fail with the error reported in my first message. 
>>> The error goes away if I replace the .pkl file with an old backup I had 
>>> made before using python 2.
>>>
>>> I have noticed that once the .pkl file is written with python 3, then 
>>> reading it with python 2 and 3 throws different results:
>>>
>>> *with python 3*:
>>> >>> r = pickle.load(open('domains_apps.pkl', 'rb'))
>>> >>> print(r)
>>> {'prod.com': 'prod', 'test.com': 'test'}
>>>
>>>
>>> *with python 2*:
>>

[web2py] Re: Error trying to go back to python 2 after testing python 3: TypeError: translate() takes exactly one argument (2 given)

2019-10-11 Thread Lisandro
> Also in your json example you are getting unicode both in py2 and py3 
except py3 does not put the u'' in front of unicode strings because they 
are default.

I've used type() to inspect the keys and values of the dictionary in py2 
and py3:

*py2*
>>> map = pickle.load(open(path, 'rb'))
>>> first_key = list(map.keys())[0]
>>> print(type(first_key))
<type 'unicode'>
>>> print(type(map[first_key]))
<type 'unicode'>


*py3*
>>> map = pickle.load(open(path, 'rb'))
>>> first_key = list(map.keys())[0]
>>> print(type(first_key))
<class 'str'>
>>> print(type(map[first_key]))
<class 'str'>

Notice in python3 it says "class" not "type" and it is str, while on 
python2 it says unicode :/

Anyway, I don't mean to bother you with this issue, which is unrelated to 
web2py. My goal is to be able to read/write the dict to a .pkl file with 
py2 and py3, so I'll keep testing and I'll post the solution here if I find 
it :)



On Friday, 11 October 2019 at 2:52:35 (UTC-3), Massimo Di Pierro wrote:
>
> I am puzzled by this too. The error is not in web2py code. The error is in 
> the string.py module.
> Also in your json example you are getting unicode both in py2 and py3 
> except py3 does not put the u'' in front of unicode strings because they 
> are default.
>
> On Thursday, 10 October 2019 18:05:55 UTC-7, Lisandro wrote:
>>
>> I've found the issue, it's not web2py related, sorry about that.
>>
>> My web2py instance has several applications running, each one is attached 
>> to a domain.
>> I store the map of domains:apps in a dictionary that I save to a .pkl 
>> file.
>> Then my routes.py reads that file and loads the map of domains:apps
>>
>> I write the .pkl file like this:
>>
>> with open('map.pkl', 'wb') as file:
>>     pickle.dump(dictionary_map, file, protocol=2)
>>
>> Notice I use protocol=2 because I want to be able to read/write the file 
>> with python 2 and 3.
>>
>> In my routes.py I read the file like this:
>>
>> map = pickle.load(open('domains_apps.pkl', 'rb'))
>>
>> routers = dict(
>>     BASE=dict(
>>         default_controller='default',
>>         default_function='index',
>>         domains=map,
>>         map_static=True,
>>         exclusive_domain=True,
>>     )
>> )
>>
>>
>>
>> However, after writing the .pkl file with python 3 and returning to 
>> python 2, my applications fail with the error reported in my first message. 
>> The error goes away if I replace the .pkl file with an old backup I had 
>> made before using python 2.
>>
>> I have noticed that once the .pkl file is written with python 3, then 
>> reading it with python 2 and 3 throws different results:
>>
>> *with python 3*:
>> >>> r = pickle.load(open('domains_apps.pkl', 'rb'))
>> >>> print(r)
>> {'prod.com': 'prod', 'test.com': 'test'}
>>
>>
>> *with python 2*:
>> >>> r = pickle.load(open('domains_apps.pkl', 'rb'))
>> >>> print(r)
>> {*u*'prod.com': *u*'prod', *u*'test.com': *u*'test'}
>>
>>
>> Notice that in python 2 reading the .pkl file (that was written with 
>> python 3 using protocol=2) returns unicode strings. This doesn't happen in 
>> python 3. But I'm not sure what protocol to use. 
>>
>> I'll do some more tests and I'll post here whatever solution I can find. 
>> Thanks for your time!
>> Regards,
>> Lisandro.
>>
>>
>>
>>
>>
>> On Thursday, 10 October 2019 at 21:21:53 (UTC-3), Dave S wrote:
>>>
>>> Delete all the .pyc files?
>>>
>>>
>>> /dps
>>>
>>



[web2py] Re: Error trying to go back to python 2 after testing python 3: TypeError: translate() takes exactly one argument (2 given)

2019-10-10 Thread Lisandro
I've found the issue, it's not web2py related, sorry about that.

My web2py instance has several applications running, each one is attached 
to a domain.
I store the map of domains:apps in a dictionary that I save to a .pkl 
file.
Then my routes.py reads that file and loads the map of domains:apps

I write the .pkl file like this:

with open('map.pkl', 'wb') as file:
    pickle.dump(dictionary_map, file, protocol=2)

Notice I use protocol=2 because I want to be able to read/write the file 
with python 2 and 3.

In my routes.py I read the file like this:

map = pickle.load(open('domains_apps.pkl', 'rb'))

routers = dict(
    BASE=dict(
        default_controller='default',
        default_function='index',
        domains=map,
        map_static=True,
        exclusive_domain=True,
    )
)



However, after writing the .pkl file with python 3 and returning to python 
2, my applications fail with the error reported in my first message. The 
error goes away if I replace the .pkl file with an old backup I had made 
before using python 2.

I have noticed that once the .pkl file is written with python 3, then 
reading it with python 2 and 3 throws different results:

*with python 3*:
>>> r = pickle.load(open('domains_apps.pkl', 'rb'))
>>> print(r)
{'prod.com': 'prod', 'test.com': 'test'}


*with python 2*:
>>> r = pickle.load(open('domains_apps.pkl', 'rb'))
>>> print(r)
{*u*'prod.com': *u*'prod', *u*'test.com': *u*'test'}


Notice that in python 2 reading the .pkl file (that was written with python 
3 using protocol=2) returns unicode strings. This doesn't happen in python 
3. But I'm not sure what protocol to use. 

I'll do some more tests and I'll post here whatever solution I can find. 
Thanks for your time!
Regards,
Lisandro.





On Thursday, 10 October 2019 at 21:21:53 (UTC-3), Dave S wrote:
>
> Delete all the .pyc files?
>
>
> /dps
>



[web2py] Error trying to go back to python 2 after testing python 3: TypeError: translate() takes exactly one argument (2 given)

2019-10-10 Thread Lisandro
I'm testing my web2py application to see what I need to fix in order to 
make it compatible with python3.
I'm using web2py Version 2.17.1-stable+timestamp.2018.08.06.01.02.56

My application has been running with python2 for a long time.
Yesterday I tried with python3 (setting up a virtual environment) and 
everything went ok.
But now *I can't go back to using python2*; any request to the application 
fails with this error:


TypeError: translate() takes exactly one argument (2 given) 

Traceback (most recent call last):
  File "/home/gonguinguen/medios/gluon/main.py", line 435, in wsgibase
    session.connect(request, response)
  File "/home/gonguinguen/medios/gluon/globals.py", line 996, in connect
    response.cookies[response.session_id_name] = response.session_id
  File "/usr/lib64/python2.7/Cookie.py", line 592, in __setitem__
    self.__set(key, rval, cval)
  File "/usr/lib64/python2.7/Cookie.py", line 585, in __set
    M.set(key, real_value, coded_value)
  File "/usr/lib64/python2.7/Cookie.py", line 459, in set
    if "" != translate(key, idmap, LegalChars):
  File "/usr/lib64/python2.7/string.py", line 493, in translate
    return s.translate(table, deletions)
TypeError: translate() takes exactly one argument (2 given)



I've already flushed redis cache, restarted webserver, cleared browser 
data... 
Any other suggestion?

Thanks in advance
Regards,
Lisandro



[web2py] Can't import gluon.contrib.simplejson after migrating to Python 3

2019-10-09 Thread Lisandro

Hi there! I'm using this version of web2py: Version 
2.17.1-stable+timestamp.2018.08.06.01.02.56
I recently moved to Python 3 and I've found this issue. When I try to do 
this:

from gluon.contrib import simplejson

... I receive this error:

ModuleNotFoundError: No module named 'decoder'


This is the full traceback:

Traceback (most recent call last):
  File "/home/gonguinguen/medios/gluon/custom_import.py", line 98, in custom_importer
    return base_importer(pname, globals, locals, fromlist, level)
ModuleNotFoundError: No module named 'applications.webmedios.modules.decoder'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/gonguinguen/medios/gluon/custom_import.py", line 102, in custom_importer
    return NATIVE_IMPORTER(name, globals, locals, fromlist, level)
ModuleNotFoundError: No module named 'decoder'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/gonguinguen/medios/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "/home/gonguinguen/medios/applications/webmedios/controllers/admin.py", line 696, in <module>
  File "/home/gonguinguen/medios/gluon/globals.py", line 421, in <lambda>
    self._caller = lambda f: f()
  File "/home/gonguinguen/medios/applications/webmedios/controllers/admin.py", line 693, in test
    from gluon.contrib import simplejson
  File "/home/gonguinguen/medios/gluon/custom_import.py", line 111, in custom_importer
    return NATIVE_IMPORTER(name, globals, locals, fromlist, level)
  File "/home/gonguinguen/medios/gluon/contrib/simplejson/__init__.py", line 111, in <module>
    from decoder import JSONDecoder, JSONDecodeError
  File "/home/gonguinguen/medios/gluon/custom_import.py", line 104, in custom_importer
    raise ImportError(e1, import_tb)  # there an import error in the module
ImportError: (ModuleNotFoundError("No module named 
'applications.webmedios.modules.decoder'",), )



Should I install simplejson directly in my virtualenv and avoid using the 
copy bundled with web2py?
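
The traceback shows `from decoder import JSONDecoder, JSONDecodeError` inside gluon/contrib/simplejson, an implicit relative import that Python 3 no longer allows, which is why the importer cannot find the module. Since Python 3 ships a `json` module with the same core API, one hedged workaround is a small shim (the helper names here are illustrative, not part of web2py):

```python
# Prefer the standard-library json; the gluon.contrib fallback (kept as
# an assumption for legacy Python 2 deployments) only triggers where
# json is missing, which never happens on Python 3.
try:
    import json
except ImportError:  # pragma: no cover
    from gluon.contrib import simplejson as json


def to_json(obj):
    """Serialize obj to a JSON string, keeping non-ASCII text readable."""
    return json.dumps(obj, ensure_ascii=False)


def from_json(text):
    """Parse a JSON string back into Python data."""
    return json.loads(text)
```

With this in a module, the controllers never need to touch gluon.contrib.simplejson on Python 3 at all.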

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/web2py/eb8295a9-87e2-4d3f-bb4a-cb17c0808025%40googlegroups.com.


[web2py] Re: Disable the simultaneous session in the expiration period

2019-08-08 Thread Lisandro
In this thread you will find an interesting conversation about preventing 
multiple sessions from the same user:
https://groups.google.com/forum/#!searchin/web2py/multiple$20login%7Csort:date/web2py/Zz6zvYav4Sw/2xHJ8loVBgAJ

I use the approach suggested by Massimo: *"when a user first logs in, store 
a uuid in the session and write it in the database (in a new custom field 
in the auth_user table). When a request arrives if the uuid in the session 
does not match the uuid in the database call auth.logout()"*
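
A minimal, framework-agnostic sketch of that approach (the function names and the dict-based session stand-ins are assumptions; in web2py the `session` object, `db(...).update(session_uuid=...)` on a custom auth_user field, and `auth.logout()` would play these roles):

```python
import uuid


def issue_session_token(session, persist_token):
    """On successful login: create a fresh token, keep it in the session
    and persist it for the user. persist_token stands in for writing the
    value to a custom auth_user field."""
    token = str(uuid.uuid4())
    session['session_uuid'] = token
    persist_token(token)
    return token


def is_session_current(session, stored_token):
    """True while this session still holds the token last persisted for
    the user; False means a newer login superseded it, so the caller
    should log this session out."""
    return session.get('session_uuid') == stored_token
```

Each new login overwrites the stored token, so every older session fails the check on its next request and gets logged out.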

El jueves, 8 de agosto de 2019, 7:31:15 (UTC-3), Naveen Kumar escribió:
>
> i am working on a project where i want to disable simultaneous user login 
> for same user . currently it is there in web2py version 
> 2.16.1-stable+timestamp.2017.11.14.05.54.25)
>
> i also want to delete or expire previous session , when user again login 
> with the same user (of which he login priviously).
>
> please suggest some ideas
>



[web2py] Need to know if web2py is setting cookies properly for this specific case

2019-08-08 Thread Lisandro
I have two applications that share the same model. 
One of the applications runs on top level domain, and the other runs in a 
subdomain:

*test.com*: applications/test
*admin.test.com*: applications/test_admin

The user logs in from admin.test.com and the cookie needs to be valid also 
for test.com (so the user is logged in in both applications).
I use this custom code to login the user:

def login():
    email = request.post_vars.email
    password = request.post_vars.password
    user = auth.login_bare(email, password)
    if user:
        session.auth.expiration = auth.settings.expiration
        return response.json({'success': True})
    return response.json({'success': False})


Additionally, in order to make the session valid for the top level domain 
also, I've added this to models/db.py (remember it is the same model for 
both applications):

sessiondb = RedisSession(redis_conn=redis_conn, session_expiry=36000)
session.connect(request, response, db=sessiondb, masterapp='test')
if response.session_id_name in response.cookies:
    response.cookies[response.session_id_name]['domain'] = 'test.com'


This approach has been working smoothly for long time, and it still does. 
However, *it doesn't work properly on several versions of Safari*. In those 
cases, the login is done properly, but then it would seem that the browser 
can't read the cookie. So the user logs in, it is redirected to the main 
domain, but when he wants to go to the admin application, he is asked to 
login again. 
I've always thought that the problem is within Safari.
But recently I used the Chrome Inspector to inspect cookies and *I've 
noticed some weird stuff going on with cookies*:


*Accessing test.com (being logged) shows these four cookies:*

*Name*=session_id_test
*Value*="154:1ad89acc-1f33-4c9a-805e-6888dcf227d3"
*Domain*=admin.test.com

*Name*=session_id_test
*Value*="154:aab759f5-4738-42e3-978f-05ba4e60c5a4"
*Domain*=.test.com

*Name*=session_id_test
*Value*="153:34738cd8-e451-4f66-a059-3afd0a805afe"
*Domain*=test.com

*Name*=session_id_test_admin
*Value*=127.0.0.1-0ab04b23-f8df-406c-988e-977b6d78b3f7
*Domain*=admin.test.com


*Accessing admin.test.com (being logged) shows these four cookies:*

*Name*=session_id_test
*Value*="154:1ad89acc-1f33-4c9a-805e-6888dcf227d3"
*Domain*=admin.test.com

*Name*=session_id_test
*Value*="154:aab759f5-4738-42e3-978f-05ba4e60c5a4"
*Domain*=.test.com

*Name*=session_id_test
*Value*="153:34738cd8-e451-4f66-a059-3afd0a805afe"
*Domain*=test.com

*Name*=session_id_test_admin
*Value*=127.0.0.1-af3d5aaa-3388-4bf5-8c65-69693f7eed35
*Domain*=admin.test.com



I'm not sure if there should be that many cookies.
I think that these lines from models/db.py could be making that mess:

if response.session_id_name in response.cookies:
    response.cookies[response.session_id_name]['domain'] = 'test.com'


However, I can confirm that this code runs smoothly on all major versions 
of Chrome, Firefox, etc. 
It fails only on Safari (and even there, a few versions work fine).

What do you think?
If my approach isn't right, what should I add to models/db.py to share the 
session for both applications?
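
As a sketch of the cookie-scoping idea, using only the stdlib `SimpleCookie` (the domain value and cookie name mirror the example above): per RFC 6265 a leading dot in the Domain attribute is ignored and any explicit domain covers subdomains, but older Safari releases reportedly treated a dotless domain inconsistently, which matches the symptom described. Forcing one explicit, dotted scope at least stops the three near-duplicate `session_id_test` cookies from accumulating:

```python
from http.cookies import SimpleCookie


def scope_session_cookie(cookies, name, domain='.test.com'):
    """Force a single, explicit scope on the session cookie so every
    response sends one consistent domain instead of accumulating
    near-duplicates (test.com, .test.com, admin.test.com)."""
    if name in cookies:
        cookies[name]['domain'] = domain
        cookies[name]['path'] = '/'
    return cookies


cookies = SimpleCookie()
cookies['session_id_test'] = '154:aab759f5-4738-42e3-978f-05ba4e60c5a4'
scope_session_cookie(cookies, 'session_id_test')
```

In web2py the same idea is the two lines already in models/db.py, with '.test.com' (dotted) instead of 'test.com'; clearing the stale cookies in the browser after the change is probably also needed.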



[web2py] Re: Encoding problem with a web2py application that implements an XMLRPC webservice

2019-07-17 Thread Lisandro
Thank you for your suggestions.
As I'm not uploading text files from the browser, I don't need to 
encode/decode manually; I just need to set the default encoding to UTF-8. 
Considering that, I've gone ahead with Alfonso's suggestion and moved 
sys.setdefaultencoding('utf8') into my routes.py. 
It's working smoothly :)

Thanks again!
Best regards,
Lisandro.

El sábado, 6 de julio de 2019, 13:23:38 (UTC-3), Alfonso Serra escribió:
>
> sys.setdefaultencoding will work throughout the server process's lifetime, so 
> as long as it is alive you won't need to call it several times.
>
> It will probably work if you set that up in routes.py or web2py.py so it's 
> not called per request. 
>
> As long as it fixes your problem I wouldn't care whether it is bad 
> practice. Python 2.7 encodes in ASCII by default, so there is no 
> workaround other than changing the default encoding.
>
> I believe it fixes encoding issues for strings coming from your app, but 
> you will have to encode/decode at some point if you are uploading text files 
> from the browser.
>
>
>



[web2py] Encoding problem with a web2py application that implements an XMLRPC webservice

2019-07-04 Thread Lisandro
I need to understand if this is a bug or if it is the expected behaviour 
and I'm doing something wrong.

I have two web2py applications. One of them implements an XMLRPC 
webservice, and the other one consumes it.

All the controllers (in both applications) have this first line:
# -*- coding: utf-8 -*-


One of the applications connects to the webservice to call a specific 
function, and it needs to pass some string arguments:

def make_the_call(): 
    from xmlrpclib import ServerProxy
    service = ServerProxy(webservice_url)
    data = {
        'title': 'áéíóú',
        'detail': 'Detail with special characters like ñ or Ç'
    }
    service.add_content(data)


The application that implements the webservice:

from gluon.tools import Service

service = Service()


def call():
    return service()


@service.xmlrpc  # expose add_content through the XML-RPC endpoint
def add_content(data):
    db.content.insert(
        title=data.get('title'),
        detail=data.get('detail')
    )
    return {'success': True}


But the sentence service.add_content(data) fails with this error and 
traceback:

Traceback (most recent call last):
  File "/home/gonguinguen/medios/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "/home/gonguinguen/medios/applications/webmedios/controllers/admin.py", line 707, in <module>
  File "/home/gonguinguen/medios/gluon/globals.py", line 421, in <lambda>
    self._caller = lambda f: f()
  File "/home/gonguinguen/medios/applications/webmedios/controllers/admin.py", line 704, in test
    admin_password_sitio='93c824d1-91c4-428f-8542-db5db9d4594b')
  File "/home/gonguinguen/medios/applications/webmedios/controllers/admin.py", line 695, in instalar_demo_contenido
    r = server.add_content(data)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1591, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1306, in single_request
    return self.parse_response(response)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1482, in parse_response
    return u.close()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 794, in close
    raise Fault(**self._stack[0])
Fault: <Fault 1: "'ascii' codec can't encode character u'\xe1' in position 1: ordinal not in range(128)">



To make it work, I need to reload sys and set default encoding in the 
add_content() method of the webservice:

def add_content(data):
    import sys
    reload(sys)
    sys.setdefaultencoding('utf8')
    db.content.insert(
        title=data.get('title'),
        detail=data.get('detail')
    )
    return {'success': True}


After that, it works ok.
But here is the weird part: after one successful call to the webservice 
method, I can remove the lines that reload sys, and it keeps working. But 
if I reload uwsgi, it starts throwing the error again.

Anyway, I've read that reloading sys is not good practice at all, so I'm 
pretty lost. I presumed that I could add "# -*- coding: utf-8 -*-" and 
forget about encoding problems. 

What should I do?
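
One alternative to reload(sys), kept here as a hedged sketch: normalize every incoming value explicitly at the service boundary, so no implicit ASCII conversion is ever triggered. `add_content` below is a simplified stand-in for the real service method (no db access; the helper name is an assumption):

```python
# -*- coding: utf-8 -*-


def to_text(value, encoding='utf-8'):
    """Return value as unicode text, decoding byte strings explicitly
    instead of relying on the interpreter's default encoding."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value


def add_content(data):
    # Simplified stand-in for the service method: decode at the edge,
    # then everything downstream handles unicode only.
    return {
        'title': to_text(data.get('title')),
        'detail': to_text(data.get('detail')),
    }
```

Decoding once at the boundary keeps the rest of the code free of encoding concerns, whatever the transport delivered.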




[web2py] Re: Could this problem in production be related to web2py?

2019-06-07 Thread Lisandro
I'm not exactly sure how many sessions my app is handling, but these numbers 
will give you an idea:

 - My websites receive about 500k visits (sessions) in an average day.
 - The server handles about 2.5 million requests in an average day.
 - I use RedisSession(session_expiry=36000), that is, sessions handled by 
Redis expire after 10 hours.
 - I also use Redis to store in cache the final HTML of public pages for 5 
minutes.
 - My Redis instance uses about 12gb of RAM. 
 - My Redis instance consumes only about 8% of CPU (that is the 8% of one 
single CPU, notice Redis is single-threaded).


When you say "I'd want to ensure disk-persistence for them (but not for 
cached things like search results)", how do you plan to achieve that? I'm 
no expert, but I think the disk-persistence option in Redis is global. If 
you want persistence for sessions but not for other cached things, I think 
you will need two separate Redis instances. 
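
A hedged sketch of that split, as two redis.conf fragments (ports, paths and the memory limit are illustrative assumptions, not recommendations):

```ini
# --- /etc/redis/redis-sessions.conf (sessions: keep snapshots) ---
port 6379
# stock snapshot rules, so sessions survive a restart
save 900 1
save 300 10
save 60 10000

# --- /etc/redis/redis-cache.conf (cache: disposable, no snapshots) ---
port 6380
save ""
maxmemory 2gb
maxmemory-policy allkeys-lru
```

The app would then point RedisSession at 6379 and RedisCache at 6380, giving each workload its own persistence and eviction policy.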


El viernes, 7 de junio de 2019, 7:09:26 (UTC-3), Tim Nyborg escribió:
>
> Thanks for this.  Let me know if you find a resolution to the 'saving to 
> disk' latency issue.  Redis sessions would be an improvement, but I'd want 
> to ensure disk-persistence for them (but not for cached things like search 
> results).  How many sessions are you storing, and how much RAM does it 
> consume?
>
> On Thursday, 6 June 2019 20:33:28 UTC+1, Lisandro wrote:
>>
>> If you're going to add Redis, let me add a couple of comments about my 
>> own experience:
>>
>>  - Using Redis to store sessions (not only to cache) was a huge 
>> improvement in my case. I have public websites, some of them with much 
>> traffic, so my app handles many sessions. I was using the database for 
>> handling sessions, but when I changed to Redis, the performance improvement 
>> was considerable. 
>>
>>  - Do some tests with the argument "with_lock" available in RedisCache 
>> and RedisSessions (from gluon.contrib). In my specific case, using 
>> with_lock=False is better, but of course this depends on each specific 
>> scenario.
>>
>>  - An advise: choose proper values for "maxmemory" and "maxmemory-policy" 
>> options from Redis configuration. The first one sets the max amount of 
>> memory that Redis is allowed to use, and "maxmemory-policy" allows you to 
>> choose how Redis should evict keys when it hits the maxmemory: 
>> https://redis.io/topics/lru-cache. 
>>
>>
>> El jueves, 6 de junio de 2019, 12:15:38 (UTC-3), Tim Nyborg escribió:
>>>
>>> This is really good to know.  I've a similar architecture to you, and am 
>>> planning to add redis to the stack soon.  Knowing about issues to be on the 
>>> lookout for is very helpful.
>>>
>>> On Friday, 24 May 2019 16:26:50 UTC+1, Lisandro wrote:
>>>>
>>>> I've found the root cause of the issue: the guilty was Redis.
>>>>
>>>> This is what was happening: Redis has an option for persistance 
>>>> <https://redis.io/topics/persistence> wich stores the DB to the disk 
>>>> every certain amount of time. The configuration I had was the one that 
>>>> comes by default with Redis, that stores the DB every 15 minutes if at 
>>>> least 1 key changed, every 5 minutes if at least 10 keys changed, and 
>>>> every 60 seconds if at least 10000 keys changed. My Redis instance was 
>>>> saving the DB to the 
>>>> disk every minute, and the saving process was taking about 70 seconds. 
>>>> Apparently, during that time, many of the requests were hanging. What I 
>>>> did 
>>>> was to simply disable the saving process (I can do it in my case because I 
>>>> don't need persistance). 
>>>>
>>>> I'm not sure why this happens. I know that Redis is single-threaded, 
>>>> but its documentation states that many tasks (such as saving the DB) run 
>>>> in 
>>>> a separate thread that Redis creates. So I'm not sure how is that the 
>>>> process of saving DB to the disk is making the other Redis operations 
>>>> hang. 
>>>> But this is what was happening, and I'm able to confirm that, after 
>>>> disabling the DB saving process, my application response times have 
>>>> decreased to expected values, no more timeouts :)
>>>>
>>>> I will continue to investigate this issue with Redis in the proper 
>>>> forum. 
>>>> I hope this helps anyone facing the same issue.
>>>>
>>>> Thanks for the help!
>>>>

[web2py] Re: Could this problem in production be related to web2py?

2019-06-06 Thread Lisandro
If you're going to add Redis, let me add a couple of comments about my own 
experience:

 - Using Redis to store sessions (not only to cache) was a huge improvement 
in my case. I have public websites, some of them with much traffic, so my 
app handles many sessions. I was using the database for handling sessions, 
but when I changed to Redis, the performance improvement was considerable. 

 - Do some tests with the argument "with_lock" available in RedisCache and 
RedisSessions (from gluon.contrib). In my specific case, using 
with_lock=False is better, but of course this depends on each specific 
scenario.

 - A word of advice: choose proper values for the "maxmemory" and 
"maxmemory-policy" options in the Redis configuration. The first sets the 
maximum amount of memory Redis is allowed to use, and "maxmemory-policy" 
lets you choose how Redis evicts keys when it hits that limit: 
https://redis.io/topics/lru-cache. 


El jueves, 6 de junio de 2019, 12:15:38 (UTC-3), Tim Nyborg escribió:
>
> This is really good to know.  I've a similar architecture to you, and am 
> planning to add redis to the stack soon.  Knowing about issues to be on the 
> lookout for is very helpful.
>
> On Friday, 24 May 2019 16:26:50 UTC+1, Lisandro wrote:
>>
>> I've found the root cause of the issue: the guilty was Redis.
>>
>> This is what was happening: Redis has an option for persistance 
>> <https://redis.io/topics/persistence> wich stores the DB to the disk 
>> every certain amount of time. The configuration I had was the one that 
>> comes by default with Redis, that stores the DB every 15 minutes if at 
>> least 1 key changed, every 5 minutes if at least 10 keys changed, and every 
>> 60 seconds if at least 10000 keys changed. My Redis instance was saving the DB to the 
>> disk every minute, and the saving process was taking about 70 seconds. 
>> Apparently, during that time, many of the requests were hanging. What I did 
>> was to simply disable the saving process (I can do it in my case because I 
>> don't need persistance). 
>>
>> I'm not sure why this happens. I know that Redis is single-threaded, but 
>> its documentation states that many tasks (such as saving the DB) run in a 
>> separate thread that Redis creates. So I'm not sure how is that the process 
>> of saving DB to the disk is making the other Redis operations hang. But 
>> this is what was happening, and I'm able to confirm that, after disabling 
>> the DB saving process, my application response times have decreased to 
>> expected values, no more timeouts :)
>>
>> I will continue to investigate this issue with Redis in the proper forum. 
>> I hope this helps anyone facing the same issue.
>>
>> Thanks for the help!
>>
>> El lunes, 13 de mayo de 2019, 13:49:26 (UTC-3), Lisandro escribió:
>>>
>>> After doing a lot of reading about uWSGI, I've discovered that "uWSGI 
>>> cores are not CPU cores" (this was confirmed by unbit developers 
>>> <https://github.com/unbit/uwsgi/issues/233#issuecomment-16456919>, the 
>>> ones that wrote and mantain uWSGI). This makes me think that the issue I'm 
>>> experiencing is due to a misconfiguration of uWSGI. But as I'm a developer 
>>> and not a sysadmin, it's being hard for me to figure out exactly what uWSGI 
>>> options should I tweak. 
>>>
>>> I know this is out of the scope of this group, but I'll post my uWSGI 
>>> app configuration anyway, in case someone still wants to help:
>>>
>>> [uwsgi]
>>> pythonpath = /var/www/medios/
>>> mount = /=wsgihandler:application
>>> master = true
>>> workers = 40
>>> cpu-affinity = 3
>>> lazy-apps = true
>>> harakiri = 60
>>> reload-mercy = 8
>>> max-requests = 4000
>>> no-orphans = true
>>> vacuum = true
>>> buffer-size = 32768
>>> disable-logging = true
>>> ignore-sigpipe = true
>>> ignore-write-errors = true
>>> listen = 65535
>>> disable-write-exception = true
>>>
>>>
>>> Just to remember, this is running on a machine with 16 CPUs.
>>> Maybe I should *enable-threads*, set *processes* options and maybe 
>>> tweak *cpu-affinity. *
>>> My application uses Redis for caching, so I think I can enable threads 
>>> safely. 
>>> What do you think?
>>>
>>>
>>> El jueves, 9 de mayo de 2019, 21:10:57 (UTC-3), Lisandro escribió:
>>>>
>>>> I've checked my app's code once again and I can confirm that it 
>>>> doesn't create threads.

[web2py] How can I check if an application is installed from within routes.py?

2019-06-03 Thread Lisandro
In my production environment, new web2py apps are installed every day. 
In fact, it is the same app that it is installed several times with a 
different name. 
Each installed application corresponds to a website that has its own domain.

In the root folder of web2py, I have a file called "domains_apps" where I 
store which domain corresponds to which application. 
The file looks like this:

domain1.com!app1
domain2.com!app2
domain3.com!app3


Then, in my routes.py file I read the domains_apps file to construct the 
"domains" dictionary that is provided for the routers.
My routes.py file looks like this:

# -*- coding: utf-8 -*-

domains = {}
archivo = open('domains_apps', 'r')
lines = archivo.readlines()
archivo.close()
for line in lines:
    line = line.strip()  # drop the trailing newline so the app name is clean
    if line:
        domain, app = line.split('!')
        if domain and app:
            domains[domain] = app

routers = dict(
  BASE=dict(
default_controller='default',
default_function='index',
domains=domains,
root_static=['robots.txt', 'ads.txt'],
map_static=True,
exclusive_domain=True,
  )
)


This works smoothly. 
Now I'm trying to add a checkpoint: before adding a domain/app pair, I would 
like to check that the application is indeed installed in web2py. *How do I 
do that?*

I've tried checking if the folder exists inside applications folder, but it 
doesn't work (I created an empty dir and it does pass the validation, but 
then web2py throws an error saying application doesn't exist).
I've also tried checking if the folder exists and it has an __init__.py 
file inside it, but it doesn't work (again, I tried creating a folder with 
an __init__.py file inside, it passes validation but web2py throws error 
saying application doesn't exist).

You may wonder why I need to check that in routes.py. 
Well, as applications are installed/removed by an external process, 
something could break during that process. And if for some reason my 
domains_apps file ends up including an application that was removed, my 
uWSGI instance will fail to load the application, and all my requests will 
fail. 

So, considering the routes.py I previously showed, how can I check if an 
application exists? 
Does this approach make any sense?
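
One hedged way to implement that checkpoint: instead of testing for the bare folder or an __init__.py, test for the sub-folders a deployable web2py app actually contains. Whether this matches web2py's own internal notion of "installed" is an assumption, but it does reject the empty directory that fooled the simpler checks:

```python
import os


def app_is_installed(app, apps_root='applications'):
    """True when the app folder exists and looks like a deployable
    web2py application. Requiring a 'controllers' or 'compiled'
    sub-folder is an assumption about what web2py needs; an empty
    directory (even with __init__.py) fails this check."""
    base = os.path.join(apps_root, app)
    if not os.path.isdir(base):
        return False
    return (os.path.isdir(os.path.join(base, 'controllers')) or
            os.path.isdir(os.path.join(base, 'compiled')))
```

In routes.py the dictionary could then be filtered with `domains = {d: a for d, a in domains.items() if app_is_installed(a)}`, so a half-removed app never reaches the router.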





[web2py] Re: Could this problem in production be related to web2py?

2019-05-24 Thread Lisandro
I've found the root cause of the issue: the culprit was Redis.

This is what was happening: Redis has an option for persistence 
<https://redis.io/topics/persistence> which stores the DB to disk every 
certain amount of time. The configuration I had was the one that comes by 
default with Redis, which saves the DB every 15 minutes if at least 1 key 
changed, every 5 minutes if at least 10 keys changed, and every 60 seconds 
if at least 10000 keys changed. My Redis instance was saving the DB to disk 
every minute, and the saving process was taking about 70 seconds. 
Apparently, during that time, many of the requests were hanging. What I did 
was simply disable the saving process (I can do that in my case because I 
don't need persistence). 

I'm not sure why this happens. I know that Redis is single-threaded, but 
its documentation states that many tasks (such as saving the DB) run in a 
separate thread that Redis creates. So I'm not sure how the process of 
saving the DB to disk makes the other Redis operations hang. But this is 
what was happening, and I can confirm that, after disabling the DB saving 
process, my application's response times have decreased to expected values; 
no more timeouts :)
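
For reference, a hedged redis.conf sketch of "disabling the saving process" (only safe when losing the dataset on a restart is acceptable, as in this cache-only setup):

```ini
# Remove or comment out the three stock snapshot rules:
# save 900 1
# save 300 10
# save 60 10000
# and replace them with an empty save directive, which disables
# RDB snapshotting entirely:
save ""
```

The same change can also be applied at runtime, without a restart, via `redis-cli CONFIG SET save ""`.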

I will continue to investigate this issue with Redis in the proper forum. 
I hope this helps anyone facing the same issue.

Thanks for the help!

El lunes, 13 de mayo de 2019, 13:49:26 (UTC-3), Lisandro escribió:
>
> After doing a lot of reading about uWSGI, I've discovered that "uWSGI 
> cores are not CPU cores" (this was confirmed by unbit developers 
> <https://github.com/unbit/uwsgi/issues/233#issuecomment-16456919>, the 
> ones that wrote and mantain uWSGI). This makes me think that the issue I'm 
> experiencing is due to a misconfiguration of uWSGI. But as I'm a developer 
> and not a sysadmin, it's being hard for me to figure out exactly what uWSGI 
> options should I tweak. 
>
> I know this is out of the scope of this group, but I'll post my uWSGI app 
> configuration anyway, in case someone still wants to help:
>
> [uwsgi]
> pythonpath = /var/www/medios/
> mount = /=wsgihandler:application
> master = true
> workers = 40
> cpu-affinity = 3
> lazy-apps = true
> harakiri = 60
> reload-mercy = 8
> max-requests = 4000
> no-orphans = true
> vacuum = true
> buffer-size = 32768
> disable-logging = true
> ignore-sigpipe = true
> ignore-write-errors = true
> listen = 65535
> disable-write-exception = true
>
>
> Just to remember, this is running on a machine with 16 CPUs.
> Maybe I should *enable-threads*, set *processes* options and maybe tweak 
> *cpu-affinity. *
> My application uses Redis for caching, so I think I can enable threads 
> safely. 
> What do you think?
>
>
> El jueves, 9 de mayo de 2019, 21:10:57 (UTC-3), Lisandro escribió:
>>
>> I've checked my app's code once again and I can confirm that it doesn't 
>> create threads. It only uses subprocess.cal() within functions that are 
>> called in the scheduler environment, I understand that's the proper way to 
>> do it because those calls don't run in uwsgi environment.
>>
>> In the other hand, I can't disable the master process, I use "lazy-apps" 
>> and "touch-chain-reload" options of uwsgi in order to achieve graceful 
>> reloading, because acordingly to the documentation about graceful 
>> reloading 
>> <https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html>
>> :
>> *"All of the described techniques assume a modern (>= 1.4) uWSGI release 
>> with the master process enabled."*
>>
>> Graceful reloading allows me to update my app's code and reload uwsgi 
>> workers smoothly, without downtime or errors. What can I do if I can't 
>> disable master process?
>>
>> You mentioned the original problem seems to be a locking problem due to 
>> threads. If my app doesn't open threads, where else could be the cause of 
>> the issue? 
>>
>> The weirdest thing for me is that the timeouts are always on core 0. I 
>> mean, uwsgi runs between 30 and 45 workers over 16 cores, isn't too much of 
>> a coincidence that requests that hang correspond to a few workers always 
>> assigned on core 0?
>>
>>
>> El jueves, 9 de mayo de 2019, 17:10:19 (UTC-3), Leonel Câmara escribió:
>>>
>>> Yes I meant stuff exactly like that.
>>>
>>



[web2py] Re: Could this problem in production be related to web2py?

2019-05-13 Thread Lisandro
After doing a lot of reading about uWSGI, I've discovered that "uWSGI cores 
are not CPU cores" (this was confirmed by the unbit developers 
<https://github.com/unbit/uwsgi/issues/233#issuecomment-16456919>, the ones 
who wrote and maintain uWSGI). This makes me think that the issue I'm 
experiencing is due to a misconfiguration of uWSGI. But as I'm a developer 
and not a sysadmin, it's hard for me to figure out exactly which uWSGI 
options I should tweak. 

I know this is out of the scope of this group, but I'll post my uWSGI app 
configuration anyway, in case someone still wants to help:

[uwsgi]
pythonpath = /var/www/medios/
mount = /=wsgihandler:application
master = true
workers = 40
cpu-affinity = 3
lazy-apps = true
harakiri = 60
reload-mercy = 8
max-requests = 4000
no-orphans = true
vacuum = true
buffer-size = 32768
disable-logging = true
ignore-sigpipe = true
ignore-write-errors = true
listen = 65535
disable-write-exception = true


As a reminder, this is running on a machine with 16 CPUs.
Maybe I should set *enable-threads*, set the *processes* option and maybe 
tweak *cpu-affinity*. 
My application uses Redis for caching, so I think I can enable threads 
safely. 
What do you think?
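
A hedged sketch of what those tweaks could look like in the ini file above; every value is an assumption to be measured under real load, not a recommendation:

```ini
[uwsgi]
master = true
# alias of workers; roughly 2x the 16 CPUs, to be tuned under load
processes = 32
# allow the Python interpreter to start threads (harmless if unused)
enable-threads = true
# bind each worker to 1 core, round-robin across all 16, instead of
# letting them pile up on core 0
cpu-affinity = 1
# the old value of 65535 usually exceeds net.core.somaxconn
listen = 4096
```

With cpu-affinity, uWSGI spreads workers over cores at startup, which directly addresses the "all timeouts on core 0" symptom described in the thread.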


El jueves, 9 de mayo de 2019, 21:10:57 (UTC-3), Lisandro escribió:
>
> I've checked my app's code once again and I can confirm that it doesn't 
> create threads. It only uses subprocess.cal() within functions that are 
> called in the scheduler environment, I understand that's the proper way to 
> do it because those calls don't run in uwsgi environment.
>
> In the other hand, I can't disable the master process, I use "lazy-apps" 
> and "touch-chain-reload" options of uwsgi in order to achieve graceful 
> reloading, because acordingly to the documentation about graceful 
> reloading 
> <https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html>
> :
> *"All of the described techniques assume a modern (>= 1.4) uWSGI release 
> with the master process enabled."*
>
> Graceful reloading allows me to update my app's code and reload uwsgi 
> workers smoothly, without downtime or errors. What can I do if I can't 
> disable master process?
>
> You mentioned the original problem seems to be a locking problem due to 
> threads. If my app doesn't open threads, where else could be the cause of 
> the issue? 
>
> The weirdest thing for me is that the timeouts are always on core 0. I 
> mean, uwsgi runs between 30 and 45 workers over 16 cores, isn't too much of 
> a coincidence that requests that hang correspond to a few workers always 
> assigned on core 0?
>
>
> El jueves, 9 de mayo de 2019, 17:10:19 (UTC-3), Leonel Câmara escribió:
>>
>> Yes I meant stuff exactly like that.
>>
>



[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Lisandro
I've checked my app's code once again and I can confirm that it doesn't 
create threads. It only uses subprocess.cal() within functions that are 
called in the scheduler environment, I understand that's the proper way to 
do it because those calls don't run in uwsgi environment.

On the other hand, I can't disable the master process: I use the "lazy-apps" 
and "touch-chain-reload" options of uWSGI to achieve graceful reloading, and 
according to the documentation on graceful reloading 
<https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html>:
*"All of the described techniques assume a modern (>= 1.4) uWSGI release 
with the master process enabled."*

Graceful reloading allows me to update my app's code and reload uwsgi 
workers smoothly, without downtime or errors. What can I do if I can't 
disable master process?
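For reference, the setup described above amounts to a few lines of uwsgi.ini (a sketch; the trigger-file path is a placeholder):

```ini
[uwsgi]
master = true                     ; required by the graceful-reloading techniques
lazy-apps = true                  ; each worker loads the app itself instead of inheriting it from the master
touch-chain-reload = /tmp/reload  ; touching this file reloads workers one at a time
```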

You mentioned the original problem seems to be a locking problem due to 
threads. If my app doesn't open threads, where else could the issue come 
from?

The weirdest thing for me is that the timeouts are always on core 0. I 
mean, uWSGI runs between 30 and 45 workers over 16 cores; isn't it too much 
of a coincidence that the requests that hang always correspond to the few 
workers assigned to core 0?


El jueves, 9 de mayo de 2019, 17:10:19 (UTC-3), Leonel Câmara escribió:
>
> Yes I meant stuff exactly like that.
>

For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Could this problem in production be related to web2py?

2019-05-09 Thread Lisandro
Hi Leonel, thank you very much for your time.

uWSGI docs confirm what you suggest:
*"The emperor should generally not be run with --master, unless master 
features like advanced logging are specifically needed."*

Allow me to ask one last question: what do you mean by "create any thread 
in your application"? Do you mean using subprocess.call() or something like 
that? 
If that's the case, I think I've taken care of it: I only use subprocess 
within the scheduler environment, not in my controller functions. 
Is that what you meant?

El jueves, 9 de mayo de 2019, 15:25:36 (UTC-3), Leonel Câmara escribió:
>
> Seems like a locking problem due to threads. Do you create any thread in 
> your application? If so you need to remove master=true from your uwsgi .ini 
> config.
>

For more options, visit https://groups.google.com/d/optout.


[web2py] Could this problem in production be related to web2py?

2019-05-09 Thread Lisandro
I'm running web2py in a production environment that handles about 30 
requests per second, using Nginx + uWSGI.

The server is a Linode VPS with 16 cores. 
uWSGI runs an average of 30 workers (it spawns or recycles workers 
depending on traffic load).
The server has plenty of resources available (CPU usage is always around 
25%, used memory is around 50%).

My web2py application consists of a bunch of news websites (articles and 
blog posts). Most requests are for simple HTML pages, generated in a few 
milliseconds. However, since a couple of months ago, I've noticed that a 
few of those requests take much longer than they should (some of them take 
up to 60 seconds, which is nonsense). It isn't always the same request that 
hangs; requests hang randomly: the same request that takes 300 milliseconds 
to complete can later take up to 50 seconds.

I checked the health of some related services (like postgresql for 
databases, pgbouncer for pooling database connections, redis for cache) and 
there is nothing weird going on there. None of those services is reporting 
any warning or error, they are all running smoothly.

Here is the only interesting thing I noticed: *the requests that hang 
always correspond to uWSGI workers that have been assigned to core 0*. 
uWSGI has a config option called "harakiri", which is basically a timeout: 
if a worker has been working on a request for more than X seconds, the 
worker is killed (recycled). When this happens, uWSGI logs information 
about the event: the worker number, the request that hung, and the core 
number the worker was assigned to. And the problem is *always* with core 0. 
Here are some examples of what I see in the uWSGI log:

Thu May  9 07:37:29 2019 - *** HARAKIRI ON WORKER 2 (pid: 27125, try: 1) ***
Thu May  9 07:37:29 2019 - HARAKIRI !!! worker 2 status !!!
Thu May  9 07:37:29 2019 - HARAKIRI [core 0] 186.111.149.160 - GET 
/necrologicas 
since 1557398188
Thu May  9 07:37:29 2019 - HARAKIRI !!! end of worker 2 status !!!
Thu May  9 07:55:44 2019 - *** HARAKIRI ON WORKER 9 (pid: 18405, try: 1) ***
Thu May  9 07:55:44 2019 - HARAKIRI !!! worker 9 status !!!
Thu May  9 07:55:44 2019 - HARAKIRI [core 0] 186.109.121.239 - GET / since 
1557399283
Thu May  9 07:55:44 2019 - HARAKIRI !!! end of worker 9 status !!!
Thu May  9 09:02:48 2019 - *** HARAKIRI ON WORKER 8 (pid: 3287, try: 1) ***
Thu May  9 09:02:48 2019 - HARAKIRI !!! worker 8 status !!!
Thu May  9 09:02:48 2019 - HARAKIRI [core 0] 66.249.79.48 - GET /noticia/
2557/secuestran-plantas-de-marihuana-y-una-moto since 1557403307
Thu May  9 09:02:48 2019 - HARAKIRI !!! end of worker 8 status !!!
Thu May  9 09:15:00 2019 - *** HARAKIRI ON WORKER 10 (pid: 9931, try: 1) ***
Thu May  9 09:15:00 2019 - HARAKIRI !!! worker 10 status !!!
Thu May  9 09:15:00 2019 - HARAKIRI [core 0] 66.249.65.142 - GET /amp/156013
/prevencion-salud-show-cantando-con-adriana since 1557404039
Thu May  9 09:15:00 2019 - HARAKIRI !!! end of worker 10 status !!!
Thu May  9 09:29:15 2019 - *** HARAKIRI ON WORKER 22 (pid: 14688, try: 1) 
***
Thu May  9 09:29:15 2019 - HARAKIRI !!! worker 22 status !!!
Thu May  9 09:29:15 2019 - HARAKIRI [core 0] 181.95.11.146 - GET /noticia/
73359/santa-fe-un-changarin-ayudo-a-una-mujer-y-ella-con-una-colecta-le-
compro-una-bic since 1557404894
Thu May  9 09:29:15 2019 - HARAKIRI !!! end of worker 22 status !!!
Thu May  9 11:05:20 2019 - *** HARAKIRI ON WORKER 38 (pid: 521, try: 1) ***


Notice *that the problem is always on core 0*. I've searched the entire 
logs and haven't found the problem on any other core; it's always core 0.
I've already asked in the uWSGI forum, but it's unlikely that uWSGI itself 
is the problem (the problem started without any update or change to uWSGI).
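For context, the harakiri behavior described above is enabled with a single option in uwsgi.ini (the 60-second value is a placeholder):

```ini
[uwsgi]
harakiri = 60  ; kill and recycle any worker stuck on a single request for more than 60 seconds
```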

Could this be an issue within web2py? I personally don't think so, but I 
would like to hear some opinions from the experts :)
IMHO the problem could be related to virtualization (remember this is 
running on a Linode VPS).

What do you think?
Any comment or suggestion will be much appreciated.
Thanks!

Best regards,
Lisandro.

For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Is it possible to write a virtual method field that modifies the row?

2019-04-26 Thread Lisandro
Hi Anthony, thank you for your time.
I tried the method you suggested, but here is my problem: after calling the 
method, the row object isn't updated.

row = db.person(1)
row.change_name('Lisandro')
print(row.name)

This doesn't print "Lisandro". 
Instead, it prints the name the person had before the call.
If I want to print the new name, I have to retrieve the row again like this:

row = db.person(1)
row.change_name('Lisandro')
row = db.person(1)
print(row.name)

I was wondering if it was possible to avoid having to retrieve the record 
again.


El viernes, 26 de abril de 2019, 17:16:10 (UTC-3), Anthony escribió:
>
> def change_name(row, name):
> db(db.person.id == row.person.id).update(name=name)
>
> db.define_table('person',
> Field('name'),
> Field.Method('change_name', change_name))
>
> row = db.person(1)
> row.change_name('Lisandro')
>
> Anthony
>
> On Friday, April 26, 2019 at 12:21:28 PM UTC-4, Lisandro wrote:
>>
>> I've been working with Virtual Fields for a while, but now I'm wondering, 
>> how could I write a Virtual Field method that modifies the row itself?
>>
>> I mean, I would like to do something similar that what the 
>> .update_record() method does. When you call row.update_record(), the row 
>> object is updated with the new values. 
>>
>> I've tried returning the row object in the method definition function but 
>> it doesn't work.
>>
>> I'm wondering, is it even possible to implement something like that? Any 
>> comment or suggestion will be much appreciated.
>>
>> Thanks!
>> Warm regards,
>> Lisandro.
>>
>

For more options, visit https://groups.google.com/d/optout.


[web2py] Is it possible to write a virtual method field that modifies the row?

2019-04-26 Thread Lisandro
I've been working with Virtual Fields for a while, but now I'm wondering, 
how could I write a Virtual Field method that modifies the row itself?

I mean, I would like to do something similar that what the .update_record() 
method does. When you call row.update_record(), the row object is updated 
with the new values. 

I've tried returning the row object in the method definition function but 
it doesn't work.

I'm wondering, is it even possible to implement something like that? Any 
comment or suggestion will be much appreciated.

Thanks!
Warm regards,
Lisandro.

For more options, visit https://groups.google.com/d/optout.


[web2py] Re: compile_application() not including symlinked views (while it includes symlinked controllers)

2019-04-04 Thread Lisandro
I don't think so, because the symlink has the same permissions (and the 
same owner) as the other symlinks that are compiled properly. Anyway, I 
tried setting 777 permissions, but it didn't work.

I forgot to mention that *symlinked views are compiled properly when the 
symlink points directly to a file*, for example:
views/test.html --> *symlink to external file*

... is properly compiled to this:
compiled/views.test.html.pyc


*It appears that the problem occurs when the symlink points to a folder* 
which contains the HTML files.

I've taken a look at the source code of compile_views(), but I'm not that 
experienced, so I couldn't say why it doesn't follow symlinked folders :/
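As a hypothetical workaround until the compiler follows symlinked folders, you could replace directory symlinks under views/ with real copies right before compiling the application (a sketch; the function name and paths are my own, not part of web2py):

```python
import os
import shutil

def materialize_symlinked_dirs(views_path):
    """Replace directory symlinks under views/ with real copies,
    so the view compiler sees the .html files inside them."""
    for name in os.listdir(views_path):
        full = os.path.join(views_path, name)
        if os.path.islink(full) and os.path.isdir(full):
            target = os.path.realpath(full)  # resolve the symlink target
            os.unlink(full)                  # drop the symlink itself...
            shutil.copytree(target, full)    # ...and copy the real folder in
```

The obvious drawback is that the copies go stale if the shared views change, so this only makes sense as a step in an automated deploy/compile script.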


El jueves, 4 de abril de 2019, 7:59:49 (UTC-3), Leonel Câmara escribió:
>
> Could it be a permissions problem?
>

For more options, visit https://groups.google.com/d/optout.


[web2py] compile_application() not including symlinked views (while it includes symlinked controllers)

2019-04-03 Thread Lisandro
I have an application that uses symlinks for some of its controllers, 
models and views.
Symlinked controllers and models are compiled properly, but something weird 
happens with symlinked views.

For example, this symlinked controller:
controllers/test.py --> *symlink to external file*

... is properly compiled to this files:
compiled/controllers.test.functionA.pyc
compiled/controllers.test.functionB.pyc


And even the "models" folder, which is totally symlinked:
models/ --> *symlink to external folder*

*... *is properly compiled to this file:
compiled/models.db.pyc


So it appears that the compilation follows symlinked files and folders.
But I have a symlinked folder inside "views" and the compilation process 
appears to ignore it. 

For example, these views:
views/default/index.html
views/global/ --> *symlink to external folder*

... are compiled to only this:
compiled/views.index.html.pyc


Notice it doesn't include any of the .html files that are inside the 
symlinked folder.
Is this a bug or is it the expected result?

For more options, visit https://groups.google.com/d/optout.


Re: [web2py] Error: prepared statement «pg8000_statement_0» already exists

2019-01-25 Thread Lisandro
Hi there! I was finally able to solve the problem by uninstalling 
psycopg2* and reinstalling psycopg2-binary.

Still, I want to comment what happened, in case it helps others.
A package update broke psycopg2 on the server. The server ended up with 
two instances, psycopg2 and psycopg2-binary, and trying to import psycopg2 
from Python returned an error:

>>> import psycopg2
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib64/python2.7/site-packages/psycopg2/__init__.py", line 50, 
in 
from psycopg2._psycopg import ( # noqa
ImportError: /usr/lib64/python2.7/site-packages/psycopg2/_psycopg.so: 
undefined symbol: PQconninfo


From what I've learned here, web2py comes with several database adapters. 
Apparently, for PostgreSQL, web2py tries to use psycopg2, and if it can't, 
it falls back to pg8000. 
And here is the weird part: the pg8000 driver works fine when the 
connection is made directly to the PostgreSQL server. But if the connection 
is made through pgBouncer (a connection pooler for PostgreSQL), then for 
some reason the pgBouncer connections are not reused; instead, they pile up 
very fast. I'm not sure whether this is a problem of the pg8000 adapter 
that comes with web2py or a problem within pgBouncer (one plausible 
explanation: pgBouncer's transaction-pooling mode cannot keep track of 
per-session prepared statements like the ones pg8000 creates).

Still, this makes me wonder: should web2py automatically fall back to 
pg8000 when psycopg2 fails? 
I mean, in this scenario I would have preferred a 500 error; it would have 
been much easier to detect the source of the problem.
Since web2py switched to pg8000 without notice, and this new driver caused 
a problem with pgBouncer, it took me a while to understand why pgBouncer 
was failing so spectacularly. 

Again, I'm not sure if this is a problem of web2py's pg8000 adapter or a 
problem with pgBouncer itself (remember pg8000 works ok connecting directly 
to PostgreSQL).
But still, wouldn't it be nice to be able to tell the DAL: "hey, use only 
this adapter, and fail if you can't import it"?

I've not seen an option for that in DAL's constructor.
Could I just remove the folder gluon/contrib/pg8000/ to be sure my 
application will use only psycopg2 and fail if it can't find it?
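For what it's worth, the web2py book documents a way to pin the driver in the connection URI itself, which may be safer than deleting gluon/contrib/pg8000/ (a sketch; credentials and database name are placeholders):

```python
# Request psycopg2 explicitly in the URI; as far as I know, the DAL then
# errors out if psycopg2 is unavailable instead of falling back to pg8000.
db = DAL('postgres:psycopg2://myuser:mypassword@localhost/mydb')
```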

Thank you very much in advance.
Regards,
Lisandro.

El miércoles, 23 de enero de 2019, 10:56:15 (UTC-3), Massimiliano escribió:
>
> When psycopg is available web2py will use it when is not it use pg8000 
> that was included in web2py
>
> Il giorno mer 23 gen 2019 alle 14:05 Lisandro  > ha scritto:
>
>> Thank you all for that notes.
>>
>> When I run web2py at my server, I see this available drivers: sqlite3, 
>> imaplib, pymysql, pg8000
>> I don't see psycopg2, so I presume it will be available if I uninstall 
>> those two versions and install the psycopg2-binary version.
>>
>> One additional question: which driver is using my app then?
>> I mean, right now my application is connecting directly to PostgreSQL 
>> without problems. Would this mean it is using the pg8000 driver? Would 
>> psycopg2 be available to web2py once I reinstall it (restarting uwsgi)?
>>
>>
>>
>> El miércoles, 23 de enero de 2019, 7:54:37 (UTC-3), Massimiliano escribió:
>>>
>>> Try to uninstall psycopg2-* and reinstall only psycopg2-binary
>>>
>>> On Wed, Jan 23, 2019 at 11:52 AM Massimiliano  wrote:
>>>
>>>> Could be.
>>>>
>>>> When you strart web2py it show database driver available:
>>>> Mine:
>>>> Database drivers available: psycopg2, pymysql, imaplib, sqlite3, 
>>>> pg8000, pyodbc, pymongo 
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Jan 23, 2019 at 10:59 AM Lisandro  
>>>> wrote:
>>>>
>>>>> Thanks Massimiliano.
>>>>>
>>>>> Apparently psycopg2 is already installed (of course it was already 
>>>>> installed, maybe something broke during the packages upgrade).
>>>>> Something weird is that I see psycopg2 installed twice, is this 
>>>>> correct?
>>>>>
>>>>> ~$ pip freeze | grep psycopg2
>>>>> psycopg2==2.7.5
>>>>> psycopg2-binary==2.7.5
>>>>>
>>>>> Could this be the source of the problem?
>>>>> I don't see how. For what I understand, using or not using pgBouncer 
>>>>> in the middle is transparent to the web2py application: it always 
>>>>> connects 
>>>>> in the same way, the application doesn't know if its connecting to 
>>>>> PostgreSQL or pgBouncer. I think that's the whole idea of pgBouncer, to 
>>>

Re: [web2py] Error: prepared statement «pg8000_statement_0» already exists

2019-01-23 Thread Lisandro
Thank you all for that notes.

When I run web2py on my server, I see these available drivers: sqlite3, 
imaplib, pymysql, pg8000.
I don't see psycopg2, so I presume it will become available once I 
uninstall those two versions and install the psycopg2-binary version.

One additional question: which driver is my app using, then?
I mean, right now my application is connecting directly to PostgreSQL 
without problems. Does this mean it is using the pg8000 driver? Will 
psycopg2 be available to web2py once I reinstall it (and restart uWSGI)?



El miércoles, 23 de enero de 2019, 7:54:37 (UTC-3), Massimiliano escribió:
>
> Try to uninstall psycopg2-* and reinstall only psycopg2-binary
>
> On Wed, Jan 23, 2019 at 11:52 AM Massimiliano  > wrote:
>
>> Could be.
>>
>> When you strart web2py it show database driver available:
>> Mine:
>> Database drivers available: psycopg2, pymysql, imaplib, sqlite3, pg8000, 
>> pyodbc, pymongo 
>>
>>
>>
>>
>> On Wed, Jan 23, 2019 at 10:59 AM Lisandro > > wrote:
>>
>>> Thanks Massimiliano.
>>>
>>> Apparently psycopg2 is already installed (of course it was already 
>>> installed, maybe something broke during the packages upgrade).
>>> Something weird is that I see psycopg2 installed twice, is this correct?
>>>
>>> ~$ pip freeze | grep psycopg2
>>> psycopg2==2.7.5
>>> psycopg2-binary==2.7.5
>>>
>>> Could this be the source of the problem?
>>> I don't see how. For what I understand, using or not using pgBouncer in 
>>> the middle is transparent to the web2py application: it always connects in 
>>> the same way, the application doesn't know if its connecting to PostgreSQL 
>>> or pgBouncer. I think that's the whole idea of pgBouncer, to act as a 
>>> middle man, pooling connections, behaving as if the application was 
>>> connected directly to PostgreSQL.
>>>
>>> Any comment or suggestion will be much appreciated.
>>>
>>> El miércoles, 23 de enero de 2019, 6:51:06 (UTC-3), Massimiliano 
>>> escribió:
>>>>
>>>> Have you tried to install psycopg2? Is the standard de facto postgresql 
>>>> driver.
>>>> The pip package should be psycopg2-binary
>>>>
>>>> On Wed, Jan 23, 2019 at 10:39 AM Lisandro  
>>>> wrote:
>>>>
>>>>> Hi there! Yesterday I had a MAJOR downtime and I would need your help 
>>>>> to understand what happened.
>>>>>
>>>>> The team that is in charge of upgrading security packages at my server 
>>>>> (CentOS 7 at Linode) did an update that involved an upgrade to pgBouncer. 
>>>>> Accordingly to what they said, they noticed pgBouncer was throwing errors 
>>>>> after the upgrade, so they downgraded to the previous version that was 
>>>>> installed. But sadly the problem remained. After this upgrade/downgrade 
>>>>> of 
>>>>> pgBouncer, all the attempts of connecting from my web2py app to pgBouncer 
>>>>> fail. 
>>>>>
>>>>> Inside of postgresql.log I can see lot of this:
>>>>> 2019-01-22 14:39:37 -03 ERROR:  prepared statement 
>>>>> «pg8000_statement_0» already exists
>>>>> 2019-01-22 14:39:37 -03 SENTENCIA:  begin transaction
>>>>> 2019-01-22 14:39:38 -03 ERROR:  prepared statement 
>>>>> «pg8000_statement_0» already exists
>>>>> 2019-01-22 14:39:38 -03 SENTENCIA:  begin transaction
>>>>>
>>>>> I've noticed that "pg8000_statement_0" is referenced at line 1894 in 
>>>>> gluon/contrib/pg8000/core.py, but I can't realise if there is something I 
>>>>> could do to avoid the error. 
>>>>> I'm using web2py Version 2.16.1-stable+timestamp.2017.11.14.05.54.25, 
>>>>> and I've noticed that gluon/contrib/pg8000/core.py isn't anymore in 
>>>>> version 
>>>>> 2.17.1.
>>>>>
>>>>> Of course I've tried restarting al the involved services, but nothing 
>>>>> worked. Every time my web2py application tries to connect to the 
>>>>> database, 
>>>>> if pgBouncer is at the middle, the 5 attempts fail and those lines are 
>>>>> printed to the postgresql.log. Right now I've bypassed pgbouncer and my 
>>>>> application is connecting directly to postgresql.
>>>>>
>>>>> Could you put some lights into this? What can I do to avoi

Re: [web2py] Error: prepared statement «pg8000_statement_0» already exists

2019-01-23 Thread Lisandro
Another weird thing I noticed: on my server, if I open a terminal, run 
Python, and try to import psycopg2, I get an error:

>>> import psycopg2
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib64/python2.7/site-packages/psycopg2/__init__.py", line 50, 
in 
from psycopg2._psycopg import ( # noqa
ImportError: /usr/lib64/python2.7/site-packages/psycopg2/_psycopg.so: 
undefined symbol: PQconninfo

Does this mean that psycopg2 is broken? If that's the case, how can my 
application still be running? I'm pretty lost. 


El miércoles, 23 de enero de 2019, 6:58:57 (UTC-3), Lisandro escribió:
>
> Thanks Massimiliano.
>
> Apparently psycopg2 is already installed (of course it was already 
> installed, maybe something broke during the packages upgrade).
> Something weird is that I see psycopg2 installed twice, is this correct?
>
> ~$ pip freeze | grep psycopg2
> psycopg2==2.7.5
> psycopg2-binary==2.7.5
>
> Could this be the source of the problem?
> I don't see how. For what I understand, using or not using pgBouncer in 
> the middle is transparent to the web2py application: it always connects in 
> the same way, the application doesn't know if its connecting to PostgreSQL 
> or pgBouncer. I think that's the whole idea of pgBouncer, to act as a 
> middle man, pooling connections, behaving as if the application was 
> connected directly to PostgreSQL.
>
> Any comment or suggestion will be much appreciated.
>
> El miércoles, 23 de enero de 2019, 6:51:06 (UTC-3), Massimiliano escribió:
>>
>> Have you tried to install psycopg2? Is the standard de facto postgresql 
>> driver.
>> The pip package should be psycopg2-binary
>>
>> On Wed, Jan 23, 2019 at 10:39 AM Lisandro  wrote:
>>
>>> Hi there! Yesterday I had a MAJOR downtime and I would need your help to 
>>> understand what happened.
>>>
>>> The team that is in charge of upgrading security packages at my server 
>>> (CentOS 7 at Linode) did an update that involved an upgrade to pgBouncer. 
>>> Accordingly to what they said, they noticed pgBouncer was throwing errors 
>>> after the upgrade, so they downgraded to the previous version that was 
>>> installed. But sadly the problem remained. After this upgrade/downgrade of 
>>> pgBouncer, all the attempts of connecting from my web2py app to pgBouncer 
>>> fail. 
>>>
>>> Inside of postgresql.log I can see lot of this:
>>> 2019-01-22 14:39:37 -03 ERROR:  prepared statement «pg8000_statement_0» 
>>> already exists
>>> 2019-01-22 14:39:37 -03 SENTENCIA:  begin transaction
>>> 2019-01-22 14:39:38 -03 ERROR:  prepared statement «pg8000_statement_0» 
>>> already exists
>>> 2019-01-22 14:39:38 -03 SENTENCIA:  begin transaction
>>>
>>> I've noticed that "pg8000_statement_0" is referenced at line 1894 in 
>>> gluon/contrib/pg8000/core.py, but I can't realise if there is something I 
>>> could do to avoid the error. 
>>> I'm using web2py Version 2.16.1-stable+timestamp.2017.11.14.05.54.25, 
>>> and I've noticed that gluon/contrib/pg8000/core.py isn't anymore in version 
>>> 2.17.1.
>>>
>>> Of course I've tried restarting al the involved services, but nothing 
>>> worked. Every time my web2py application tries to connect to the database, 
>>> if pgBouncer is at the middle, the 5 attempts fail and those lines are 
>>> printed to the postgresql.log. Right now I've bypassed pgbouncer and my 
>>> application is connecting directly to postgresql.
>>>
>>> Could you put some lights into this? What can I do to avoid that error 
>>> and still connect to pgBouncer with web2py 2.16.1?
>>>
>>> Thank you very much in advance.
>>> Regards, Lisandro.
>>>
>>>
>>
>>
>> -- 
>> Massimiliano
>>
>

For more options, visit https://groups.google.com/d/optout.


Re: [web2py] Error: prepared statement «pg8000_statement_0» already exists

2019-01-23 Thread Lisandro
Thanks Massimiliano.

Apparently psycopg2 is already installed (of course it was already 
installed; maybe something broke during the packages upgrade).
Something weird is that I see psycopg2 installed twice; is this correct?

~$ pip freeze | grep psycopg2
psycopg2==2.7.5
psycopg2-binary==2.7.5

Could this be the source of the problem?
I don't see how. From what I understand, using or not using pgBouncer in 
the middle is transparent to the web2py application: it always connects in 
the same way; the application doesn't know whether it's connecting to 
PostgreSQL or pgBouncer. I think that's the whole idea of pgBouncer: to act 
as a middleman, pooling connections, behaving as if the application were 
connected directly to PostgreSQL.

Any comment or suggestion will be much appreciated.

El miércoles, 23 de enero de 2019, 6:51:06 (UTC-3), Massimiliano escribió:
>
> Have you tried to install psycopg2? Is the standard de facto postgresql 
> driver.
> The pip package should be psycopg2-binary
>
> On Wed, Jan 23, 2019 at 10:39 AM Lisandro  > wrote:
>
>> Hi there! Yesterday I had a MAJOR downtime and I would need your help to 
>> understand what happened.
>>
>> The team that is in charge of upgrading security packages at my server 
>> (CentOS 7 at Linode) did an update that involved an upgrade to pgBouncer. 
>> Accordingly to what they said, they noticed pgBouncer was throwing errors 
>> after the upgrade, so they downgraded to the previous version that was 
>> installed. But sadly the problem remained. After this upgrade/downgrade of 
>> pgBouncer, all the attempts of connecting from my web2py app to pgBouncer 
>> fail. 
>>
>> Inside of postgresql.log I can see lot of this:
>> 2019-01-22 14:39:37 -03 ERROR:  prepared statement «pg8000_statement_0» 
>> already exists
>> 2019-01-22 14:39:37 -03 SENTENCIA:  begin transaction
>> 2019-01-22 14:39:38 -03 ERROR:  prepared statement «pg8000_statement_0» 
>> already exists
>> 2019-01-22 14:39:38 -03 SENTENCIA:  begin transaction
>>
>> I've noticed that "pg8000_statement_0" is referenced at line 1894 in 
>> gluon/contrib/pg8000/core.py, but I can't realise if there is something I 
>> could do to avoid the error. 
>> I'm using web2py Version 2.16.1-stable+timestamp.2017.11.14.05.54.25, and 
>> I've noticed that gluon/contrib/pg8000/core.py isn't anymore in version 
>> 2.17.1.
>>
>> Of course I've tried restarting al the involved services, but nothing 
>> worked. Every time my web2py application tries to connect to the database, 
>> if pgBouncer is at the middle, the 5 attempts fail and those lines are 
>> printed to the postgresql.log. Right now I've bypassed pgbouncer and my 
>> application is connecting directly to postgresql.
>>
>> Could you put some lights into this? What can I do to avoid that error 
>> and still connect to pgBouncer with web2py 2.16.1?
>>
>> Thank you very much in advance.
>> Regards, Lisandro.
>>
>>
>
>
> -- 
> Massimiliano
>

For more options, visit https://groups.google.com/d/optout.


[web2py] Error: prepared statement «pg8000_statement_0» already exists

2019-01-23 Thread Lisandro
Hi there! Yesterday I had a MAJOR downtime and I would need your help to 
understand what happened.

The team in charge of upgrading security packages on my server (CentOS 7 
at Linode) did an update that involved an upgrade to pgBouncer. According 
to what they said, they noticed pgBouncer was throwing errors after the 
upgrade, so they downgraded to the previously installed version. But sadly 
the problem remained: after this upgrade/downgrade of pgBouncer, all 
attempts to connect from my web2py app to pgBouncer fail. 

Inside postgresql.log I can see a lot of this:
2019-01-22 14:39:37 -03 ERROR:  prepared statement «pg8000_statement_0» 
already exists
2019-01-22 14:39:37 -03 SENTENCIA:  begin transaction
2019-01-22 14:39:38 -03 ERROR:  prepared statement «pg8000_statement_0» 
already exists
2019-01-22 14:39:38 -03 SENTENCIA:  begin transaction

I've noticed that "pg8000_statement_0" is referenced at line 1894 of 
gluon/contrib/pg8000/core.py, but I can't tell whether there is something I 
could do to avoid the error. 
I'm using web2py Version 2.16.1-stable+timestamp.2017.11.14.05.54.25, and 
I've noticed that gluon/contrib/pg8000/core.py is no longer present in 
version 2.17.1.

Of course I've tried restarting all the involved services, but nothing 
worked. Every time my web2py application tries to connect to the database 
with pgBouncer in the middle, all five attempts fail and those lines are 
printed to postgresql.log. Right now I've bypassed pgBouncer and my 
application is connecting directly to PostgreSQL.
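Editor's note: this error pattern is typical of protocol-level prepared statements (pg8000 creates them server-side with fixed names like `pg8000_statement_0`) colliding when a pooler in transaction-pooling mode hands the same server connection to different clients. A hedged sketch of a pgbouncer.ini fragment that avoids the collision — illustrative only, not the poster's actual configuration:

```ini
; Illustrative pgbouncer.ini fragment (assumption, not from the thread).
; With pool_mode = transaction, a named prepared statement outlives the
; client that created it and collides with the next client's statement
; of the same name. Session pooling plus DISCARD ALL on release avoids it.
[pgbouncer]
pool_mode = session
server_reset_query = DISCARD ALL
```

Whether session pooling is acceptable depends on the connection budget; the alternative is a driver that does not use named prepared statements.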

Could you shed some light on this? What can I do to avoid that error and 
still connect to pgBouncer with web2py 2.16.1?

Thank you very much in advance.
Regards, Lisandro.



[web2py] Re: Scheduler tasks fail intermittently for no apparent reason (sys.exit(1))

2019-01-16 Thread Lisandro
Thank you all for the help.
I've found an inconsistency in my own code. I have a scheduled task that 
checks all the other scheduler tasks and sends me an email if something 
failed or timed out. The email includes the error traceback, but a bug was 
making the function include an incorrect traceback, so that's where my 
confusion came from. Sorry for the bother; I think it's working as it is 
supposed to.

To answer Dave's question: new apps are installed around 5 times per day. 
Also, once per week a bunch of apps are removed. This process works 
smoothly (I use uwsgi chain reload to avoid any issues). 

Thanks again!


On Wednesday, January 16, 2019 at 5:35:32 (UTC-3), Dave S wrote:
>
>
>
> On Tuesday, January 15, 2019 at 11:31:51 AM UTC-8, Lisandro wrote:
>>
>> Thank you Leonel, that could be the reason, or at least could be related.
>> The "models" folder in those cases wasn't removed or anything like that, 
>> but still I should mention a couple of things about my scenario:
>>
>> * I have several web2py apps installed, serving multiple websites; each 
>> website has two applications, and for every website, the "models" folder is 
>> present in the first application, and it's symlinked in the second one.
>>
>> * Websites are created on demand: a scheduled task installs new 
>> applications, creating the necessary folders, db, etc.
>>
>>
>> The times those tasks failed, they didn't correspond to an application 
>> that was being installed at that time. 
>> I always run the scheduler with -K , and main_app is of course 
>> always installed.
>>
>> I'm not sure how the scheduler works, maybe it needs to read the models 
>> folder in every installed application? 
>>
>>
> Try stopping the scheduler for other apps during an install.  The task 
> doing the install should have all its environment set up [by the scheduler 
> in its app] before the new app folder is created, so that should be 
> alright.  You might want to make sure the installer app doesn't have 
> anything else scheduled too close in time to the installation.
>
> How often are you installing new apps?
>
> /dps
>
>
>  
>
>>
>> On Tuesday, January 15, 2019 at 15:34:53 (UTC-3), Leonel Câmara wrote:
>>>
>>> *Lisandro *that problem is appearing when trying to create an 
>>> environment to execute the task, where it imports the models from the 
>>> application. Is it possible the folder of the application has been removed 
>>> or moved before the scheduler task is run?
>>>
>>> This would explain why it doesn't happen frequently.
>>>
>>



[web2py] Re: Scheduler tasks fail intermittently for no apparent reason (sys.exit(1))

2019-01-15 Thread Lisandro
Thank you Leonel, that could be the reason, or at least could be related.
The "models" folder in those cases wasn't removed or anything like that, 
but still I should mention a couple of things about my scenario:

* I have several web2py apps installed, serving multiple websites; each 
website has two applications, and for every website, the "models" folder is 
present in the first application, and it's symlinked in the second one.

* Websites are created on demand: a scheduled task installs new 
applications, creating the necessary folders, db, etc.


The times those tasks failed, they didn't correspond to an application that 
was being installed at that time. 
I always run the scheduler with -K , and main_app is of course 
always installed.

I'm not sure how the scheduler works; maybe it needs to read the models 
folder of every installed application? 


On Tuesday, January 15, 2019 at 15:34:53 (UTC-3), Leonel Câmara wrote:
>
> *Lisandro *that problem is appearing when trying to create an environment 
> to execute the task, where it imports the models from the application. Is 
> it possible the folder of the application has been removed or moved before 
> the scheduler task is run?
>
> This would explain why it doesn't happen frequently.
>



[web2py] Re: Scheduler tasks fail intermittently for no apparent reason (sys.exit(1))

2019-01-15 Thread Lisandro
That was my first guess, but the task status in those cases was FAILED, not 
TIMEOUT.

I forgot to mention I'm using web2py 
version 2.16.1-stable+timestamp.2017.11.14.05.54.25
I'll keep monitoring and see if I can add any detail.

Thanks!

On Tuesday, January 15, 2019 at 5:24:56 (UTC-3), Niphlod wrote:
>
> Timeout?!



[web2py] Scheduler tasks fail intermittently for no apparent reason (sys.exit(1))

2019-01-14 Thread Lisandro
I've been seeing this for a while now, but I can't figure out why it happens.

I have several tasks (around 150) that run once per day (at different 
hours; some of them may run simultaneously). 
They always run successfully with no problems. However, every once in a 
while, one of them fails with this traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 481, in executor
    extra_request={'is_scheduler': True})
  File "/var/www/medios/gluon/shell.py", line 175, in env
    sys.exit(1)
SystemExit: 1



Other times it fails with this traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 501, in executor
    result = dumps(_function(*args, **vars))
  File "applications/thetimes/compiled/models.db.py", line 517, in newsletter
  File "applications/thetimes/modules/globales.py", line 1008, in enviar_newsletter
    sleep(1+throttle_count*3)
  File "/var/www/medios/gluon/scheduler.py", line 901, in <lambda>
    signal.signal(signal.SIGTERM, lambda signum, stack_frame: sys.exit(1))
SystemExit: 1


It doesn't happen very often, but still I would like to know if there is 
something in my code that I should fix, or if it is something related to the 
scheduler itself.
Any observation or suggestion will be appreciated. Thanks!
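Editor's note on the second traceback: the scheduler installs a SIGTERM handler that calls sys.exit(1), so a task interrupted mid-sleep dies with exactly this SystemExit. A minimal standalone sketch of that mechanism (POSIX only; this is illustrative, not web2py code):

```python
import os
import signal
import sys

# Mimic the scheduler's handler (gluon/scheduler.py, line 901 in the
# traceback above): SIGTERM triggers sys.exit(1).
signal.signal(signal.SIGTERM, lambda signum, stack_frame: sys.exit(1))

def long_running_task():
    try:
        # Simulate the scheduler terminating this worker mid-task.
        os.kill(os.getpid(), signal.SIGTERM)
        return "finished"
    except SystemExit as e:
        # The handler's sys.exit(1) surfaces inside the task frame,
        # which is why the task status ends up FAILED with SystemExit: 1.
        return "SystemExit: %s" % e.code

print(long_running_task())  # → SystemExit: 1
```

So a FAILED task with this traceback usually means the worker was told to terminate (e.g. by the scheduler's heartbeat logic), not that the task's own code raised.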



[web2py] Re: Inexplicable error, any thoughts?

2018-11-30 Thread Lisandro
Thank you for your comments. I checked the locals in the detail of the 
error ticket, and also checked every code/args/vars section, but I didn't 
find anything interesting.

Val's comment made me think again about whether something else could delete 
the record. 
This is a multi-user scenario and, in theory, two users could be trying to 
edit/delete the same content. 
However, the time it takes to run the function is very low, so the only 
scenario where the error could happen is if the second process (the one 
that deletes the record and commits) runs right between the start and end 
of the first process (the "first process" being the function I showed 
earlier).

I mean, if that is what is happening, it must be happening in a very 
specific timeframe. 
Still, I'm skeptical about this explanation, mostly because if that's the 
reason, the error should happen much less frequently than it does.

My application is running compiled, and I can't decompile it in production.
I guess that I will make a fix to my code and check that the record exists 
before doing the insert. Not necessarily an elegant solution, but it will 
avoid the error tickets being created.
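The defensive fix described above can also be done the other way around: attempt the insert and catch the integrity error, which closes the race window entirely instead of narrowing it. A minimal sketch with sqlite3 standing in for PostgreSQL/pyDAL — table and column names are taken from the thread, everything else is illustrative:

```python
import sqlite3

# check-then-insert is racy: another request can delete the parent row
# between the existence check and the insert. Attempting the insert and
# catching the FK violation handles the race no matter when it happens.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE contenido (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE multimedia_contenido ("
             "contenido INTEGER REFERENCES contenido(id))")

def link_media(conn, contenido_id):
    try:
        conn.execute("INSERT INTO multimedia_contenido (contenido) VALUES (?)",
                     (contenido_id,))
        return True
    except sqlite3.IntegrityError:
        # the parent row disappeared (or never existed): no error ticket,
        # just report the failure to the caller
        return False

conn.execute("INSERT INTO contenido (id) VALUES (176336)")
print(link_media(conn, 176336))  # parent exists → True
print(link_media(conn, 999999))  # parent missing → False
```

In pyDAL the equivalent is wrapping the `db.multimedia_contenido.insert(...)` call in a try/except for the adapter's IntegrityError.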

Any other comment is very welcome.

Regards,
Lisandro.


On Friday, November 30, 2018 at 16:26:20 (UTC-3), Dave S wrote:
>
>
>
> On Friday, November 30, 2018 at 10:54:14 AM UTC-8, Val K wrote:
>>
>> It seems that something deletes the record while your controller is 
>> running. Is there another one that could do it?
>
>
> In addition, examine the locals in each frame as shown in the error 
> ticket.  There might be a hint of clue there.
>
> /dps
>  
>



[web2py] Inexplicable error, any thoughts?

2018-11-30 Thread Lisandro
I've been dealing with this issue for some time now, but I can't understand 
how it can even happen. It shouldn't. But it does :P

In the past I reported a very similar problem (probably related), but 
as I was using a very old web2py version, and considering that it seemed 
impossible for the problem to happen the way it was described, I just 
let it go. The thread where we talked about it is this:
https://groups.google.com/forum/#!searchin/web2py/lisandro%7Csort:date/web2py/gdmTww_jrkA/IXzu5Ns3CQAJ

Anyway, now I'm running web2py 2.17.1 and I'm facing a very similar issue. 
In fact, it happens in the same place in the code as the previous issue, so 
I think it's the same issue, but I can't figure out how it can even happen.

This is the traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "applications/informatesalta_panel/compiled/controllers.contenido.editar.py", line 727, in <module>
  File "/var/www/medios/gluon/globals.py", line 419, in <lambda>
    self._caller = lambda f: f()
  File "/var/www/medios/gluon/tools.py", line 3982, in f
    return action(*a, **b)
  File "applications/informatesalta_panel/compiled/controllers.contenido.editar.py", line 338, in editar
  File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 753, in insert
    ret = self._db._adapter.insert(self, row.op_values())
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 486, in insert
    raise e
IntegrityError: inserción o actualización en la tabla «multimedia_contenido»
viola la llave foránea «multimedia_contenido_contenido_fkey»
DETAIL:  La llave (contenido)=(176336) no está presente en la tabla «contenido».



It is in Spanish, but it's simple: a classic "IntegrityError: insert or 
update on table violates foreign key constraint".

I have the table "contenido" (Spanish for "content"), which stores articles 
and news.
I also have another table called "multimedia" which stores media files.
And finally I have the table "multimedia_contenido" that stores the 
relations between the first two tables (which media is assigned to which 
article).

The controller function that allows the user to edit the content and its 
media files is contenido/editar; that's where the error happens.
The first argument of contenido/editar is the record ID of the 
"contenido" table.
This is the (simplified) code of the function:

@auth.requires_login()
def editar():
    contenido = db.contenido[request.args(0)]
    if not contenido:
        redirect(URL('default', 'index'))

    # if a POST is sent, updates the contenido
    if request.env.request_method == 'POST':
        datos = {
            'last_updated': request.now,
            'title': request.vars.title,
            'detail': request.vars.detail}
        contenido.update_record(**datos)

        # in some very weird situations in the past, contenido was None
        # at this point, so I had to add this ugly check:
        if not contenido:
            return response.json({'success': False})

        # runs a custom virtual method that only does a simple db.executesql()
        contenido.actualizar_tsv()

        # stores relations to multimedia_contenido
        contenido.multimedia_contenido.delete()
        for id_multimedia in request.vars.multimedia.split(','):
            if db(db.multimedia.id == id_multimedia).count():
                # this is the line producing the error
                db.multimedia_contenido.insert(
                    contenido=contenido.id,
                    multimedia=id_multimedia)

        return response.json({'success': True})

    return dict(contenido=contenido)



Notice the traceback shows that the error is at the line 
*db.multimedia_contenido.insert(...* and it says that the ID *176336* 
is not present in the table contenido. 
But wait!! The detailed error ticket shows that the URL where the error 
was triggered was in fact */contenido/editar/176336*.
And, if you check the first line of my function, you will notice it gets 
the "contenido" record from the ID passed as the first argument of the 
function.

So, how can this even be possible? 
The POST is made to the URL /contenido/editar/176336.
The first thing the function does is check that the row exists. If it 
doesn't exist, it redirects. So in theory there is no way it gets to the 
end of the function if the content doesn't exist.

But the error ticket shows another thing: it says that the code ran almost 
to the end, when it threw an error because the content no longer existed 
in the database table.

The only

[web2py] Re: Is it possible to share cache between two different web2py applications?

2018-11-26 Thread Lisandro
Thanks Massimo!
I've recently noticed that the fix I did was incomplete.

I've sent a new pull request with the new code:
https://github.com/web2py/web2py/pull/2059

Regards,
Lisandro.

On Sunday, November 18, 2018 at 16:06:37 (UTC-3), Massimo Di Pierro wrote:
>
> approved! :-)
>
> On Wednesday, 14 November 2018 08:02:07 UTC-8, Lisandro wrote:
>>
>> I've created a pull request to achieve this:
>> https://github.com/web2py/web2py/pull/2055
>>
>> If it's not accepted (which is totally ok with me), I would like to know 
>> if I have some other alternative.
>> Thanks!
>>
>> On Monday, January 23, 2017 at 17:29:22 (UTC-3), Lisandro wrote:
>>>
>>> I'm using RedisCache, and I've seen that web2py adds a prefix to all the 
>>> keys I store in the cache.
>>> For example, if I have an application called "master" and I do this:
>>>
>>> config = cache.redis('config', lambda: initialize_config(), time_expire=999)
>>>
>>> ... then the actual key used to store the data is "w2p:master:config"
>>>
>>> But, what about if I have two applications that need to share the config?
>>> How can I tell web2py to use the same cache prefix for two specific 
>>> applications?
>>>
>>>



[web2py] Re: Is it possible to share cache between two different web2py applications?

2018-11-14 Thread Lisandro
I've created a pull request to achieve this:
https://github.com/web2py/web2py/pull/2055

If it's not accepted (which is totally ok with me), I would like to know if 
I have some other alternative.
Thanks!
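To make the prefix behaviour concrete, here is a tiny sketch of the key scheme described in the quoted question below. The `shared_prefix` parameter is purely illustrative of the idea behind the pull request; it is not web2py's actual API:

```python
# web2py's RedisCache stores key "config" for app "master" under
# "w2p:master:config", so two apps never see each other's keys.
# A shared prefix would let both apps resolve to the same Redis key.
def cache_key(application, key, shared_prefix=None):
    # shared_prefix is a hypothetical knob, not a real web2py parameter
    prefix = shared_prefix if shared_prefix is not None else application
    return "w2p:%s:%s" % (prefix, key)

print(cache_key("master", "config"))                         # → w2p:master:config
print(cache_key("other", "config", shared_prefix="master"))  # → w2p:master:config
```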

On Monday, January 23, 2017 at 17:29:22 (UTC-3), Lisandro wrote:
>
> I'm using RedisCache, and I've seen that web2py adds a prefix to all the 
> keys I store in the cache.
> For example, if I have an application called "master" and I do this:
>
> config = cache.redis('config', lambda: initialize_config(), time_expire=999)
>
> ... then the actual key used to store the data is "w2p:master:config"
>
> But, what about if I have two applications that need to share the config?
> How can I tell web2py to use the same cache prefix for two specific 
> applications?
>
>



[web2py] Re: Scheduler task (defined inside a module) won't run new code even after restarting uWSGI

2018-10-12 Thread Lisandro
Sorry, my bad.
I realised that I had to restart scheduler workers, not uWSGI workers.
So I restarted the scheduler workers and it worked.

Sorry for the bother!

On Friday, October 12, 2018 at 12:04:40 (UTC-3), Lisandro wrote:
>
> I'm facing a weird issue: I have a scheduler task that is defined inside a 
> module. At the same time, that function calls another function that is 
> defined in **another** module.
> I had to make some changes to the latter module, so I restarted uWSGI, but 
> the task is still running with the old code :/
>
> This is more or less what I have:
>
> in models/scheduler.py
> def mytask():
>     from general import do_something
>     do_something()
>
>
> in modules/general.py
> def do_something():
>     from particular import do_another_stuff
>     print "hi there I'm doing something"
>     do_another_stuff()
>     return True
>
>
> in modules/particular.py
> def do_another_stuff():
>     print "hi there I'm doing another stuff"
>
>
> I've recently changed the code of modules/particular.py but the changes 
> aren't reflected.
> Notice that I also changed the code of modules/general.py and those 
> changes **are** indeed reflected. 
>
> I can't recall exactly now, but I remember having recently dealt with a 
> similar issue: a scheduled task whose code is defined in a module that in 
> turn calls a function defined in another module. In this scenario, 
> changes to the latter module aren't reflected when the task is run from the 
> scheduler environment. Does it make sense?
>
> I've also tried deleting modules/*.pyc but it didn't work.
> What am I missing?
>
>



[web2py] Scheduler task (defined inside a module) won't run new code even after restarting uWSGI

2018-10-12 Thread Lisandro
I'm facing a weird issue: I have a scheduler task that is defined inside a 
module. In turn, that function calls another function that is defined in 
**another** module.
I had to make some changes to the latter module, so I restarted uWSGI, but 
the task is still running with the old code :/

This is more or less what I have:

in models/scheduler.py
def mytask():
    from general import do_something
    do_something()


in modules/general.py
def do_something():
    from particular import do_another_stuff
    print "hi there I'm doing something"
    do_another_stuff()
    return True


in modules/particular.py
def do_another_stuff():
    print "hi there I'm doing another stuff"


I've recently changed the code of modules/particular.py but the changes 
aren't reflected.
Notice that I also changed the code of modules/general.py and those changes 
**are** indeed reflected. 

I can't recall exactly now, but I remember having recently dealt with a 
similar issue: a scheduled task whose code is defined in a module that in 
turn calls a function defined in another module. In this scenario, 
changes to the latter module aren't reflected when the task is run from the 
scheduler environment. Does it make sense?

I've also tried deleting modules/*.pyc but it didn't work.
What am I missing?
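Editor's note: as the follow-up message confirms, restarting the scheduler workers (not uWSGI) fixed this. The mechanism is Python's per-process module cache: a long-lived worker keeps running the code it imported at startup no matter what changes on disk. A self-contained sketch with a throwaway module (the module name is illustrative):

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway module on disk, import it, change it, and observe
# that the in-process copy only updates after an explicit reload (or a
# process restart, which is what restarting the scheduler workers does).
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
path = os.path.join(tmpdir, "particular_demo.py")

with open(path, "w") as f:
    f.write("def do_another_stuff():\n    return 'old'\n")
import particular_demo
print(particular_demo.do_another_stuff())  # → old

with open(path, "w") as f:
    f.write("def do_another_stuff():\n    return 'brand new'\n")
# Bump the mtime so any cached bytecode is definitely considered stale.
os.utime(path, (os.path.getmtime(path) + 10,) * 2)

print(particular_demo.do_another_stuff())  # still → old (cached in sys.modules)
importlib.reload(particular_demo)
print(particular_demo.do_another_stuff())  # → brand new
```

This also explains why deleting modules/*.pyc did nothing: the stale copy lived in the scheduler process's memory, not on disk.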



[web2py] Re: How many apps can a web2py instance handle?

2018-09-18 Thread Lisandro
Hi Leonel, thank you for your time. I suspected that would be the answer, 
but I still wanted to hear your opinion given your experience.

Yes, it would definitely be better to install the app only once. In fact, 
we have some plans to do that in the future. But it won't be possible for 
now, because of the time and cost it would take for us.

Still, I must say that we have been able to manage the updating process in 
a pretty decent way. We have a main app that manages the 
installation/update/deletion of apps. That main app has its own database 
where it stores information about those installed apps (each one of them is 
a website). The main app has a bunch of tasks that we run through the 
scheduler. This approach gives us some advantages, for example, when we 
have a new version of the app, we first update some of the websites, wait 
some time to test and debug, and then we apply the update to the rest of 
the websites. As we have many many websites, this approach reduces the 
possibility of introducing an error to all the websites at once.

If you are curious, you can check our main app at medios.com.ar
Our service is only available in Spanish-speaking countries, and we have 
been slowly expanding (right now we run about 300 websites).
We owe part of our success to web2py and its great community, so I take the 
opportunity to thank you all once again!

Best regards,
Lisandro.

On Tuesday, September 18, 2018 at 13:22:42 (UTC-3), Leonel Câmara wrote:
>
> No, there's no limit. Although, wouldn't it be better to make a 
> multi-tenant app using common filters instead of replicating the app each 
> time? This seems like a nightmare when you want to update the app you're 
> replicating across all the installations.  
>
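For readers curious about the common-filter approach Leonel mentions: the idea is that one app serves all tenants, and every query is implicitly scoped to the current site. A rough standalone sketch of the concept using sqlite3 — this illustrates the idea, not web2py's actual DAL `common_filter` API:

```python
import sqlite3

# One schema, many tenants: a per-site filter is appended to every query
# so each website only ever sees its own rows. Table/column names are
# illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (site TEXT, title TEXT)")
conn.executemany("INSERT INTO article VALUES (?, ?)",
                 [("siteA", "hello"), ("siteB", "world")])

def site_query(conn, site, where="1=1"):
    # the tenant filter is applied automatically, mimicking a common filter
    sql = "SELECT title FROM article WHERE (%s) AND site = ?" % where
    return [row[0] for row in conn.execute(sql, (site,))]

print(site_query(conn, "siteA"))  # → ['hello']
print(site_query(conn, "siteB"))  # → ['world']
```

The trade-off the thread describes is real: per-site apps allow staged rollouts, while a single multi-tenant app makes updates atomic for everyone.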



[web2py] How many apps can a web2py instance handle?

2018-09-18 Thread Lisandro
ar/www/medios/applications/meganoticias/', 
'/var/www/medios/applications/elecodesunchales/', 
'/var/www/medios/applications/melodijoperez_panel/', 
'/var/www/medios/applications/vallecalchaqui/', 
'/var/www/medios/applications/mundoe/', 
'/var/www/medios/applications/lmdiario_panel/', 
'/var/www/medios/applications/policiales/', 
'/var/www/medios/applications/fmespectaculo/', 
'/var/www/medios/applications/diariocordoba_panel/', 
'/var/www/medios/applications/nexofm/', 
'/var/www/medios/applications/spacionoticias/', 
'/var/www/medios/applications/periodicodelpilar/', 
'/var/www/medios/applications/utrapol/', 
'/var/www/medios/applications/tribunadelsur_panel/', 
'/var/www/medios/applications/surcordobes/', 
'/var/www/medios/applications/tapalquedigital/', 
'/var/www/medios/applications/elurbanodesacarlos_panel/', 
'/var/www/medios/applications/ushuaia24/', 
'/var/www/medios/applications/aquijujuy/', 
'/var/www/medios/applications/ahoracasilda/', 
'/var/www/medios/applications/informenoa/', 
'/var/www/medios/applications/diariovision/', 
'/var/www/medios/applications/elperidiario/', 
'/var/www/medios/applications/todogolftv/', 
'/var/www/medios/applications/plus_panel/', 
'/var/www/medios/applications/diariourbanodigital/', 
'/var/www/medios/applications/patinespaloybocha/', 
'/var/www/medios/applications/r24n_panel/', 
'/var/www/medios/applications/tribunadelsur/', 
'/var/www/medios/applications/nepdiario/', 
'/var/www/medios/applications/informatesalta_panel/', 
'/var/www/medios/applications/tresdigital/', 
'/var/www/medios/applications/elpiranense/', 
'/var/www/medios/applications/elbaston/', 
'/var/www/medios/applications/infoturfperu/', 
'/var/www/medios/applications/redonline_panel/', 
'/var/www/medios/applications/eldiariodeoliva/', 
'/var/www/medios/applications/sicarditv/', 
'/var/www/medios/applications/portalvosrafaela/', 
'/var/www/medios/applications/ultimomomentonoticias/', 
'/var/www/medios/applications/elperiodico/', 
'/var/www/medios/applications/eldespertador/', 
'/var/www/medios/applications/todosaltanoticias/', 
'/var/www/medios/applications/unpuntodvista/', 
'/var/www/medios/applications/lasrosasdigital/', 
'/var/www/medios/applications/laresistencianoticias/', 
'/var/www/medios/applications/cuartopoder/', 
'/var/www/medios/applications/escorpioinfo/', 
'/var/www/medios/applications/poderciudadano/', 
'/var/www/medios/applications/masvoces/', 
'/var/www/medios/applications/lanomina/', 
'/var/www/medios/applications/elurbanodesacarlos/', 
'/var/www/medios/applications/castellanosprueba/', 
'/var/www/medios/applications/redonline/', 
'/var/www/medios/applications/lmdiario/', 
'/var/www/medios/applications/ahoracasilda_panel/', 
'/var/www/medios/applications/castellanosprueba_panel/', 
'/var/www/medios/applications/diariocordoba/', 
'/var/www/medios/applications/assernoticias/', 
'/var/www/medios/applications/lagacetaciudadana/', 
'/var/www/medios/applications/ciudadhuala/', 
'/var/www/medios/applications/lujan365/', 
'/var/www/medios/applications/elinforme/', 
'/var/www/medios/applications/vientostucumanos/', 
'/var/www/medios/applications/argentano/', 
'/var/www/medios/applications/webmedios/', 
'/var/www/medios/applications/diputadosjusticialistas/', 
'/var/www/medios/applications/lanocion/', 
'/var/www/medios/applications/melodijoperez/', 
'/var/www/medios/applications/diarionorteformosa/', 
'/var/www/medios/applications/infocde/', 
'/var/www/medios/applications/eldiariodeoliva_panel/', 
'/var/www/medios/applications/omarmartinez/', 
'/var/www/medios/applications/banderafueguina/'])


The original request was directed to one of those apps, but 
*request.global_settings.app_folders* has a list of the paths of all the 
installed apps in that web2py instance. Something similar happens with 
*request.global_settings.db_sessions*.

So it made me wonder: is there a theoretical limit?
Could the number of installed apps affect performance at a significant 
level?

Thank you very much in advance.
Best regards,
Lisandro.



[web2py] Re: Can't understand this error in gluon/rewrite.py

2018-09-06 Thread Lisandro
I've modified gluon/rewrite.py to log functions and controllers:

if self.args:
    try:
        mylog = open('mylog.txt', 'a')
        mylog.write('%s %s\n' % (self.functions, self.controllers))
        mylog.close()
    except:
        pass
    if self.args[0] in self.functions or self.args[0] in self.controllers or self.args[0] in applications:
        self.omit_function = False


In the generated log I don't see anything weird:

set([]) set(['load', 'default', 'app', 'static'])
set([]) set(['load', 'default', 'app', 'static'])
set([]) set(['load', 'default', 'app', 'static'])
set([]) set(['load', 'default', 'app', 'static'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])
set([]) set(['default', 'cache', 'afip', 'reportes', 'contrataciones', 
'admin', 'paypal', 'errores', 'periodos', 'facturas', 'ws', 'sitios', 
'scheduler', 'static', 'mercadopago', 'estadistica', 'notificaciones'])


There is a lot of output like that. Notice the function list is always 
empty; I'm not sure if that's correct (I would say yes, because everything 
works as expected).
The list of controllers is correct: they correspond to the application 
running in each case.

Because I have many applications installed and running, the log file was 
growing very quickly, so I made a simple modification to the code, to 
log only when the application was one of the ones throwing the error:

if self.args:
    try:
        if self.application in ('pescaregional', 'experienciasgo'):
            mylog = open('mylog.txt', 'a')
            mylog.write('%s %s\n' % (self.functions, self.controllers))
            mylog.close()
    except:
        pass
    if self.args[0] in self.functions or self.args[0] in self.controllers or \
            self.args[0] in applications:
        self.omit_function = False


The log was the same as the one I showed before.
However, I've noticed that *self.application is not set in the scheduler 
environment*: the logging happened when the requests were made through 
http, but nothing was logged when the requests came from the scheduler 
environment.


Anyway, I think this is too much for me. Debugging this is getting hard, as 
the problem happens only in my production environment, and I don't like 
the idea of changing the web2py code of the main instance that is running 
several apps. Even though I use uwsgi graceful reloading and I've run these 
tests very late at night, when activity is low, I'm still afraid of 
breaking something in production.

For now, I'll just apply a simple patch: whenever I use URL() in the 
scheduler environment, I'll check that the args are provided as strings.
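A tiny helper could implement that patch in one place (a sketch; `url_args_as_strings` is a hypothetical name, not part of web2py):

```python
def url_args_as_strings(args):
    """Coerce URL() args to str so the membership tests in
    gluon/rewrite.py ('arg in functions', 'arg in applications')
    never see an int/long left operand."""
    if args is None:
        return None
    if not isinstance(args, (list, tuple)):
        args = [args]
    return [str(a) for a in args]
```

Usage would be, hypothetically, `URL('default', f=f, args=url_args_as_strings([contenido.id, contenido.slug]))`.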
Thanks a lot for your help!!

On Tuesday, September 4, 2018 at 11:43:28 (UTC-3), Leonel Câmara 
wrote:
>
> The problem seems to be in the controllers and not in the application 
> list, since looking at your logs they seem to be fine.
>
> Notice that "load" does not check if the controllers are coming just with 
> "DEFAULT" string.
>
> https://github.com/web2py/web2py/blob/master/gluon/rewrite.py#L283
>
> Can you log the controllers too?
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Can't understand this error in gluon/rewrite.py

2018-09-04 Thread Lisandro
ing in the case where this is the only web2py app (not even the admin 
>> or welcome is installed). In this case the list of applications is probably 
>> a single string instead of a list. 
>>
>> Are you setting routers.BASE.applications somewhere in your routes.py?
>>
>> Otherwise, I think this is definitely a web2py bug where the default 
>> value of routers.BASE.applications which is a string 'ALL' is being passed 
>> without being converted to a list of all applications.
>>
>
> Doesn't look like routers.BASE.applications is set in routes.py, but it 
> appears that somehow in the context of the scheduler, the default value of 
> 'ALL' is not getting converted to a list of apps. It's hard to track down 
> exactly where things are going wrong.
>
> Lisandro, maybe confirm the value of routers.BASE.applications at that 
> point in rewrite.py by printing/logging it.
>
> Anthony
>  
>
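Anthony's suspicion (routers.BASE.applications left as the unconverted string 'ALL') matches the traceback exactly: a membership test against a list of app names tolerates an int, but the same test against a string raises the reported TypeError. A self-contained check with hypothetical values:

```python
apps_as_list = ['welcome', 'admin']   # what the router should hold
apps_as_string = 'ALL'                # the unconverted default Anthony suspects
arg = 123                             # an int URL arg, e.g. contenido.id

# Against a list, the non-string membership test is harmless:
in_list = arg in apps_as_list         # simply False, no exception

# Against a string, the same test raises the reported TypeError:
try:
    arg in apps_as_string
    raised = False
except TypeError:
    raised = True
```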



[web2py] Re: Can't understand this error in gluon/rewrite.py

2018-09-03 Thread Lisandro
This is my routes.py:

# -*- coding: utf-8 -*-

# creates a dictionary that will map each domain with its own app,
# based on the content of a text file called "domains_apps", and also
# a list of all the apps installed
domains = {}
apps = []
_archivo = open('domains_apps', 'r')
_lineas = _archivo.readlines()
_archivo.close()
for _linea in _lineas:
    domain, app = _linea.strip().split('!')
    domains[domain] = app
    if not app in apps:
        apps.append(app)


routers = dict(
    BASE = dict(
        default_controller = 'default',
        default_function = 'index',
        domains = domains,
        root_static = ['robots.txt'],
        map_static = True,
        exclusive_domain = True,
    )
)


routes_onerror = []
for app in apps:
    for code in ['403', '404', '500', '503']:
        routes_onerror.append((r'%s/%s' % (app, code),
                               r'/%s/static/%s.html' % (app, code)))
    routes_onerror.append((r'%s/*' % app, r'/%s/static/500.html' % app))


The file "domains_apps" looks like this:

adn979.com!adn
panel.adn979.com!adn_panel
blogdemedios.com.ar!blogmedios
panel.blogdemedios.com.ar!blogmedios_panel
demo.medios.com.ar!demo
panel.demo.medios.com.ar!demo_panel
diarioprimicia.com.ar!diarioprimicia
panel.diarioprimicia.com.ar!diarioprimicia_panel


Notice each domain has its own app associated. 
This web2py instance is running ~600 apps (that is, 600 copies of the same 
app).

Do you see something odd here?
In the meantime, I'll verify that all the apps referenced in the 
"domains_apps" file are indeed installed apps.
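That verification can be scripted. A sketch, assuming the 'domain!app' format shown above (`missing_apps` is a hypothetical helper; the real call would read 'domains_apps' and list the applications/ folder):

```python
def missing_apps(domain_lines, installed_apps):
    """Return apps referenced in the 'domain!app' mapping that are
    not among the installed application folders."""
    referenced = set()
    for line in domain_lines:
        line = line.strip()
        if line:
            _domain, app = line.split('!')
            referenced.add(app)
    return sorted(referenced - set(installed_apps))

# Hypothetical real usage from the web2py root:
# import os
# missing_apps(open('domains_apps'), os.listdir('applications'))
```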


On Monday, September 3, 2018 at 11:40:22 (UTC-3), Anthony wrote:
>
> What does your routes.py file look like?
>
> On Monday, September 3, 2018 at 10:25:22 AM UTC-4, Lisandro wrote:
>>
>> This problem is getting weirder.
>>
>> I've found that passing integer numbers as args to URL() helper isn't a 
>> problem for web2py. 
>> I could successfully run some examples using integer and long integers as 
>> URL args, and it always works ok. 
>> In fact, as I stated before, my application uses URL in that way since a 
>> long time ago, with no errors.
>> Even more: in the apps where the code fails, it only fails when it is run 
>> from the scheduler, but it works ok if it is run from the controllers.
>> But, just to remember, it doesn't fail in all the apps, only in some of 
>> them. But the apps are all the same, it's the same app installed several 
>> times.
>>
>>
>> A quick summary:
>>
>> This sentence:
>> URL('default', f=f, args=[contenido.id, contenido.slug], extension='', 
>> scheme=True, host=current.CONFIG.dominio)
>>
>>
>>  * from a controller, it always runs ok.
>>  * from the scheduler, it fails in some applications.
>>
>> The error points to gluon/rewrite.py:
>>
>> File "applications/pescaregional/modules/virtual_methods.py", line 248, 
>> in contenido_url return URL(c='default', f=f, args=[contenido.id, 
>> contenido.slug], extension='', scheme=True,host=current.CONFIG.dominio) 
>> if f else None 
>> File "/var/www/medios/gluon/html.py", line 391, in URL args, other, 
>> scheme, host, port, language=language) 
>> File "/var/www/medios/gluon/rewrite.py", line 197, in url_out function, 
>> args, other, scheme, host, port, language) 
>> File "/var/www/medios/gluon/rewrite.py", line 1366, in map_url_out return 
>> map.acf() 
>> File "/var/www/medios/gluon/rewrite.py", line 1292, in acf self.omit_acf
>> () # try to omit a/c/f 
>> File "/var/www/medios/gluon/rewrite.py", line 1241, in omit_acf if self.
>> args[0] in self.functions or self.args[0] in self.controllers or self.
>> args[0] in applications: TypeError: 'in <string>' requires string as left 
>> operand, not long
>>
>>
>> I'm a bit lost.
>> Where else should I look?
>>
>>
>> On Monday, September 3, 2018 at 1:45:49 (UTC-3), Lisandro wrote:
>>>
>>> Thanks for that fast response.
>>> If the cause of the problem is passing "contenido.id" as int, then the 
>>> error is even more weird, because my app uses URL like that in several 
>>> situations, for example:
>>>
>>> URL('contenido', 'editar', args=contenido.id)
>>> URL('categoria', 'editar', args=categoria.id)
>>> URL('default', 'index', args=[categoria.id, page_number])
>>>
>>>
>>> Now that I think it, the 

[web2py] Re: Can't understand this error in gluon/rewrite.py

2018-09-03 Thread Lisandro
This problem is getting weirder.

I've found that passing integers as args to the URL() helper isn't a 
problem for web2py in general: I could successfully run several examples 
using ints and long ints as URL args, and they always worked.
In fact, as I stated before, my application has used URL() that way for a 
long time, with no errors.
Even more: in the apps where the code fails, it only fails when run from 
the scheduler; it works fine when run from the controllers.
And, just to recap, it doesn't fail in all the apps, only in some of 
them, even though the apps are all the same: it's the same app installed 
several times.


A quick summary:

This sentence:
URL('default', f=f, args=[contenido.id, contenido.slug], extension='', 
scheme=True, host=current.CONFIG.dominio)


 * from a controller, it always runs ok.
 * from the scheduler, it fails in some applications.

The error points to gluon/rewrite.py:

File "applications/pescaregional/modules/virtual_methods.py", line 248, in contenido_url
  return URL(c='default', f=f, args=[contenido.id, contenido.slug], extension='', scheme=True, host=current.CONFIG.dominio) if f else None
File "/var/www/medios/gluon/html.py", line 391, in URL
  args, other, scheme, host, port, language=language)
File "/var/www/medios/gluon/rewrite.py", line 197, in url_out
  function, args, other, scheme, host, port, language)
File "/var/www/medios/gluon/rewrite.py", line 1366, in map_url_out
  return map.acf()
File "/var/www/medios/gluon/rewrite.py", line 1292, in acf
  self.omit_acf()  # try to omit a/c/f
File "/var/www/medios/gluon/rewrite.py", line 1241, in omit_acf
  if self.args[0] in self.functions or self.args[0] in self.controllers or self.args[0] in applications:
TypeError: 'in <string>' requires string as left operand, not long


I'm a bit lost.
Where else should I look?


On Monday, September 3, 2018 at 1:45:49 (UTC-3), Lisandro wrote:
>
> Thanks for that fast response.
> If the cause of the problem is passing "contenido.id" as int, then the 
> error is even more weird, because my app uses URL like that in several 
> situations, for example:
>
> URL('contenido', 'editar', args=contenido.id)
> URL('categoria', 'editar', args=categoria.id)
> URL('default', 'index', args=[categoria.id, page_number])
>
>
> Now that I think about it, the error that I reported happens within the 
> scheduler environment.
> Could that difference be the reason? Well, even in that case, it wouldn't 
> explain why it works ok in other apps.
>
>
>
>
> On Monday, September 3, 2018 at 1:25:19 (UTC-3), fiubarc wrote:
>>
>> Hello, 
>>
> I think it's more a matter of the Python language: contenido.id is long 
> type, so self.args[0] in 'any string' raises an exception because, as the 
> message says:
>>
> 'in <string>' requires string as left operand, not long
>>
>> You can do args=['%s' % contenido.id,  or args=[str(contenido.id), 
>>  
>>
> Don't know why it works in other installed apps; it's really weird
>>
>>
>>
>> On Monday, September 3, 2018 at 0:54:32 (UTC-3), Lisandro wrote:
>>>
>>> I have the same app installed several times within a web2py instance 
>>> (running version 2.16.1-stable+timestamp.2017.11.14.05.54.25).
>>>
>>> In some of them, this sentence:
>>>
>>> URL(c='default', f=f, args=[contenido.id, contenido.slug], extension='', 
>>> scheme=True, host=current.CONFIG.dominio)
>>>
>>> ... throws this traceback error:
>>>
>>> Traceback (most recent call last): 
>>> File "/var/www/medios/gluon/scheduler.py", line 501, in executor result 
>>> = dumps(_function(*args, **vars)) 
>>> File "applications/pescaregional/compiled/models.db.py", line 519, in 
>>> newsletter 
>>> File "applications/pescaregional/modules/globales.py", line 938, in 
>>> enviar_newsletter 'url_noticia': noticia.url() + utm_vars, 
>>> File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2407, 
>>> in __call__ return self.method(self.row, *args, **kwargs) 
>>> File "applications/pescaregional/compiled/models.db.py", line 295, in 
>>> File "applications/pescaregional/modules/virtual_methods.py", line 248, 
>>> in contenido_url return URL(c='default', f=f, args=[contenido.id, 
>>> contenido.slug], extension='', scheme=True, host=current.CONFIG.dominio) 
>>> if f else None 
>

[web2py] Re: Can't understand this error in gluon/rewrite.py

2018-09-02 Thread Lisandro
Thanks for that fast response.
If the cause of the problem is passing "contenido.id" as int, then the 
error is even more weird, because my app uses URL like that in several 
situations, for example:

URL('contenido', 'editar', args=contenido.id)
URL('categoria', 'editar', args=categoria.id)
URL('default', 'index', args=[categoria.id, page_number])


Now that I think about it, the error that I reported happens within the scheduler 
environment.
Could that difference be the reason? Well, even in that case, it wouldn't 
explain why it works ok in other apps.




On Monday, September 3, 2018 at 1:25:19 (UTC-3), fiubarc wrote:
>
> Hello, 
>
> I think it's more a matter of the Python language: contenido.id is long type, 
> so self.args[0] in 'any string' raises an exception because, as the message 
> says:
>
> 'in <string>' requires string as left operand, not long
>
> You can do args=['%s' % contenido.id,  or args=[str(contenido.id), 
>  
>
> Don't know why it works in other installed apps; it's really weird
>
>
>
> On Monday, September 3, 2018 at 0:54:32 (UTC-3), Lisandro wrote:
>>
>> I have the same app installed several times within a web2py instance 
>> (running version 2.16.1-stable+timestamp.2017.11.14.05.54.25).
>>
>> In some of them, this sentence:
>>
>> URL(c='default', f=f, args=[contenido.id, contenido.slug], extension='', 
>> scheme=True, host=current.CONFIG.dominio)
>>
>> ... throws this traceback error:
>>
>> Traceback (most recent call last): 
>> File "/var/www/medios/gluon/scheduler.py", line 501, in executor result = 
>> dumps(_function(*args, **vars)) 
>> File "applications/pescaregional/compiled/models.db.py", line 519, in 
>> newsletter 
>> File "applications/pescaregional/modules/globales.py", line 938, in 
>> enviar_newsletter 'url_noticia': noticia.url() + utm_vars, 
>> File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2407, in 
>> __call__ return self.method(self.row, *args, **kwargs) 
>> File "applications/pescaregional/compiled/models.db.py", line 295, in 
>> File "applications/pescaregional/modules/virtual_methods.py", line 248, 
>> in contenido_url return URL(c='default', f=f, args=[contenido.id, 
>> contenido.slug], extension='', scheme=True, host=current.CONFIG.dominio) 
>> if f else None 
>> File "/var/www/medios/gluon/html.py", line 391, in URL args, other, 
>> scheme, host, port, language=language) 
>> File "/var/www/medios/gluon/rewrite.py", line 197, in url_out function, 
>> args, other, scheme, host, port, language) 
>> File "/var/www/medios/gluon/rewrite.py", line 1366, in map_url_out return 
>> map.acf() 
>> File "/var/www/medios/gluon/rewrite.py", line 1292, in acf self.omit_acf
>> () # try to omit a/c/f 
>> File "/var/www/medios/gluon/rewrite.py", line 1241, in omit_acf if self.
>> args[0] in self.functions or self.args[0] in self.controllers or self.
>> args[0] in applications: TypeError: 'in <string>' requires string as left 
>> operand, not long
>>
>>
>>
>> The problem is solved changing the sentence with this:
>>
>> URL(c='default', f=f, args=['%s' % contenido.id, '%s' % contenido.slug], 
>> extension='', scheme=True, host=current.CONFIG.dominio)
>>
>> ... notice that the args are converted to strings.
>>
>> But the weird part is that *the error only happens in some of the 
>> installed apps; and, remember, it's the exact same app installed several 
>> times*.
>>
>> First I thought it had something to do with the values stored at 
>> "contenido", which is a row from a table. But it happens with any row.
>>
>> I'm using PostgreSQL 9.3. I've checked the databases' encoding and they 
>> all have the same one. I also checked the "contenido" table definition in a 
>> couple of dbs (one corresponding to an app that presents the error, and 
>> other that doesn't), and they are exactly the same.
>>
>> What could be the difference that makes gluon/rewrite.py throw the error 
>> in some of the apps?
>>
>



[web2py] Can't understand this error in gluon/rewrite.py

2018-09-02 Thread Lisandro
I have the same app installed several times within a web2py instance 
(running version 2.16.1-stable+timestamp.2017.11.14.05.54.25).

In some of them, this sentence:

URL(c='default', f=f, args=[contenido.id, contenido.slug], extension='', 
scheme=True, host=current.CONFIG.dominio)

... throws this traceback error:

Traceback (most recent call last):
  File "/var/www/medios/gluon/scheduler.py", line 501, in executor
    result = dumps(_function(*args, **vars))
  File "applications/pescaregional/compiled/models.db.py", line 519, in newsletter
  File "applications/pescaregional/modules/globales.py", line 938, in enviar_newsletter
    'url_noticia': noticia.url() + utm_vars,
  File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2407, in __call__
    return self.method(self.row, *args, **kwargs)
  File "applications/pescaregional/compiled/models.db.py", line 295, in
  File "applications/pescaregional/modules/virtual_methods.py", line 248, in contenido_url
    return URL(c='default', f=f, args=[contenido.id, contenido.slug], extension='', scheme=True, host=current.CONFIG.dominio) if f else None
  File "/var/www/medios/gluon/html.py", line 391, in URL
    args, other, scheme, host, port, language=language)
  File "/var/www/medios/gluon/rewrite.py", line 197, in url_out
    function, args, other, scheme, host, port, language)
  File "/var/www/medios/gluon/rewrite.py", line 1366, in map_url_out
    return map.acf()
  File "/var/www/medios/gluon/rewrite.py", line 1292, in acf
    self.omit_acf()  # try to omit a/c/f
  File "/var/www/medios/gluon/rewrite.py", line 1241, in omit_acf
    if self.args[0] in self.functions or self.args[0] in self.controllers or self.args[0] in applications:
TypeError: 'in <string>' requires string as left operand, not long



The problem is solved changing the sentence with this:

URL(c='default', f=f, args=['%s' % contenido.id, '%s' % contenido.slug], 
extension='', scheme=True, host=current.CONFIG.dominio)

... notice that the args are converted to strings.

But the weird part is that *the error only happens in some of the installed 
apps; and, remember, it's the exact same app installed several times*.

First I thought it had something to do with the values stored in 
"contenido", which is a row from a table. But it happens with any row.

I'm using PostgreSQL 9.3. I've checked the databases' encoding and they 
all have the same one. I also checked the "contenido" table definition in a 
couple of dbs (one corresponding to an app that presents the error, and 
another that doesn't), and they are exactly the same.

What could be the difference that makes gluon/rewrite.py throw the error in 
some of the apps?



[web2py] Re: Why is scheduler creating so many files at /var/spool/postfix/maildrop?

2018-08-13 Thread Lisandro
To be honest, that's the way I've done it for a long time; I never thought 
about changing it. But now that you've pointed it out, I've taken another 
look at the book.
The book explains how to run the scheduler via upstart [1], but I'm using 
CentOS 7, which uses *systemd* instead of *upstart*.

In case it helps someone else, here is what I did to run the scheduler via 
systemd:


1) I've created the file */home/myuser/web2py/scheduler.sh* with this 
content:

#!/bin/bash
/bin/python /home/myuser/web2py/web2py.py -K myapp


2) Then I've created the file */etc/systemd/system/scheduler.service* with 
this content:

[Unit]
Description=web2py Scheduler

[Service]
ExecStart=/home/myuser/web2py/scheduler.sh
User=nginx

[Install]
WantedBy=default.target


3) Then run these commands to start the service:
sudo systemctl daemon-reload
sudo systemctl start scheduler

If you need to check the status of the service, run:
sudo systemctl status scheduler

If you need to enable the service to run on startup, run:
sudo systemctl enable scheduler
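If the scheduler should come back automatically after a crash, the unit file can be extended with systemd's restart options. An untested variant of the unit above (the After=, Restart= and RestartSec= lines are additions):

```ini
[Unit]
Description=web2py Scheduler
After=network.target

[Service]
ExecStart=/home/myuser/web2py/scheduler.sh
User=nginx
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```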


Thank you very much Dave for your time!
Best regards,
Lisandro.



[1] 
http://web2py.com/books/default/chapter/29/13/deployment-recipes#Start-the-scheduler-as-a-Linux-service-upstart-


On Monday, August 13, 2018 at 16:18:57 (UTC-3), Dave S wrote:
>
>
>
> On Monday, August 13, 2018 at 5:43:13 AM UTC-7, Lisandro wrote:
>>
>> Thanks for that clarification.
>> The files were being created every minute, matching the frequency at 
>> which I run the scheduler.
>> I was able to solve it adding ">/dev/null 2>&1" to the end of the line in 
>> crontab, so it ended up like this:
>>
>> * * * * * python /var/www/medios/web2py.py -K webmedios >/dev/null 2>&1
>>
>> That did the trick, and those files are not created anymore.
>> Thank you for your help!
>>
>> Best regards,
>> Lisandro.
>>
>
> I don't understand why you are starting the scheduler every minute, 
> instead of starting it once, and then scheduling tasks to repeat on a 1 
> minute schedule.
>
> /dps
>  
>



[web2py] Re: Why is scheduler creating so many files at /var/spool/postfix/maildrop?

2018-08-13 Thread Lisandro
Thanks for that clarification.
The files were being created every minute, matching the frequency at 
which I run the scheduler.
I was able to solve it adding ">/dev/null 2>&1" to the end of the line in 
crontab, so it ended up like this:

* * * * * python /var/www/medios/web2py.py -K webmedios >/dev/null 2>&1

That did the trick, and those files are not created anymore.
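For context, those maildrop files are cron's doing: anything a cron job prints to stdout or stderr is mailed to the crontab owner, and postfix queues that mail. Besides redirecting output, clearing MAILTO at the top of the crontab (standard cron behavior) disables those mails for every job in the file:

```
# Disable cron's mail delivery for all jobs in this crontab
MAILTO=""
* * * * * python /var/www/medios/web2py.py -K webmedios
```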
Thank you for your help!

Best regards,
Lisandro.


On Friday, August 10, 2018 at 19:52:30 (UTC-3), Dave S wrote:
>
>
>
> On Thursday, August 9, 2018 at 3:56:33 PM UTC-7, Lisandro wrote:
>>
>> I've found that my production server has a lot of files in 
>> /var/spool/postfix/maildrop. 
>> I don't use postfix at all, however that folder is full of files. 
>> All the files follow the same name format, like 
>> this: 2B8431690D, 5712AE68F, 73CF062660, 73C02183A9, 5706512838, 2B7E413705.
>>
>> The file content appears to be encoded, but using nano I can see they all 
>> have something very similar, like this:
>>
>> T^Q1533305867 356761A^Urewrite_context=localF
>> CronDaemonS^EnginxM^@N^]From: "(Cron Daemon)" N  To: 
>> nginxN]Subject: Cron  python /var/www/medios/web2py.py -K 
>> webmedios # web2py schedulerN'Content-Type: text/plain; 
>> charset=UTF-8N^^Auto-Submitted: auto-generatedN^PPrecedence: 
>> bulkN#X-Cron-Env: N+X-Cron-Env: 
>> N^]X-Cron-Env: 
>> N^[X-Cron-Env: 
>> N!X-Cron-Env: N X-Cron-Env: 
>> N^[X-Cron-Env: N^XX-Cron-Env: 
>> N^@N^Tweb2py Web FrameworkN1Created by Massimo Di Pierro, 
>> Copyright 2007-2018N3Version 
>> 2.16.1-stable+timestamp.2017.11.14.05.54.25NGDatabase drivers available: 
>> sqlite3, psycopg2, pg8000, pymysql, imaplibN,starting single-scheduler for 
>> "webmedios"...X^@R^EnginxE^@
>>
>>
>> Notice this line:
>> python /var/www/medios/web2py.py -K webmedios
>>
>>
>> I'm using the system's cron (with the "nginx" user) to run the scheduler, 
>> so I guess these files are being created each time the scheduler is run. 
>> Is there a way to avoid those files being created? Or is it something I 
>> would have to solve at OS level?
>>
>> Thanks in advance!
>>
>
> It looks like it has nothing to do with the scheduler and everything to do 
> with the cron daemon.
> Notice this line:
>
>> From: "(Cron Daemon)" 
>
>
>
> How often are they being created?  The scheduler generally only needs to 
> be started once per boot.
>
> /dps
>  
>



[web2py] Why is scheduler creating so many files at /var/spool/postfix/maildrop?

2018-08-09 Thread Lisandro
I've found that my production server has a lot of files in 
/var/spool/postfix/maildrop. 
I don't use postfix at all, however that folder is full of files. 
All the files follow the same name format, like 
this: 2B8431690D, 5712AE68F, 73CF062660, 73C02183A9, 5706512838, 2B7E413705.

The file content appears to be encoded, but using nano I can see they all 
have something very similar, like this:

T^Q1533305867 356761A^Urewrite_context=localF
CronDaemonS^EnginxM^@N^]From: "(Cron Daemon)" N  To: nginxN]Subject: 
Cron  python /var/www/medios/web2py.py -K webmedios # web2py 
schedulerN'Content-Type: text/plain; charset=UTF-8N^^Auto-Submitted: 
auto-generatedN^PPrecedence: bulkN#X-Cron-Env: 
N+X-Cron-Env: 
N^]X-Cron-Env: 
N^[X-Cron-Env: 
N!X-Cron-Env: N X-Cron-Env: 
N^[X-Cron-Env: N^XX-Cron-Env: 
N^@N^Tweb2py Web FrameworkN1Created by Massimo Di Pierro, 
Copyright 2007-2018N3Version 
2.16.1-stable+timestamp.2017.11.14.05.54.25NGDatabase drivers available: 
sqlite3, psycopg2, pg8000, pymysql, imaplibN,starting single-scheduler for 
"webmedios"...X^@R^EnginxE^@


Notice this line:
python /var/www/medios/web2py.py -K webmedios


I'm using the system's cron (with the "nginx" user) to run the scheduler, 
so I guess these files are being created each time the scheduler is run. 
Is there a way to avoid those files being created? Or is it something I 
would have to solve at OS level?

Thanks in advance!



[web2py] Re: Bot sends bad POST and triggers ValueError: Invalid boundary in multipart form: ''

2018-08-06 Thread Lisandro
I'm afraid I don't have that info.
Also, I'm not able to log the request body with nginx because I'm in a very 
busy production environment and this error is very rare.

Anyway, it's been a couple of weeks since last time it happened.
I guess it was just a bot sending some weird post. 
If it happens again, I'll try to collect more info.

Thanks for caring!


On Sunday, August 5, 2018 at 22:29:01 (UTC-3), Massimo Di Pierro 
wrote:
>
> Can you show an example of the body of the post that causes the problem?
>
> On Tuesday, 17 July 2018 06:43:35 UTC-7, Lisandro wrote:
>>
>> Hi there! I'm just reporting this situation in case it's a bug, I'm not 
>> sure.
>>
>> I have a public webpage (no login required), and from time to time I see 
>> this error: 
>> ValueError: Invalid boundary in multipart form: ''
>>
>> The error is produced by a bot that sends a bad POST to an URL that 
>> doesn't even expect a POST (it's just a public URL that shows a list of 
>> news, and it is cached). 
>> But as my application accesses request.vars, when the bot sends that POST, 
>> I see this error traceback:
>>
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/restricted.py", line 219, in restricted
>> exec(ccode, environment)
>>   File "applications/informatesalta/compiled/controllers.default.index.py", 
>> line 4, in 
>>   File "applications/informatesalta/modules/globales.py", line 2108, in 
>> get_publicidades_response
>> layout = request.vars.layout or ''
>>   File "/var/www/medios/gluon/globals.py", line 314, in vars
>> self.parse_all_vars()
>>   File "/var/www/medios/gluon/globals.py", line 285, in parse_all_vars
>> for key, value in iteritems(self.post_vars):
>>   File "/var/www/medios/gluon/globals.py", line 306, in post_vars
>> self.parse_post_vars()
>>   File "/var/www/medios/gluon/globals.py", line 242, in parse_post_vars
>> dpost = cgi.FieldStorage(fp=body, environ=env, keep_blank_values=1)
>>   File "/usr/lib64/python2.7/cgi.py", line 507, in __init__
>> self.read_multi(environ, keep_blank_values, strict_parsing)
>>   File "/usr/lib64/python2.7/cgi.py", line 621, in read_multi
>> raise ValueError, 'Invalid boundary in multipart form: %r' % (ib,)
>> ValueError: Invalid boundary in multipart form: ''
>>
>>
>>
>> Notice the error is triggered in gluon/globals.py, specifically in 
>> "parse_all_vars" function.
>> What can I do in order to avoid the ticket error?
>>
>> Thanks in advance!
>>
>



[web2py] Re: Update row When run a controller

2018-07-26 Thread Lisandro
You've defined the field as "current_state" (notice the lowercase), but 
you're trying to update the field "Current_State" (notice the camel case).

You need to replace this:
db(db.Exp_Logs_Perm.id == '1').update(Current_State='1')

with this
db(db.Exp_Logs_Perm.id == '1').update(current_state='1')

... and it should work.


On Thursday, July 26, 2018 at 8:40:45 (UTC-3), s.bo...@gmail.com wrote:
>
> Hi there, I have created a website that controls a remote lab, and I need 
> that every time a user logs in to execute an experiment, no other user can 
> log in to the same experiment station. All I want to do is to update a flag 
> value in a specific table in the database. In the controller of the 
> experiment I added this: db(db.Exp_Logs_Perm.id == '1').update(Current_State='1'). 
> I've tried update_record but nothing changed, and I am getting this error:
>
>raise ValueError("No fields to update")
> ValueError: No fields to update
>
>
> I define my table inside the model db.py:
> db.define_table('Exp_Logs_Perm', Field('Experiment', requires=IS_NOT_EMPTY()), Field('Info'), Field('current_state', requires=IS_NOT_EMPTY()))
> Does the controller execute every time my html file for this controller is 
> called?
>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: auto save feature

2018-07-20 Thread Lisandro
I'm not an expert on web2py, but I don't think this is something you will 
find built into web2py, as there are many things that would need to be 
adjusted for each specific case.
I think you will have to write some custom javascript to detect when an 
input field changes, and then send an HTTP POST to store the new value. 
Notice this could fire a lot of HTTP POSTs to your application, so it may 
be better to send the POST when the input loses focus (that is, the onblur 
event <https://www.w3schools.com/jsref/event_onblur.asp>).

Another common technique is to set a timeout in javascript 
<https://www.w3schools.com/jsref/met_win_settimeout.asp> (let's say, 5 
seconds) to send an http POST with the values of all the fields. 
Then, each time an input value changes, reset the timeout. 
That is a simple autosave mechanism. 
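The reset-the-timeout idea is normally written in client-side JavaScript, but the underlying debounce pattern is language-neutral; here is a minimal Python sketch of it using threading.Timer (the Debouncer name is illustrative, not web2py API):

```python
import threading

class Debouncer:
    """Run fn only after `delay` seconds with no new trigger();
    each trigger() cancels the pending call and restarts the clock."""

    def __init__(self, delay, fn):
        self.delay = delay
        self.fn = fn
        self._timer = None

    def trigger(self, *args, **kwargs):
        if self._timer is not None:
            self._timer.cancel()  # a new change resets the timeout
        self._timer = threading.Timer(self.delay, self.fn, args, kwargs)
        self._timer.daemon = True
        self._timer.start()
```

On the browser side the same shape is a clearTimeout()/setTimeout() pair around the POST that saves the form values.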

Regards,
Lisandro

El domingo, 15 de julio de 2018, 10:05:22 (UTC-3), Diego Tostes escribió:
>
> Hi,
>
> i have a table with more than 60 fields. Is it possible to create a "auto 
> save" feature with web2py Built-in methods to allowing users to have data 
> saved while the process of filling the form?
>
> rgds
>
> Diego
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: row.update_record() leaves row as None (only sometimes)

2018-07-19 Thread Lisandro
Thanks Leonel.
I'm using PostgreSQL, so if that case isn't possible, then I think the 
problem could be in the second line, where the code retrieves the record 
using row.id:

row.update_record(**data)
row = db.content[row.id]  # the problem could be here
row.update_tsv()


Remember that yesterday I realised the error happens only in the apps 
running the code above, and the error is on the third line (where 
apparently row is None). 
If I'm not wrong, that would mean the record wasn't retrieved. But as we 
know, the record exists (the previous line executed successfully), so 
maybe row.id is None, and therefore row ends up being None, causing the 
error on the third line. I can't say how row.id could end up being None, 
but I think that is the problem here. I've just checked the code to see if 
the "id" field is updated or changed, but I didn't find anything regarding 
that. 

Anyway, in the next few days I'll remove the fix and leave only this code running:

row.update_record(**data)
row.update_tsv()

I think that is the proper and expected way.
If something goes wrong with that, I'll update this thread.
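The assumption being relied on here — that a row updated inside a transaction is still visible to a subsequent select in that same transaction — can be checked with plain sqlite3 (used as a stand-in here; the thread's actual database is PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE content (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO content (id, title) VALUES (1, 'old')")

# update, then re-select the same row before any commit
conn.execute("UPDATE content SET title = 'new' WHERE id = 1")
row = conn.execute("SELECT title FROM content WHERE id = 1").fetchone()
# within the same transaction the updated row is visible, not None
assert row == ('new',)
conn.commit()
```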

Once again I deeply thank you for your time and willingness to help!


P.S.: for the second time, I searched for a way to donate to web2py, but 
I've seen a post where you explain why you don't accept donations, so I'll 
try to contribute a bit more in the forum. What I have to offer is 
meaningless compared to what you bring to the forum, but I'll do my best.

El jueves, 19 de julio de 2018, 12:57:11 (UTC-3), Leonel Câmara escribió:
>
> It should not be possible if your database has proper transactions like 
> postgresql. If you're using something like mongodb then you're SOL.
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: row.update_record() leaves row as None (only sometimes)

2018-07-19 Thread Lisandro
That was my first thought: in some cases, another request deletes the 
record right in the instant between the execution of the first and second 
line.
But I thought it wasn't possible because the function runs inside a db 
transaction. Or could it still happen? 

Another thought is that the row.id is removed or set to None by 
row.update_record(**data), so the next line would set row to None, thus 
triggering the error.
But I'm not sure how could that happen. I checked and the "data" dictionary 
hasn't got the "id" key (I mean, the id field isn't updated).

Anyway, what I'm going to do to catch the error is this:

row.update_record(**data)
if not row:
    return response.json({'success': False})
row.update_tsv()

This way I'll avoid the error ticket for those few cases.
I guess I could also decompile a couple of apps and put a log line there, 
though I don't know exactly what to log.



El jueves, 19 de julio de 2018, 10:56:55 (UTC-3), Anthony escribió:
>
> On Thursday, July 19, 2018 at 4:26:09 AM UTC-4, Lisandro wrote:
>>
>> Well, I owe you an apology, because I got confused regarding which app 
>> was throwing the error and which web2py version was running. 
>>
>> Until recently, I was using a very old web2py version (2.10). This 
>> problem was happening since long time ago (but not very frequently as I 
>> stated). For that old web2py version, I had already applied the fix to my 
>> app:
>>
>> row.update_record(**data)
>> row = db.content[row.id]
>> row.update_tsv()
>>
>
> There must still be something else going on that we're not seeing. Even in 
> web2py 2.10, there would have been no way to get the error in question, as 
> the .update_record method could not turn a Row object into None.
>
> I suppose the above code could generate this error if the record in 
> question could be deleted in a separate HTTP request in between the 
> execution of the first and second lines above. Is that possible (i.e., is 
> there some other action that could be deleting existing records)?
>
> Anthony
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Bot sends bad POST and triggers ValueError: Invalid boundary in multipart form: ''

2018-07-17 Thread Lisandro
Hi there! I'm just reporting this situation in case it's a bug, I'm not 
sure.

I have a public webpage (no login required), and from time to time I see 
this error: 
ValueError: Invalid boundary in multipart form: ''

The error is produced by a bot that sends a bad POST to a URL that doesn't 
even expect a POST (it's just a public URL that shows a list of news, and 
it is cached). 
But as my application accesses request.vars, when the bot sends that POST, 
I see this error traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/restricted.py", line 219, in restricted
exec(ccode, environment)
  File "applications/informatesalta/compiled/controllers.default.index.py", 
line 4, in 
  File "applications/informatesalta/modules/globales.py", line 2108, in 
get_publicidades_response
layout = request.vars.layout or ''
  File "/var/www/medios/gluon/globals.py", line 314, in vars
self.parse_all_vars()
  File "/var/www/medios/gluon/globals.py", line 285, in parse_all_vars
for key, value in iteritems(self.post_vars):
  File "/var/www/medios/gluon/globals.py", line 306, in post_vars
self.parse_post_vars()
  File "/var/www/medios/gluon/globals.py", line 242, in parse_post_vars
dpost = cgi.FieldStorage(fp=body, environ=env, keep_blank_values=1)
  File "/usr/lib64/python2.7/cgi.py", line 507, in __init__
self.read_multi(environ, keep_blank_values, strict_parsing)
  File "/usr/lib64/python2.7/cgi.py", line 621, in read_multi
raise ValueError, 'Invalid boundary in multipart form: %r' % (ib,)
ValueError: Invalid boundary in multipart form: ''



Notice the error is triggered in gluon/globals.py, specifically in 
"parse_all_vars" function.
What can I do in order to avoid the ticket error?

Thanks in advance!
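One defensive option — a sketch, not something web2py ships — is to check the Content-Type header for a usable boundary before anything touches request.post_vars, and answer such requests with a 400 instead of letting the parser raise:

```python
import re

def has_valid_multipart_boundary(content_type):
    """Return False when a multipart Content-Type carries no usable
    boundary (the condition that makes cgi.FieldStorage raise)."""
    if not content_type or not content_type.lower().startswith('multipart/'):
        return True  # not multipart: nothing to validate here
    m = re.search(r'boundary="?([^";]*)"?', content_type, re.IGNORECASE)
    return bool(m and m.group(1).strip())
```

Where exactly to call this (for example, raising HTTP(400) early in a model file before the first request.vars access) depends on the app; that hook point is an assumption, not web2py's documented API.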

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: row.update_record() leaves row as None (only sometimes)

2018-07-17 Thread Lisandro
I don't think the code for update_tsv() is related, because the traceback 
shows the error is produced even before looking for that method. I mean, 
the error says the "row" object is None, and therefore it would throw an 
error calling any method. 
But anyway, in case it helps to figure out what is going on, this is the 
code for update_tsv():

def update_tsv(row):
    db = current.db
    title = detail = ''
    if row.title:
        title = row.title.replace("'", "")
    if row.detail:
        detail = row.detail.replace("'", "")
    db.executesql(
        """UPDATE content SET tsv = (SELECT
           setweight(to_tsvector(coalesce(%s, '')), 'A') ||
           setweight(to_tsvector(coalesce(%s, '')), 'B')) WHERE id = %s;""",
        placeholders=[title, detail, row.id])
    return True



El jueves, 12 de julio de 2018, 20:52:38 (UTC-3), Leonel Câmara escribió:
>
> Can I see the code for update_tsv the bug is clearly there?
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: row.update_record() leaves row as None (only sometimes)

2018-07-12 Thread Lisandro
Thank you Leonel for your time.

The code I showed is a bit simplified. 
I do some validation indeed (of request.vars and request.args). Even so, 
if there were a problem with request.vars (for example, if 
request.vars.age were a string that is not a digit), update_record would 
throw some error.

Also, if row were None, it would throw an error immediately when calling 
update_record(), but notice the error is at the line where the virtual 
method is called, right after running .update_record() successfully. That 
is what I don't understand: how can row be None if the previous line was 
executed successfully?

First I thought "maybe the content was deleted by another user, or 
something like that", but that doesn't make sense, since the function is 
executed inside a database transaction, and there is no commit made 
between the two statements.
It's also weird that it happens very few times (compared to the times it 
runs successfully).

Weird... :/
Anyway, I'll keep monitoring the issue and will try to reproduce it.


El jueves, 12 de julio de 2018, 14:05:35 (UTC-3), Leonel Câmara escribió:
>
> You are not validating request.vars.name nor request.vars.age. Is it 
> possible something weird is going on there? Also you're not checking if row 
> is None initially after you get it using request.args(0).  
>   
> Other than that, I don't see how this is even possible. Because a None Row 
> should cause a problem with update_record to begin with.
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] row.update_record() leaves row as None (only sometimes)

2018-07-12 Thread Lisandro
Hi there! I'm having this weird issue where, after doing a 
row.update_record(), the row object ends up being None.

The scenario is simple: it is a function that receives a POST with some 
data, creates a dictionary with the data, updates the row, and finally 
calls a virtual method of the row.
This is the function, simplified:

def edit():
row = db.content[request.args(0)]
data = {
  'name': request.vars.name,
  'age': request.vars.age
}
row.update_record(**data)
row.update_tsv()
return 'done!'


This function is executed several times per day (about 1,000 times on an 
average day). 
But the function throws an error about 3 or 4 times per day, and this is 
the error:

AttributeError: 'NoneType' object has no attribute 'update_tsv'



I can't reproduce it, because as I said, the function works ok and is 
executed several times per day. Still, I can't figure out what could be 
causing those isolated errors. 
If the whole function is executed inside a db transaction, what could be 
the reason why the row object ends up being None after running 
update_record?


I was able to avoid those isolated errors with this fix:

row.update_record(**data)
row = db.content[request.args(0)]
row.update_tsv()

I could even prevent those errors with this fix:

row.update_record(**data)
row = db.content[row.id]
row.update_tsv()

That's weird. But I also don't think that is the proper solution: it's 
inefficient, and I would have to add that extra select at every single 
line where my app calls .update_record(). Obviously that's ugly, so I've 
discarded the solution.

So, what could be causing the error?
Any help or comment will be much appreciated.
I'm using the latest stable version of web2py 
(2.16.1-stable+timestamp.2017.11.14.05.54.25)

Best regards,
Lisandro.

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Using @cache.action with Redis takes 10 seconds to return a simple HTTP 503 or 404

2018-06-06 Thread Lisandro
Thank you for your fast answer Anthony!
You are right, the problem is fixed in master.
I applied those changes and the problem was solved.

Thank you very much!

El miércoles, 6 de junio de 2018, 10:44:42 (UTC-3), Anthony escribió:
>
> I believe this has been fixed in master: 
> https://github.com/web2py/web2py/commit/ea5ea6a30759a2c825f23381540dc396cbc475b7
> .
>
> Anthony
>
> On Wednesday, June 6, 2018 at 9:16:57 AM UTC-4, Lisandro wrote:
>>
>> Quick update: *apparently the problem doesn't happen using 
>> RedisCache(with_lock=False, ...)*
>> Should this be considered a bug or is it the expected behaviour?
>>
>> I would like to keep with_lock=True but avoid that 10 second delay when a 
>> function raises HTTP 503 or 404.
>> The issue with that delay is that, during that time, the request is using 
>> a database connection, and in my environment I have the risk of exhausting 
>> db connections.
>>
>> I'll try to play a bit with the source code of redis_cache.py, but still 
>> any suggestion or comment will be much appreciated.
>> Thanks!
>>
>> El miércoles, 6 de junio de 2018, 9:45:55 (UTC-3), Lisandro escribió:
>>>
>>> Hi there! I recently upgraded my production environment to web2py 
>>> 2.16.1, and I'm facing an old issue that was apparently solved.
>>> In my applications I use @cache.action to cache public pages, and as I 
>>> use Redis for cache, I use it like this:
>>>
>>> @cache.action(time_expire=300, cache_model=cache.redis, session=False, 
>>> vars=False, public=True)
>>> def test():
>>> 
>>>
>>>
>>> Well, I've found that, *when the function raises an HTTP 503 or 404, it 
>>> takes 10 seconds to complete and return the result*.
>>> Now, this only happens when cache_model=cache.redis. 
>>> *If I change it to cache_model=cache.ram, it takes a few milliseconds to 
>>> complete as expected.*
>>>
>>> I'm using Redis 3.2.10 64bit on CentOS7.
>>> The problem had been reported long time ago, and it was also fixed:
>>> https://github.com/web2py/web2py/issues/1355
>>>
>>> I've checked my web2py source code, and it's indeed using the latest 
>>> stable version 2.16.1, and just in case, I looked at 
>>> gluon/contrib/redis_cache.py and the fix is there.
>>> However, it appears that the problem still can be reproduced with this 
>>> code:
>>>
>>> from gluon.contrib.redis_cache import RedisCache
>>> from gluon.contrib.redis_session import RedisSession
>>> from gluon.contrib.redis_utils import RConn
>>>
>>> _redis_conn = RConn('localhost', 6379)
>>> cache.redis = RedisCache(redis_conn=_redis_conn, with_lock=True)
>>>
>>> @cache.action(time_expire=300, cache_model=cache.redis, session=False, 
>>> vars=False, public=True)
>>> def test():
>>> raise HTTP(503)
>>>
>>>
>>> Notice that it always takes 10 seconds, maybe that should bring in some 
>>> clue about what's happening.
>>> Any suggestion?
>>>
>>> Thanks in advance!
>>> Regards, Lisandro
>>>
>>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Using @cache.action with Redis takes 10 seconds to return a simple HTTP 503 or 404

2018-06-06 Thread Lisandro
Quick update: *apparently the problem doesn't happen using 
RedisCache(with_lock=False, ...)*
Should this be considered a bug or is it the expected behaviour?

I would like to keep with_lock=True but avoid that 10 second delay when a 
function raises HTTP 503 or 404.
The issue with that delay is that, during that time, the request is using a 
database connection, and in my environment I have the risk of exhausting db 
connections.

I'll try to play a bit with the source code of redis_cache.py, but still 
any suggestion or comment will be much appreciated.
Thanks!

El miércoles, 6 de junio de 2018, 9:45:55 (UTC-3), Lisandro escribió:
>
> Hi there! I recently upgraded my production environment to web2py 2.16.1, 
> and I'm facing an old issue that was apparently solved.
> In my applications I use @cache.action to cache public pages, and as I use 
> Redis for cache, I use it like this:
>
> @cache.action(time_expire=300, cache_model=cache.redis, session=False, 
> vars=False, public=True)
> def test():
> 
>
>
> Well, I've found that, *when the function raises an HTTP 503 or 404, it 
> takes 10 seconds to complete and return the result*.
> Now, this only happens when cache_model=cache.redis. 
> *If I change it to cache_model=cache.ram, it takes a few milliseconds to 
> complete as expected.*
>
> I'm using Redis 3.2.10 64bit on CentOS7.
> The problem had been reported long time ago, and it was also fixed:
> https://github.com/web2py/web2py/issues/1355
>
> I've checked my web2py source code, and it's indeed using the latest 
> stable version 2.16.1, and just in case, I looked at 
> gluon/contrib/redis_cache.py and the fix is there.
> However, it appears that the problem still can be reproduced with this 
> code:
>
> from gluon.contrib.redis_cache import RedisCache
> from gluon.contrib.redis_session import RedisSession
> from gluon.contrib.redis_utils import RConn
>
> _redis_conn = RConn('localhost', 6379)
> cache.redis = RedisCache(redis_conn=_redis_conn, with_lock=True)
>
> @cache.action(time_expire=300, cache_model=cache.redis, session=False, 
> vars=False, public=True)
> def test():
> raise HTTP(503)
>
>
> Notice that it always takes 10 seconds, maybe that should bring in some 
> clue about what's happening.
> Any suggestion?
>
> Thanks in advance!
> Regards, Lisandro
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Using @cache.action with Redis takes 10 seconds to return a simple HTTP 503 or 404

2018-06-06 Thread Lisandro
Hi there! I recently upgraded my production environment to web2py 2.16.1, 
and I'm facing an old issue that was apparently solved.
In my applications I use @cache.action to cache public pages, and as I use 
Redis for cache, I use it like this:

@cache.action(time_expire=300, cache_model=cache.redis, session=False, vars=
False, public=True)
def test():



Well, I've found that, *when the function raises an HTTP 503 or 404, it 
takes 10 seconds to complete and return the result*.
Now, this only happens when cache_model=cache.redis. 
*If I change it to cache_model=cache.ram, it takes a few milliseconds to 
complete as expected.*

I'm using Redis 3.2.10 64bit on CentOS7.
The problem had been reported long time ago, and it was also fixed:
https://github.com/web2py/web2py/issues/1355

I've checked my web2py source code, and it's indeed using the latest stable 
version 2.16.1, and just in case, I looked at gluon/contrib/redis_cache.py 
and the fix is there.
However, it appears that the problem still can be reproduced with this code:

from gluon.contrib.redis_cache import RedisCache
from gluon.contrib.redis_session import RedisSession
from gluon.contrib.redis_utils import RConn

_redis_conn = RConn('localhost', 6379)
cache.redis = RedisCache(redis_conn=_redis_conn, with_lock=True)

@cache.action(time_expire=300, cache_model=cache.redis, session=False, vars=
False, public=True)
def test():
raise HTTP(503)


Notice that it always takes exactly 10 seconds; maybe that gives some clue 
about what's happening.
Any suggestion?

Thanks in advance!
Regards, Lisandro

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-06-04 Thread Lisandro
I just wanted to confirm that the problem was apparently solved after 
upgrading web2py to version 2.16.1 and also upgrading Redis to the latest 
stable 64bit version (version 1.8.1).
The app has been running for a couple of weeks now and the problem hasn't 
arisen.
I'm truly sorry for having bothered you with this, considering that the 
problem could be solved just by upgrading web2py and Redis.
We have also just moved our production deployment to CentOS 7 for better 
stability.

Once again, I deeply thank all of you for your time and your constant 
willingness to help!
Best regards,
Lisandro.


El domingo, 6 de mayo de 2018, 0:13:06 (UTC-3), Massimo Di Pierro escribió:
>
> That is a very old version of web2py. The problem may have been solved 
> years ago. ;-)
>
> On Thursday, 3 May 2018 16:37:53 UTC-5, Lisandro wrote:
>>
>> I came back to reply this particular message, because the problem 
>> happened again yesterday.
>>
>> Just in order to remember, the problem was with redis_cache.py failing to 
>> write some specific key name.
>> Not *__lock object was found in the cache during the incident.
>> Connecting manually to redis-cli would show that the key isn't present 
>> and can be written and read.
>> But from the web2py application (using the redis_cache.py adapter), the 
>> application hangs with that specific key.
>> The only solution is to change the key name. Weird. 
>>
>>
>> Yesterday, our Redis server crashed (it reported out of memory, but the 
>> server had plenty of RAM available, this was due to an issue with memory 
>> fragmentation).
>> Anyway, after that crash, I did a flushall and restarted Redis server. 
>> Everything started to work ok for almost the total 260 websites we run 
>> with web2py, but 4 of them kept hanging. 
>> All the requests to those websites were hanging. I put some log 
>> sentences, and I found out that it was the same problem, this time with 
>> another random key name. 
>>
>> The solution was, again, to de-compile the apps, manually edit the source 
>> code and change the key name. But of course, this is not a solution.
>> Again, this time there was no *__lock key in the cache, and I could 
>> successfully use redis-cli to write and read the key name. 
>>
>> I don't have a proper solution yet. 
>> redis-cli flushall doesn't help.
>>
>>
>> I'm using an old web2py version (Version 
>> 2.10.3-stable+timestamp.2015.04.02.21.42.07).
>> I already have plans to upgrade. 
>> But before, could I manually update only gluon/contrib/redis_*.py files?
>> In your opinion, does that test worthwhile? Does it make any sense?
>>
>>
>>
>> El viernes, 20 de abril de 2018, 8:28:28 (UTC-3), Lisandro escribió:
>>>
>>> Sorry to bother you again with this, but I think I've found the problem.
>>> *The problem is apparently with Redis integration. *It had nothing to 
>>> do with connections, database, sessions, none of that. Here is what I've 
>>> found.
>>>
>>> Remember, the line where my app hangs is this:
>>>
>>> *session.important_messages = cache.redis('important-messages-%s' % 
>>> auth.user.id <http://auth.user.id/>,*
>>> *  lambda: 
>>> get_important_messages(), *
>>> * time_expire=180)*
>>>
>>>
>>> As the problem only presented in production, on the website of my 
>>> customer, I asked him to allow me to play a little with the code. 
>>> So, first thing I did was to cache request.now instead of calling the 
>>> function "get_important_messages()", but the problem remained.
>>> Then I thought "maybe if I change the key..." and I changed the original 
>>> code to this:
>>>
>>> *session.important_messages = cache.redis('important-messages',*
>>> * lambda: 
>>> get_important_messages(),*
>>> * time_expire=180)*
>>>
>>>
>>> *Notice that only thing I changed was the key to store in Redis. And it 
>>> worked! *I thought that maybe "auth.user.id" was some large number, but 
>>> I checked and the user ID is 3. Tried to pass it like int(auth.user.id) 
>>> but I had no success. *App still hangs when I try to retrieve that 
>>> specific key*. Only that key.
>>>
>>> I've connected to redis-cli and it tells me that the key isn't there.
>>> So I set a "hello" value

[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-05-03 Thread Lisandro
I came back to reply this particular message, because the problem happened 
again yesterday.

Just in order to remember, the problem was with redis_cache.py failing to 
write some specific key name.
Not *__lock object was found in the cache during the incident.
Connecting manually to redis-cli would show that the key isn't present and 
can be written and read.
But from the web2py application (using the redis_cache.py adapter), the 
application hangs with that specific key.
The only solution is to change the key name. Weird. 


Yesterday, our Redis server crashed (it reported out of memory, but the 
server had plenty of RAM available, this was due to an issue with memory 
fragmentation).
Anyway, after that crash, I did a flushall and restarted the Redis server. 
Everything started to work ok for almost all of the 260 websites we run 
with web2py, but 4 of them kept hanging. 
All the requests to those websites were hanging. I put in some log 
sentences, and I found out that it was the same problem, this time with 
another random key name. 

The solution was, again, to de-compile the apps, manually edit the source 
code and change the key name. But of course, this is not a solution.
Again, this time there was no *__lock key in the cache, and I could 
successfully use redis-cli to write and read the key name. 

I don't have a proper solution yet. 
redis-cli flushall doesn't help.


I'm using an old web2py version (Version 
2.10.3-stable+timestamp.2015.04.02.21.42.07).
I already have plans to upgrade. 
But before, could I manually update only gluon/contrib/redis_*.py files?
In your opinion, does that test worthwhile? Does it make any sense?



El viernes, 20 de abril de 2018, 8:28:28 (UTC-3), Lisandro escribió:
>
> Sorry to bother you again with this, but I think I've found the problem.
> *The problem is apparently with Redis integration. *It had nothing to do 
> with connections, database, sessions, none of that. Here is what I've found.
>
> Remember, the line where my app hangs is this:
>
> *session.important_messages = cache.redis('important-messages-%s' % 
> auth.user.id <http://auth.user.id/>,*
> *  lambda: 
> get_important_messages(), *
> * time_expire=180)*
>
>
> As the problem only presented in production, on the website of my 
> customer, I asked him to allow me to play a little with the code. 
> So, first thing I did was to cache request.now instead of calling the 
> function "get_important_messages()", but the problem remained.
> Then I thought "maybe if I change the key..." and I changed the original 
> code to this:
>
> *session.important_messages = cache.redis('important-messages',*
> * lambda: 
> get_important_messages(),*
> * time_expire=180)*
>
>
> *Notice that only thing I changed was the key to store in Redis. And it 
> worked! *I thought that maybe "auth.user.id" was some large number, but I 
> checked and the user ID is 3. Tried to pass it like int(auth.user.id) but 
> I had no success. *App still hangs when I try to retrieve that specific 
> key*. Only that key.
>
> I've connected to redis-cli and it tells me that the key isn't there.
> So I set a "hello" value for the key, I get it, then I deleted it:
>
> $ redis-cli
> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
> (nil)
> 127.0.0.1:6379> SET w2p:myapp:important-messages-3 "hello"
> OK
> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
> "\x00\x05hello\x06\x00\xf5\x9f\xb7\xf6\x90a\x1c\x99"
> 127.0.0.1:6379> DEL w2p:myapp:important-messages-3
> (integer) 1
> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
> (nil)
>
>
> But even after that, web2py hangs with this simple code:
>
> *r = cache.redis('important-messages-3', **lambda: request.now, *
> *time_expire=30)*
>
> This happens only with that specific key. I can set the key to 
> "important-messages-2", "important-messages-999", "important-messages-A", 
> anything I can think, but with that specific key it hangs.
>
> We have several websites (around 200), and this problem has happened about 
> 5 or 6 times in different websites, but it was always the same problem. The 
> only solution I had (until now) was to create a new account for the user 
> (that explains why it worked with a new account, that is because the new 
> account had a different auth.user.id, so the key to store in redis was 
> different).
>
> Could this be a bug in the redis_cache.py integration?
> Maybe I should open a new threa

[web2py] Re: web2py, PostgreSQL and exclusive locks

2018-05-03 Thread Lisandro
Please ignore that last message of mine. 
The hanging problem was produced by an issue with the redis_cache.py 
adapter failing to write a cache key.


On Wednesday, May 2, 2018 at 17:08:38 (UTC-3), Lisandro wrote:
>
> Well, I've commented out the line of the postgresql adapter of web2py, the 
> line where it runs the SET standard_conforming_strings=on; but now it hangs 
> in the previous line "SET CLIENT_ENCODING TO 'UTF8'".
>
> This is the function of the web2py's adapter where the application hangs:
> https://github.com/web2py/pydal/blob/master/pydal/adapters/postgres.py#L107
>
> For what I see, that function is called after connection, as its name 
> dictates "after_connection()"
>
> What could possibly be hanging between the DAL() instantiation and the 
> function after_connection() run by web2py?
>
>
>
> On Wednesday, May 2, 2018 at 16:09:31 (UTC-3), Lisandro wrote:
>>
>> Hi there, sorry to bother in this old post.
>>
>> I'm having a problem regarding standard_conforming_strings.
>> Today my app experienced a problem with Redis going out of memory.
>> After the problem was fixed, all my websites started to work normally, 
>> except four of them (of a total of 260 websites).
>>
>> For the websites that weren't working, the problem was a 504 timeout.
>> When I checked for long running queries, I see this:
>>
>>  11622 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:59.493348  | 
>> 2018-05-02 16:04:36.006134-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11635 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:57.579705  | 
>> 2018-05-02 16:04:37.919777-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11651 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:55.500219  | 
>> 2018-05-02 16:04:39.999263-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11693 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:50.412742  | 
>> 2018-05-02 16:04:45.08674-03  | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11801 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:36.528754  | 
>> 2018-05-02 16:04:58.970728-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11853 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:31.644218  | 
>> 2018-05-02 16:05:03.855264-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11904 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:25.186631  | 
>> 2018-05-02 16:05:10.312851-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11945 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:21.143921  | 
>> 2018-05-02 16:05:14.355561-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>  11998 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:13.615864  | 
>> 2018-05-02 16:05:21.883618-03 | f   | idle in transaction | SET 
>> standard_conforming_strings=on;
>>
>>
>> That, for every database of these four websites.
>> Of course I tried to clean redis cache, but the problem remains.
>>
>> How should I continue investigating? Where?
>>
>>
>> On Sunday, November 24, 2013 at 22:45:08 (UTC-3), Massimo Di Pierro 
>> wrote:
>>>
>>> We can make it optional. Please open a ticket.
>>>
>>> On Sunday, 24 November 2013 02:28:23 UTC-6, Jayadevan M wrote:
>>>>
>>>> My doubt is - do we need to explicitly set it ON? Since the default 
>>>> setting is ON, any client connecting will have it turned ON anyway?
>>>>
>>>> On Sunday, November 24, 2013 1:48:23 PM UTC+5:30, Massimo Di Pierro 
>>>> wrote:
>>>>>
>>>>> It must be done for every connection. Do you have connection pooling 
>>>>> on? If a connection is recycled it should not do it again.
>>>>>
>>>>> On Saturday, 23 November 2013 22:49:47 UTC-6, Jayadevan M wrote:
>>>>>>
>>>>>> Thanks for the reply. OK, let us take this forward on the first one 
>>>>>> (default behaviour). Since the default behaviour is to SET 
>>>>>> standard_conforming_strings=on, is there a need to do it again, for each 
>>>>>> connection/call? It will incur an ever-so-small overhead which can be 
>>>>>> avoided?
>>>>>>
>>>>>> On Saturday, November 23, 2013 7:31:00 PM UTC+5:30, Massimo Di Pier

[web2py] Re: web2py, PostgreSQL and exclusive locks

2018-05-02 Thread Lisandro
Well, I've commented out the line of the postgresql adapter of web2py, the 
line where it runs the SET standard_conforming_strings=on; but now it hangs 
in the previous line "SET CLIENT_ENCODING TO 'UTF8'".

This is the function of the web2py's adapter where the application hangs:
https://github.com/web2py/pydal/blob/master/pydal/adapters/postgres.py#L107

For what I see, that function is called after connection, as its name 
dictates "after_connection()"

What could possibly be hanging between the DAL() instantiation and the 
function after_connection() run by web2py?
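One way to narrow this down is to replay the two statements that after_connection() issues, outside of web2py. A minimal sketch (the helper and the recording cursor below are my own illustration, not pyDAL API):

```python
def run_after_connection(cursor):
    """Replay the two statements pyDAL's postgres adapter issues
    in after_connection(), in the same order."""
    cursor.execute("SET CLIENT_ENCODING TO 'UTF8';")
    cursor.execute("SET standard_conforming_strings=on;")


class RecordingCursor:
    """Stand-in cursor that records executed SQL, for a dry run."""
    def __init__(self):
        self.statements = []

    def execute(self, sql):
        self.statements.append(sql)


if __name__ == '__main__':
    cur = RecordingCursor()
    run_after_connection(cur)
    print(cur.statements)
```

Against the real server, pass a psycopg2 cursor instead of the stand-in and follow with conn.commit(), so the connection isn't left "idle in transaction"; if the same statements hang there too, the problem is below web2py.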



On Wednesday, May 2, 2018 at 16:09:31 (UTC-3), Lisandro wrote:
>
> Hi there, sorry to bother in this old post.
>
> I'm having a problem regarding standard_conforming_strings.
> Today my app experienced a problem with Redis going out of memory.
> After the problem was fixed, all my websites started to work normally, 
> except four of them (of a total of 260 websites).
>
> For the websites that weren't working, the problem was a 504 timeout.
> When I checked for long running queries, I see this:
>
>  11622 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:59.493348  | 
> 2018-05-02 16:04:36.006134-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11635 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:57.579705  | 
> 2018-05-02 16:04:37.919777-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11651 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:55.500219  | 
> 2018-05-02 16:04:39.999263-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11693 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:50.412742  | 
> 2018-05-02 16:04:45.08674-03  | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11801 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:36.528754  | 
> 2018-05-02 16:04:58.970728-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11853 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:31.644218  | 
> 2018-05-02 16:05:03.855264-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11904 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:25.186631  | 
> 2018-05-02 16:05:10.312851-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11945 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:21.143921  | 
> 2018-05-02 16:05:14.355561-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>  11998 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:13.615864  | 
> 2018-05-02 16:05:21.883618-03 | f   | idle in transaction | SET 
> standard_conforming_strings=on;
>
>
> That, for every database of these four websites.
> Of course I tried to clean redis cache, but the problem remains.
>
> How should I continue investigating? Where?
>
>
> On Sunday, November 24, 2013 at 22:45:08 (UTC-3), Massimo Di Pierro 
> wrote:
>>
>> We can make it optional. Please open a ticket.
>>
>> On Sunday, 24 November 2013 02:28:23 UTC-6, Jayadevan M wrote:
>>>
>>> My doubt is - do we need to explicitly set it ON? Since the default 
>>> setting is ON, any client connecting will have it turned ON anyway?
>>>
>>> On Sunday, November 24, 2013 1:48:23 PM UTC+5:30, Massimo Di Pierro 
>>> wrote:
>>>>
>>>> It must be done for every connection. Do you have connection pooling 
>>>> on? If a connection is recycled it should not do it again.
>>>>
>>>> On Saturday, 23 November 2013 22:49:47 UTC-6, Jayadevan M wrote:
>>>>>
>>>>> Thanks for the reply. OK, let us take this forward on the first one 
>>>>> (default behaviour). Since the default behaviour is to SET 
>>>>> standard_conforming_strings=on, is there a need to do it again, for each 
>>>>> connection/call? It will incur an ever-so-small overhead which can be 
>>>>> avoided?
>>>>>
>>>>> On Saturday, November 23, 2013 7:31:00 PM UTC+5:30, Massimo Di Pierro 
>>>>> wrote:
>>>>>>
>>>>>> You raise two issues:
>>>>>>
>>>>>> 1) About
>>>>>> SET standard_conforming_strings=on
>>>>>> This is required and in fact as you say it is the default behavior 
>>>>>> since 9.1. This has nothing to do which locking.
>>>>>>
>>>>>> 2) You see exclusive locks. Which locks? Can you say more?
>>>>>>
>>>>>>
>>>>>>

[web2py] Re: web2py, PostgreSQL and exclusive locks

2018-05-02 Thread Lisandro
Hi there, sorry to bother in this old post.

I'm having a problem regarding standard_conforming_strings.
Today my app experienced a problem with Redis going out of memory.
After the problem was fixed, all my websites started to work normally, 
except four of them (of a total of 260 websites).

For the websites that weren't working, the problem was a 504 timeout.
When I checked for long running queries, I see this:

 11622 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:59.493348  | 
2018-05-02 16:04:36.006134-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11635 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:57.579705  | 
2018-05-02 16:04:37.919777-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11651 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:55.500219  | 
2018-05-02 16:04:39.999263-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11693 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:50.412742  | 
2018-05-02 16:04:45.08674-03  | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11801 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:36.528754  | 
2018-05-02 16:04:58.970728-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11853 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:31.644218  | 
2018-05-02 16:05:03.855264-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11904 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:25.186631  | 
2018-05-02 16:05:10.312851-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11945 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:21.143921  | 
2018-05-02 16:05:14.355561-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;
 11998 | cipollettiinforma  | medios   | 127.0.0.1   | 00:00:13.615864  | 
2018-05-02 16:05:21.883618-03 | f   | idle in transaction | SET 
standard_conforming_strings=on;


That, for every database of these four websites.
Of course I tried to clean redis cache, but the problem remains.

How should I continue investigating? Where?
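One place to keep digging is pg_stat_activity itself, filtering for sessions that have sat "idle in transaction" for too long. A rough sketch (assumes PostgreSQL 9.2+, where the state and state_change columns exist; the helper name is mine):

```python
# SQL to list sessions stuck 'idle in transaction' beyond a threshold.
STUCK_SESSIONS_SQL = """
SELECT pid, datname, usename, query,
       now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '30 seconds'
ORDER BY idle_for DESC;
"""


def find_stuck_sessions(cursor):
    """Run the query on an open cursor and return the matching rows."""
    cursor.execute(STUCK_SESSIONS_SQL)
    return cursor.fetchall()
```

Run it with any psycopg2 cursor connected as a superuser; the idle_for column tells you how long each transaction has been open without committing, which is what the listing above is showing.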


On Sunday, November 24, 2013 at 22:45:08 (UTC-3), Massimo Di Pierro 
wrote:
>
> We can make it optional. Please open a ticket.
>
> On Sunday, 24 November 2013 02:28:23 UTC-6, Jayadevan M wrote:
>>
>> My doubt is - do we need to explicitly set it ON? Since the default 
>> setting is ON, any client connecting will have it turned ON anyway?
>>
>> On Sunday, November 24, 2013 1:48:23 PM UTC+5:30, Massimo Di Pierro wrote:
>>>
>>> It must be done for every connection. Do you have connection pooling on? 
>>> If a connection is recycled it should not do it again.
>>>
>>> On Saturday, 23 November 2013 22:49:47 UTC-6, Jayadevan M wrote:

 Thanks for the reply. OK, let us take this forward on the first one 
 (default behaviour). Since the default behaviour is to SET 
 standard_conforming_strings=on, is there a need to do it again, for each 
 connection/call? It will incur an ever-so-small overhead which can be 
 avoided?

 On Saturday, November 23, 2013 7:31:00 PM UTC+5:30, Massimo Di Pierro 
 wrote:
>
> You raise two issues:
>
> 1) About
> SET standard_conforming_strings=on
> This is required and in fact as you say it is the default behavior 
> since 9.1. This has nothing to do which locking.
>
> 2) You see exclusive locks. Which locks? Can you say more?
>
>
>
> On Saturday, 23 November 2013 05:53:02 UTC-6, Jayadevan M wrote:
>>
>> I am testing our web2py application with a few concurrent users. 
>> While monitoring the database (PostgreSQL), I can see a number of 
>> exclusive 
>> locks. The SQL is 
>> SET standard_conforming_strings=on
>> Is this expected behaviour?
>>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Request with login privileges hangs for a specific user account, how to debug it?

2018-04-24 Thread Lisandro
Hi Massimo, thank you for your time.
I had marked this thread as "no action needed" because I found the cause of 
the issue.

First I thought it was a different problem, so I opened a new thread in 
this forum:
https://groups.google.com/forum/#!topic/web2py/E9jrmf5E-B4
The title of that thread is not correct. I ended up finding that the 
problem is generated in some specific situations, and a specific key can't 
be stored in the Redis cache. In that thread I've posted some details about the 
tests I made to figure out that was the root cause. 

Sorry if I created some confusion opening a new thread, won't happen again 
:)


On Tuesday, April 24, 2018 at 18:43:09 (UTC-3), Massimo Di Pierro 
wrote:
>
> That query itself cannot cause hanging, but maybe when that query is 
> executed the database is busy with some other background task?
> Try setting migrations to false. Maybe you are doing more database IO 
> than you should
>
> On Friday, 6 April 2018 09:41:09 UTC-5, Lisandro wrote:
>>
>> Hi Anthony, again, thank you very much for your time, I really appreciate 
>> it.
>>
>> On Thursday, April 5, 2018 at 17:52:36 (UTC-3), Anthony wrote:
>>>
>>> On Thursday, April 5, 2018 at 2:57:20 PM UTC-4, Lisandro wrote:
>>>>
>>>> Thank you Anthony, yes I'm aware of that.
>>>> I use it like that for this reason: sometimes (not very often) an 
>>>> external app modifies a field of the auth_user table (specifically, it 
>>>> sets 
>>>> true or false a field that I use as a flag). However that change isn't 
>>>> updated to auth.user. In order to do so, the user needs to logout and 
>>>> login 
>>>> again. So I retrieve the auth_user record again and store it to 
>>>> response.answer.
>>>>
>>>> Maybe it could be done like this:
>>>> if auth.is_logged_in():
>>>> auth.user = db.auth_user[auth.user.id]
>>>>
>>>> But I thought it could be break something with Auth methods, so I store 
>>>> it in response.user.
>>>>
>>>
>>> Got it. Yeah, don't replace auth.user -- create a separate variable.
>>>  
>>>
>>>> Anyway, I set this topic as "no action needed" because I opened a new 
>>>> topic, I've found some more info and I think the issue isn't related to 
>>>> that sentence.
>>>>
>>>
>>> But you indicated the select generated by that code was causing Postgres 
>>> to hang. Are you sure that is the case? In other words, is the web2py code 
>>> getting stuck at that line and ultimately causing your server to time out? 
>>> Have you tried adding some logging statements to your code to determine 
>>> exactly where it is getting stuck?
>>>
>>
>> To tell the truth, I'm not exactly sure that is the line where the code hangs; 
>> I supposed that because of the select query taking too long, but I can't be 
>> sure.
>> The problem is that the incident presents sporadically, and the worst 
>> part is that I can't reproduce it. Also, as it happens in the production 
>> server, I can't afford to modify the app code in production, given that I 
>> would be making changes to an application that is used by our customers, so 
>> I'm in a tricky situation. 
>>
>> I've made plans to move sessions to Redis, but as a developer, I 
>> would still like to understand the root cause of the issue :)
>>
>> Anyway, I'll wait to the incident happens again, hoping that it happens 
>> in an app of a "small" customer so I can do some tests.
>>
>>
>>  
>>
>>>
>>> Anthony
>>>
>>



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-21 Thread Lisandro
On Saturday, April 21, 2018 at 18:00:36 (UTC-3), Anthony wrote:
>
> A quick comment about a couple of tests I did regarding RedisSession (that 
>> also has a "with_lock" argument).
>> To the test, I updated web2py locally to version 
>> 2.16.1-stable+timestamp.2017.11.14.05.54.25
>> And then I run my apps using:
>>
>>- RedisSession(..., with_lock=False)
>>This is the way that I was already using it, and apps run normally 
>>(pretty fast because they do simple tasks)
>>
>>- RedisSession(..., with_lock=True)
>>Using this, the apps start responding with huge delay; the same 
>>requests that usually took less than a second to complete, start taking 
>>between 6 and 10 seconds.
>>
>> That seems odd. This should have minimal effect on the time to complete a 
> single request if there are no other simultaneous requests involving the 
> same session. How are you testing?
>

It is weird indeed. 
I'm testing in my local environment, with web2py updated to the version I 
mentioned, I use uwsgi and nginx, but I tried also with web2py's embedded 
server. 
I'm testing on an admin app that I developed, it doesn't make ajax calls, 
it is just a bunch of urls where I put a grid with pagination and some 
forms to edit rows, stuff like that.

I've noticed that when I use RedisSession(..., with_lock=True) the app 
takes much longer to respond. 
I've measured the change using the Chrome Inspector, within the Network 
tab, checking for the TTFB (time to first byte).
Normally, running locally, the TTFB of any request URL is around 50ms. 
But when I set RedisSession(..., with_lock=True), flushall redis and 
restart uwsgi, the TTFB rises to between 6 and 10 seconds.
This happens even with requests where I run session.forget(response)

Something particularly weird is that once every 5 or 6 times, the request 
responds fast (with a low TTFB). 
I mean, the same request URL, I stay with the Chrome inspector opened in 
that tab, and reload the page several times.
The TTFB is always between 6s and 10s, but every 5 or 6 times, it is low 
(about 50ms as it should be). 
I'm running Redis 3.2.10
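The TTFB numbers above came from the Chrome inspector; the same comparison can be scripted so the with_lock switch is easy to re-test. A small timing helper (my own sketch; pair it with e.g. requests.get against the app's URL):

```python
import time


def time_calls(fn, n=10):
    """Call fn n times and return the per-call durations in seconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples


# Against a local app (hypothetical URL, assumes the requests package):
#   import requests
#   samples = time_calls(lambda: requests.get('http://127.0.0.1:8000/'))
#   print(min(samples), max(samples))
```

Comparing min and max over a batch of calls also makes the "every 5 or 6 requests one is fast" pattern visible in the numbers.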

Here is my db.py, but I don't think it has anything out of the ordinary:

db = DAL(
'postgres://%s:%s@%s/%s' % (CONFIG.db_user, CONFIG.db_user_password, 
CONFIG.db_host, CONFIG.db_name),
migrate=False,
lazy_tables=True,
folder=os.path.join(CONFIG.path_datos, 'databases'))

_conn = RConn('localhost', 6379)

sessiondb = RedisSession(
redis_conn=_conn, 
session_expiry=172800,  # two days
with_lock=True)

session.connect(request, response, db=sessiondb)

auth = Auth(globals(), 
db, 

hmac_key=Auth.get_or_create_key(filename=os.path.join(CONFIG.path_datos, 
'auth.key')))

auth.define_tables()

# ... define all the tables
# ... configure auth settings



It does look like the Redis session code puts a 2 second lock on the 
> session when it is first read at the beginning of a request, so if a page 
> makes multiple simultaneous Ajax requests when it loads, each request will 
> take up to 2 seconds before the next can be processed (if I'm reading the 
> code right, an update to the session will result in the lock being 
> released, but if no update is made, it looks like you have to wait for the 
> full 2 seconds to expire).
>
> Note, locking with Redis cache works differently -- the lock is held only 
> during the time it takes to read from or write to the cache, not for the 
> duration of the entire request or beyond.
>


I see what you mean, I took a look at the code.

I've read this from the book: "*The redis backend for sessions is the only 
one that can prevent concurrent modifications to the same session: this is 
especially true for ajax-intensive applications that write to sessions 
often in a semi-concurrent way. To favour speed this is by default not 
enforced, however if you want to turn on the locking behaviour, just turn 
it on with with_lock=True parameter passed to the RedisSession object*"

Considering that, and giving that until now I stored the sessions in the 
db, I guess that it won't hurt to use RedisSession(..., with_lock=False), 
as it would remain the same as before (regarding the lock), right? 


Anyway, let me know if there is some additional test that I can do to 
figure out why the app takes longer using RedisSession(..., with_lock=True).

Regards,
Lisandro



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-21 Thread Lisandro
Ok, I've left it set to True.

A quick comment about a couple of tests I did regarding RedisSession (that 
also has a "with_lock" argument).
To the test, I updated web2py locally to version 
2.16.1-stable+timestamp.2017.11.14.05.54.25
And then I run my apps using:

   - RedisSession(..., with_lock=False)
   This is the way that I was already using it, and apps run normally 
   (pretty fast because they do simple tasks)
   
   - RedisSession(..., with_lock=True)
   Using this, the apps start responding with huge delay; the same 
   requests that usually took less than a second to complete, start taking 
   between 6 and 10 seconds.

I was testing locally, with only one user, one session. 
After switching with_lock argument I always did a "flushall" at redis and 
restarted uwsgi, just in case.
I switched several times to confirm, and the difference is significant. 

Having seen this, I'll keep using Redis this way:
RedisCache(..., with_lock=True)
RedisSession(..., with_lock=False)


Thank you very much Anthony for your help.
Regards,
Lisandro.

PS: I would like to change this thread subject to something more 
appropriate, but I'm not allowed to do it. Sorry for having created it with 
that ugly title, I'll do better next time.

On Friday, April 20, 2018 at 15:32:15 (UTC-3), Anthony wrote:
>
> On Friday, April 20, 2018 at 11:58:47 AM UTC-4, Lisandro wrote:
>>
>> I see what you mean. 
>> But still, if my interpretation is correct, in those cases we should see 
>> the *__lock key stored.
>> What is weird about my specific issue is that there was no *__lock key.
>>
>> Anyway, regardless of upgrading web2py, now I'm wondering if I should set 
>> with_lock True or False. Do you have any suggestion? The book says:
>> "*Redis cache subsystem allows you to prevent the infamous "thundering 
>> herd problem": this is not active by default because usually you choose 
>> redis for speed, but at a negligible cost you can make sure that only one 
>> thread/process can set a value concurrently.*" 
>>
>> I haven't found comments regarding when is best to use with_lock=True and 
>> when to use with_lock=False.
>>
>
> Probably safest to set it to True unless that slows things down 
> noticeably. Or go with False if you can tolerate the occasional race 
> condition.
>
> Anthony
>



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-20 Thread Lisandro
I see what you mean. 
But still, if my interpretation is correct, in those cases we should see 
the *__lock key stored.
What is weird about my specific issue is that there was no *__lock key.

Anyway, regardless of upgrading web2py, now I'm wondering if I should set 
with_lock True or False. Do you have any suggestion? The book says:
"*Redis cache subsystem allows you to prevent the infamous "thundering herd 
problem": this is not active by default because usually you choose redis 
for speed, but at a negligible cost you can make sure that only one 
thread/process can set a value concurrently.*" 

I haven't found comments regarding when is best to use with_lock=True and 
when to use with_lock=False. I'm guessing with_lock=True is best when the 
process that generates the data that is going to be cached takes a long time 
or uses a lot of resources. That's not my case, so I'm tempted to change it to 
False, but I'm not sure about the decision. If you have any experience or 
suggestion about that, I'd appreciate your comments.
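For context, what with_lock=True guards against can be shown with an in-process toy: when several threads miss the cache at once, without a lock they all recompute the value; with a lock, only the first does. This is only an analogue of the Redis behaviour, not web2py code:

```python
import threading


def get_or_compute(cache, key, compute, lock=None):
    """Naive cache-aside lookup; pass a lock to serialize the computation."""
    if lock is not None:
        with lock:
            if key not in cache:
                cache[key] = compute()
        return cache[key]
    if key not in cache:
        cache[key] = compute()  # racy: several threads may reach this line
    return cache[key]


if __name__ == '__main__':
    calls = []
    cache, lock = {}, threading.Lock()
    threads = [
        threading.Thread(
            target=get_or_compute,
            args=(cache, 'k', lambda: calls.append(1) or 42, lock))
        for _ in range(8)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(calls), cache['k'])  # with the lock: computed once -> 1 42
```

So the trade-off is exactly as the book describes: the lock costs a little speed per request, but guarantees that an expensive compute() runs once instead of once per concurrent miss.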

Thanks again.
Regards,
Lisandro.


On Friday, April 20, 2018 at 12:19:11 (UTC-3), Anthony wrote:
>
> On Friday, April 20, 2018 at 10:47:10 AM UTC-4, Lisandro wrote:
>>
>> Thank you very much for your time Anthony.
>>
>> Yes, I use Redis with_lock=True.
>> I checked but there is no *__lock key stored in Redis. I double checked 
>> that.
>>
>> But, giving you mentioned with_lock, I tried to set with_lock=False, and 
>> it worked. 
>> Then I set with_lock=True again, and it worked too.
>> *Apparently, the problem went away after executing the request one time 
>> with_lock=False, and then I could set it back to True and it kept working 
>> ok*.
>>
>> I'm using an old version of web2py (2.10).
>>
>
> Looking at the code 
> <https://github.com/web2py/web2py/blob/R-2.10.1/gluon/contrib/redis_cache.py#L140>
>  
> under 2.10, it is not clear what the problem could be, as the locking code 
> is in a try block and there is a "finally" clause that deletes the lock key 
> if there is an exception.
>
> The current code in master looks like it could result in a lock being 
> stuck if an exception occurs while storing or retrieving a cache item.
>
> Anthony
>



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-20 Thread Lisandro
Thank you very much for your time Anthony.

Yes, I use Redis with_lock=True.
I checked but there is no *__lock key stored in Redis. I double checked 
that.

But, giving you mentioned with_lock, I tried to set with_lock=False, and it 
worked. 
Then I set with_lock=True again, and it worked too.
*Apparently, the problem went away after executing the request one time 
with_lock=False, and then I could set it back to True and it kept working 
ok*.

I'm using an old version of web2py (2.10). 
I already have plans to upgrade next month.
I've checked the difference between the redis_cache.py file of my version 
and the current stable one: https://www.diffchecker.com/owJ67Slp
But I'm not able to see if the new version could help on this. It is indeed 
a weird problem.


In the book I've read that with_lock=True is good for preventing two 
different threads from writing the same key value. In my case, I suppose it won't 
hurt if that happens (my app uses cache in order to increase performance).
Should I consider setting it to False?
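To see whether a stale lock is ever the culprit, the *__lock keys can be listed together with their TTLs. A sketch using the redis-py client interface (scan_iter and ttl are real redis-py methods; the helper name and key pattern are my own, based on the "w2p:myapp:important-messages-3:__lock" naming mentioned earlier):

```python
def find_lock_keys(client, pattern='w2p:*__lock'):
    """Return (key, ttl) pairs for lingering web2py cache lock keys."""
    return [(key, client.ttl(key)) for key in client.scan_iter(pattern)]


# Against a live server (assumes redis-py installed):
#   import redis
#   r = redis.StrictRedis('localhost', 6379)
#   print(find_lock_keys(r))
```

An empty result means no lock key is lingering; a key with a TTL of -1 (no expiry) would be a lock that was acquired but never released.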



On Friday, April 20, 2018 at 10:54:20 (UTC-3), Anthony wrote:
>
> When you set up the Redis cache, do you set with_lock=True? If so, I 
> wonder if an error here 
> <https://github.com/web2py/web2py/blob/94a9bfd05f287fcff776f2d79b222b0b92b86a32/gluon/contrib/redis_cache.py#L158>
>  
> could be causing the key to be locked and never released. I guess you can 
> check for a key named "w2p:myapp:important-messages-3:__lock".
>
> Anthony
>
> On Friday, April 20, 2018 at 7:28:28 AM UTC-4, Lisandro wrote:
>>
>> Sorry to bother you again with this, but I think I've found the problem.
>> *The problem is apparently with Redis integration. *It had nothing to do 
>> with connections, database, sessions, none of that. Here is what I've found.
>>
>> Remember, the line where my app hangs is this:
>>
>> *session.important_messages = cache.redis('important-messages-%s' % 
>> auth.user.id <http://auth.user.id/>,*
>> *  lambda: 
>> get_important_messages(), *
>> * time_expire=180)*
>>
>>
>> As the problem only presented in production, on the website of my 
>> customer, I asked him to allow me to play a little with the code. 
>> So, first thing I did was to cache request.now instead of calling the 
>> function "get_important_messages()", but the problem remained.
>> Then I thought "maybe if I change the key..." and I changed the original 
>> code to this:
>>
>> *session.important_messages = cache.redis('important-messages',*
>> * lambda: 
>> get_important_messages(),*
>> * time_expire=180)*
>>
>>
>> *Notice that only thing I changed was the key to store in Redis. And it 
>> worked! *I thought that maybe "auth.user.id" was some large number, but 
>> I checked and the user ID is 3. Tried to pass it like int(auth.user.id) 
>> but I had no success. *App still hangs when I try to retrieve that 
>> specific key*. Only that key.
>>
>> I've connected to redis-cli and it tells me that the key isn't there.
>> So I set a "hello" value for the key, read it back, then deleted it:
>>
>> $ redis-cli
>> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
>> (nil)
>> 127.0.0.1:6379> SET w2p:myapp:important-messages-3 "hello"
>> OK
>> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
>> "\x00\x05hello\x06\x00\xf5\x9f\xb7\xf6\x90a\x1c\x99"
>> 127.0.0.1:6379> DEL w2p:myapp:important-messages-3
>> (integer) 1
>> 127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
>> (nil)
>>
>>
>> But even after that, web2py hangs with this simple code:
>>
>> *r = cache.redis('important-messages-3', **lambda: request.now, *
>> *time_expire=30)*
>>
>> This happens only with that specific key. I can set the key to 
>> "important-messages-2", "important-messages-999", "important-messages-A", 
>> anything I can think of, but with that specific key it hangs.
>>
>> We have several websites (around 200), and this problem has happened 
>> about 5 or 6 times on different websites, but it was always the same 
>> problem. The only solution I had (until now) was to create a new account 
>> for the user (that explains why it worked with a new account: the new 
>> account had a different auth.user.id, so the key to store in Redis was 
>> different).

[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-20 Thread Lisandro
Sorry to bother you again with this, but I think I've found the problem.
*The problem is apparently with Redis integration. *It had nothing to do 
with connections, database, sessions, none of that. Here is what I've found.

Remember, the line where my app hangs is this:

*session.important_messages = cache.redis('important-messages-%s' % 
auth.user.id,*
*  lambda: 
get_important_messages(), *
* time_expire=180)*


As the problem only occurred in production, on the website of my customer, 
I asked him to allow me to play a little with the code. 
So, the first thing I did was to cache request.now instead of calling the 
function "get_important_messages()", but the problem remained.
Then I thought "maybe if I change the key..." and I changed the original 
code to this:

*session.important_messages = cache.redis('important-messages',*
* lambda: get_important_messages(),*
* time_expire=180)*


*Notice that the only thing I changed was the key to store in Redis. And it 
worked! *I thought that maybe "auth.user.id" was some large number, but I 
checked and the user ID is 3. Tried to pass it like int(auth.user.id) but I 
had no success. *App still hangs when I try to retrieve that specific key*. 
Only that key.

I've connected to redis-cli and it tells me that the key isn't there.
So I set a "hello" value for the key, I get it, then I deleted it:

$ redis-cli
127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
(nil)
127.0.0.1:6379> SET w2p:myapp:important-messages-3 "hello"
OK
127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
"\x00\x05hello\x06\x00\xf5\x9f\xb7\xf6\x90a\x1c\x99"
127.0.0.1:6379> DEL w2p:myapp:important-messages-3
(integer) 1
127.0.0.1:6379> DUMP w2p:myapp:important-messages-3
(nil)


But even after that, web2py hangs with this simple code:

*r = cache.redis('important-messages-3', **lambda: request.now, *
*time_expire=30)*

This happens only with that specific key. I can set the key to 
"important-messages-2", "important-messages-999", "important-messages-A", 
anything I can think of, but with that specific key it hangs.

We have several websites (around 200), and this problem has happened about 
5 or 6 times on different websites, but it was always the same problem. The 
only solution I had (until now) was to create a new account for the user 
(that explains why it worked with a new account, that is because the new 
account had a different auth.user.id, so the key to store in redis was 
different).

Could this be a bug in the redis_cache.py integration?
Maybe I should open a new thread about this, right?
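
For what it's worth, a plausible mechanism (my assumption, not verified against gluon/contrib/redis_cache.py) is that the cache guards each cached key with a companion KEY:__lock entry, so a lock left behind by an interrupted request would make every later read of that same key block. A toy stand-in with a plain dict in place of Redis illustrates the symptom:

```python
import time

store = {}  # stand-in for Redis; keys here play the role of cache entries

def acquire_lock(key, wait=0.01, tries=3):
    """Try to take KEY:__lock, retrying a few times before giving up.
    A stale lock (never released) makes every caller spin and then fail."""
    lock = key + ':__lock'
    for _ in range(tries):
        if lock not in store:
            store[lock] = time.time()
            return True
        time.sleep(wait)  # every reader of this one key waits here
    return False

# Simulate a lock left behind by an interrupted request:
store['w2p:myapp:important-messages-3:__lock'] = time.time()
print(acquire_lock('w2p:myapp:important-messages-3'))  # False: only this key blocks
print(acquire_lock('w2p:myapp:important-messages-2'))  # True: other keys work fine
```

If that theory holds, deleting the stray `w2p:myapp:important-messages-3:__lock` key from redis-cli would unblock the account without having to create a new one.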


El jueves, 19 de abril de 2018, 10:27:46 (UTC-3), Lisandro escribió:
>
> Hi there,
> I've found the issue, but I still don't know how it is produced.
> Anthony was right from the beginning when he said "the app is not hanging 
> because the locks are being held, but rather the locks are being held 
> because the app is hanging"
> Since that comment, I was waiting for the problem to happen again to 
> decompile the app and print some logs to see exactly the line of code where 
> the application hangs. 
>
> So that's what I did, and *I've found that my app indeed hangs at a 
> specific line of code in models/db.py:*
> This is my models/db.py, summarized:
>
>
> if auth.is_logged_in() and auth.user.responsable:
>
> 
>
> *# --- THIS IS THE LINE WHERE THE CODE HANGS --*
> *session.important_messages = cache.redis('important_messages-%s' % 
> auth.user.id <http://auth.user.id>,*
> * lambda: 
> get_important_messages(), *
> * time_expire=180)*
>
>
>
>
> So I checked what the function "get_important_messages()" does, and I see 
> that it connects to a webservice (also developed with web2py):
>
>
> def get_important_messages():
>     from gluon.contrib.simplejsonrpc import ServerProxy
>
>     webservice = ServerProxy('https://main-app-domain.com/ws/call/jsonrpc?token=XXX1')
>     try:
>         result = webservice.get_account_info(CONFIG.customer_id)
>     except Exception as e:
>         result = []
>     return result
>
>
>
> Then I went to double check my nginx error.log, this time looking for 
> errors in the URL that the app uses to connect to the webservice. 
> Surprisingly, I'm seeing a few timeouts everyday for that URL:
>
> 2018/04/17 15:08:22 [error] 23587#23587: *93711423 upstream

[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-19 Thread Lisandro
hile reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX7 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/17 15:12:50 [error] 23589#23589: *93721468 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX8 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 10:39:39 [error] 16600#16600: *89723537 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX7 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 10:40:10 [error] 16601#16601: *89724987 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX9 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 10:40:11 [error] 16602#16602: *89725040 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX9 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 16:59:46 [error] 17874#17874: *90771814 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX8 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 17:00:56 [error] 17877#17877: *90774663 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX8 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/16 17:01:11 [error] 17879#17879: *90775407 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX9 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"
2018/04/15 13:46:46 [error] 11395#11395: *86829630 upstream timed out (110: 
Connection timed out) while reading response header from upstream, client: 
MY.OWN.SERVER.IP, server: main-app-domain.com, request: "POST 
/ws/call/jsonrpc?token=XXX9 HTTP/1.1", upstream: 
"uwsgi://unix:///tmp/medios.socket", host: "main-app-domain.com"


So, what I know now is that *the problem is these timeouts that occur 
occasionally when an app tries to connect to the main webservice with this 
code:*

webservice = ServerProxy('https://main-app-domain.com/ws/call/jsonrpc?token=XXX1')



This is the code of the ws.py controller that implements the webservice:

# -*- coding: utf-8 -*-

from gluon.tools import Service

service = Service()


def call():
    if not request.vars.token or not db(db.websites.token == request.vars.token).count():
        raise HTTP(403)
    session.forget()
    return service()



Notice that the call receives a token, and every app that tries to connect 
has its own token, in order to validate the connection.
I'm not sure why some of the calls to the webservice hang, but I'm sure of 
this:

   - While some of these calls time out, other identical calls work 
   properly (and they are all identical, just calls to connect to the 
   webservice).
   - Just in case, I've checked that my nginx configuration isn't applying 
   requests limits to my server IP or something like that, but no warning or 
   error regarding this is showed in the nginx error.log
   - Also, just in case, I checked my pgBouncer log to see if connections 
   to the main database are exhausted, but that's not the case either 
   (actually, if this was the case, I would see error tickets created and also 
   any other attempt of connection to the webservice would fail, when this is 
   not happening).


Now I'm lost here; I don't see how the connection attempt to the 
webservice could fail. 
Maybe network problems, but those should affect other connections as well.
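
One mitigation worth trying while the root cause is unknown: bound the RPC call with a socket-level timeout, so a hung upstream costs a few seconds instead of blocking the whole request until nginx's 60-second timeout. This is a sketch under assumptions — `call_with_timeout` and `make_proxy` are hypothetical names, and the fallback to `[]` mirrors the existing `get_important_messages()` above:

```python
import socket

def call_with_timeout(make_proxy, customer_id, timeout=5.0):
    """Run the webservice call with a temporary default socket timeout,
    falling back to [] on any failure (same fallback as the original code)."""
    old = socket.getdefaulttimeout()
    socket.setdefaulttimeout(timeout)  # applies to sockets opened after this
    try:
        webservice = make_proxy()  # e.g. lambda: ServerProxy('https://...?token=XXX1')
        return webservice.get_account_info(customer_id)
    except Exception:
        return []
    finally:
        socket.setdefaulttimeout(old)  # restore whatever was set before
```

Note the proxy must be constructed inside the timeout window, so the connection it opens inherits the timeout.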

Any comment or suggestion will be much appreciated.
Regards,
Lisandro.






El lunes, 16 de abril de 2018, 18:57:47 (UTC-3), Lisandro escribió:
>
> Hi, thank you both for your time and concern.

[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-16 Thread Lisandro
Hi, thank you both for your time and concern.

@Richard: this particular website was still running with sessions stored in 
Redis. As we have several websites, moving sessions to Redis is something 
that we will do progressively in the next weeks.

@Anthony: the database server is PostgreSQL, running in the same VPS, so I 
wouldn't say it's due to network problems. I do have pgBouncer and I limit 
the pool size to only 1 connection (with 2 of reserve pool) per database. 
The app didn't have much load (actually it almost never has), but in this 
situation, with that query hanging for 60 seconds, it's probable that the 
ticket error was because there were no more connections available for that 
db (for example, if the user with the problem tried simultaneously on a 
laptop, a PC, and his mobile phone). 


Some (weird) points about the problem:

   - While it occurs in a specific account, other user accounts can 
   login and work perfectly with the app.
   - As an admin, I have the permission to impersonate other user accounts. 
   When the problem happens, I can impersonate any account but the one with 
   the problem (the impersonation is successfull, but the same timeout 
   presents after I'm impersonating the account).
   - Problem doesn't go away deleting all web2py_session_table records and 
   clearing cookies.
   - Problem doesn't go away changing the account email or password.
   - The only solution I've been applying last times it happened, was to 
   create a new account for the user and invalidate the old one.


Today, when the problem happened, I created the new account for the user 
and moved the sessions to Redis. Maybe I should have kept sessions in the 
db, in order to debug the problem with that account. Now it's not possible 
anymore, because I already moved to Redis. Of course I could move the 
sessions back to the db, but I don't like the idea of debugging in production 
on the website of a customer, especially one who had a recent issue with this.

So, I'll wait if it happens again, and I'll try to leave the account there 
to do some tests.
Thank you very much for your time!


El lunes, 16 de abril de 2018, 17:31:47 (UTC-3), Anthony escribió:
>
> Where is the database server running? Is it possible there are occasional 
> network problems connecting to it?
>
> Anthony
>
> On Monday, April 16, 2018 at 3:15:54 PM UTC-4, Lisandro wrote:
>>
>> Hi there, sorry to bother again, I have some additional info that could 
>> help.
>>
>> The problem happened again, exactly the same as the other times. 
>> But this time an error ticket was created with this traceback:
>>
>> Traceback (most recent call last):
>>   File "/var/www/medios/gluon/main.py", line 463, in wsgibase
>>     session._try_store_in_db(request, response)
>>   File "/var/www/medios/gluon/globals.py", line 1152, in _try_store_in_db
>>     if not table._db(table.id == record_id).update(**dd):
>>   File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2117, in update
>>     ret = db._adapter.update("%s" % table._tablename, self.query, fields)
>>   File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 988, in update
>>     raise e
>> DatabaseError: query_wait_timeout
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>>
>>
>>
>> Could this indicate that for some reason web2py is failing to store the 
>> session?
>> Or could it still be that a deadlock in my app code is producing this 
>> error?
>>
>>
>> El viernes, 6 de abril de 2018, 18:59:28 (UTC-3), Lisandro escribió:
>>>
>>> Oh, I see, you made a good point there, I hadn't realised.
>>>
>>> I guess I will have to take a closer look to my app code. Considering 
>>> that the problem exists in specific accounts while others work ok, and 
>>> considering also that the problem happens with any request that that 
>>> specific user makes to any controller/function, I'm thinking: what does my 
>>> app do different for a user compared to another one at request level? For 
>>> "request level" I mean all the code the app runs in every request, to 
>>> start, the models/db.py
>>>
>>> I'll take a closer look to that and will post another message here if I 
>>> find something that could signal the root cause of the issue. 
>>>
>>> Thank you very much for your help!
>>>
>>>
>>>

[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-16 Thread Lisandro
Hi there, sorry to bother again, I have some additional info that could 
help.

The problem happened again, exactly the same as the other times. 
But this time an error ticket was created with this traceback:

Traceback (most recent call last):
  File "/var/www/medios/gluon/main.py", line 463, in wsgibase
    session._try_store_in_db(request, response)
  File "/var/www/medios/gluon/globals.py", line 1152, in _try_store_in_db
    if not table._db(table.id == record_id).update(**dd):
  File "/var/www/medios/gluon/packages/dal/pydal/objects.py", line 2117, in update
    ret = db._adapter.update("%s" % table._tablename, self.query, fields)
  File "/var/www/medios/gluon/packages/dal/pydal/adapters/base.py", line 988, in update
    raise e
DatabaseError: query_wait_timeout
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

Could this indicate that for some reason web2py is failing to store the 
session?
Or could it still be that a deadlock in my app code is producing this error?


El viernes, 6 de abril de 2018, 18:59:28 (UTC-3), Lisandro escribió:
>
> Oh, I see, you made a good point there, I hadn't realised.
>
> I guess I will have to take a closer look to my app code. Considering that 
> the problem exists in specific accounts while others work ok, and 
> considering also that the problem happens with any request that that 
> specific user makes to any controller/function, I'm thinking: what does my 
> app do different for a user compared to another one at request level? For 
> "request level" I mean all the code the app runs in every request, to 
> start, the models/db.py
>
> I'll take a closer look to that and will post another message here if I 
> find something that could signal the root cause of the issue. 
>
> Thank you very much for your help!
>
>
>
> El viernes, 6 de abril de 2018, 16:05:13 (UTC-3), Anthony escribió:
>>
>> On Friday, April 6, 2018 at 10:58:56 AM UTC-4, Lisandro wrote:
>>>
>>> Yes, in fact, I've been running that SQL command to check for locks, and 
>>> sometimes I see that lock on other tables, but that other locks live for 
>>> less than a second. However, when the problem happens, the lock on the 
>>> auth_user and web2py_session tables remains there for the whole 60 seconds.
>>>
>>
>> Yes, but that doesn't mean the lock or the database has anything to do 
>> with the app hanging. The locks will be held for the duration of the 
>> database transaction, and web2py wraps HTTP requests in a transaction, so 
>> the transaction doesn't end until the request ends (unless you explicitly 
>> call db.commit()). In other words, the app is not hanging because the locks 
>> are being held, but rather the locks are being held because the app is 
>> hanging. First you have to figure out why the app is hanging (it could be 
>> the database, but could be something else).
>>
>> Anthony
>>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [web2py] How to retrieve info about sessions when they are stored in Redis?

2018-04-15 Thread Lisandro
Thank you very much Richard!
With your help I was able to retrieve the keys of the sessions stored in 
Redis with this simple line of code:

session_keys = cache.redis.r_server.smembers('w2p:sess:%s:id_idx' % 
application_name)

That line returns a set with all the session keys stored in Redis.
Notice that the keys retrieved may be expired, so if you need to check how 
many of those keys are still valid, you would have to iterate over the set 
checking the ttl of each key, like this:

valid_session_keys = [key for key in session_keys if cache.redis.r_server.
ttl(key) > 0]

I'm not sure why the deleted keys remain listed with a negative TTL, but 
I presume the Redis server does some automatic cleaning periodically, 
definitively deleting those keys with a negative TTL.
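
Building on that, a small helper can both count the live sessions and prune the expired members from the index set. This is a sketch assuming a redis-py-style client exposing `smembers`/`ttl`/`srem` (as `cache.redis.r_server` does); the `FakeRedis` class is only a stand-in so the logic can be exercised offline:

```python
def prune_sessions(r, application):
    """Return the session keys that are still alive, and remove expired
    ones from the w2p:sess:<app>:id_idx index set."""
    idx = 'w2p:sess:%s:id_idx' % application
    keys = r.smembers(idx)
    live = [k for k in keys if r.ttl(k) > 0]
    dead = [k for k in keys if r.ttl(k) <= 0]  # ttl is -2 for missing keys
    if dead:
        r.srem(idx, *dead)
    return live

class FakeRedis(object):
    """Minimal in-memory stand-in for a redis-py client, for offline testing."""
    def __init__(self, sets, ttls):
        self.sets, self.ttls = sets, ttls
    def smembers(self, key):
        return set(self.sets.get(key, ()))
    def ttl(self, key):
        return self.ttls.get(key, -2)
    def srem(self, key, *members):
        self.sets[key] -= set(members)
```

With a real client you would call it as `prune_sessions(cache.redis.r_server, application_name)`.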

Thanks again!
Regards,
Lisandro.



El miércoles, 11 de abril de 2018, 13:49:25 (UTC-3), Richard escribió:
>
> I gave a look directly in the redis shell
>
> redis-cli -h localhost -p 6379 -a PASSWORD_IF_YOU_SET_ONE
>
> It appears that each session store will have a different key which goes 
> like that : w2p:sess:APP_NAME:SOME_ID
>
> And w2p:sess:APP_NAME:id_idx will contains a set of each unique session 
> existings, so you would have to access this list of session id then access 
> the actual session.
>
> You can list keys in redis with :
>
> SCAN 0
>
> Better than KEYS, which could cause locking and heavy memory usage 
>
> To list the set of existing sessions in ...:id_idx you need to use
>
> smembers w2p:sess:APP_NAME:id_idx
>
> It might help you figure out how to manage your Redis sessions from Python 
> and better understand the session contrib: 
> https://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/gluon/contrib/redis_session.py
>
> Good luck
>
> Richard
>
>
>
> On Mon, Apr 9, 2018 at 12:11 AM, Lisandro  > wrote:
>
>> Recently I moved the sessions from the database to Redis, and I'm 
>> wondering: is there a way to retrieve info about sessions when they are 
>> stored in Redis? 
>> For example, when sessions are stored in the database, you have the 
>> option to use SQL to do some stuff like counting or deleting sessions. How 
>> to do it when sessions are stored in Redis?
>>
>> I also use Redis to cache HTML responses from web2py and any other stuff 
>> that can be cached (lists, dictionaries, etc). In order to be able to list 
>> the keys cached by one specific web2py application, I have written this 
>> custom function to retrieve those keys. 
>> I've read that it's not a good idea to use cache.redis.r_server.keys() 
>> method on production 
>> <https://stackoverflow.com/questions/23296681/redis-safely-retrieving-a-small-set-of-keys-in-production-database>,
>>  
>> so I wrote this code based on what I saw in the clear() method at 
>> gluon.contrib.redis_cache 
>> <https://github.com/web2py/web2py/blob/master/gluon/contrib/redis_cache.py#L233>
>> :
>>
>> def get_cache_keys(application, prefix=''):
>>     import re
>>     result = []
>>     regex = ':%s*' % prefix
>>     prefix = 'w2p:%s' % application
>>     cache_set = 'w2p:%s:___cache_set' % application
>>     r = re.compile(regex)
>>     buckets = current.cache.redis.r_server.smembers(cache_set)  # get all buckets
>>     if buckets:  # get all keys in buckets
>>         keys = current.cache.redis.r_server.sunion(buckets)
>>     else:
>>         return result
>>     for a in keys:
>>         if r.match(str(a).replace(prefix, '', 1)):
>>             result.append(a)
>>     return result
>>
>>
>> With that code, I'm able to list all the keys cached by a web2py 
>> application.
>> As I'm also using Redis to store sessions, I want to be able to list all 
>> the session keys.
>> I've tried code similar to the one shown above, replacing this:
>>
>> prefix = 'w2p:sess:%s' % application
>> cache_set = 'w2p:sess:%s:id_idx' % application
>>
>> But that doesn't work. Is it possible to achieve what I want? Any 
>> suggestion will be much appreciated.
>>
>> Regards,
>> Lisandro.
>>

[web2py] How to retrieve info about sessions when they are stored in Redis?

2018-04-08 Thread Lisandro
Recently I moved the sessions from the database to Redis, and I'm 
wondering: is there a way to retrieve info about sessions when they are 
stored in Redis? 
For example, when sessions are stored in the database, you have the option 
to use SQL to do some stuff like counting or deleting sessions. How to do 
it when sessions are stored in Redis?

I also use Redis to cache HTML responses from web2py and any other stuff 
that can be cached (lists, dictionaries, etc). In order to be able to list 
the keys cached by one specific web2py application, I have written this 
custom function to retrieve those keys. 
I've read that it's not a good idea to use cache.redis.r_server.keys() 
method on production 
<https://stackoverflow.com/questions/23296681/redis-safely-retrieving-a-small-set-of-keys-in-production-database>,
 
so I wrote this code based on what I saw in the clear() method at 
gluon.contrib.redis_cache 
<https://github.com/web2py/web2py/blob/master/gluon/contrib/redis_cache.py#L233>
:

def get_cache_keys(application, prefix=''):
    import re
    result = []
    regex = ':%s*' % prefix
    prefix = 'w2p:%s' % application
    cache_set = 'w2p:%s:___cache_set' % application
    r = re.compile(regex)
    buckets = current.cache.redis.r_server.smembers(cache_set)  # get all buckets
    if buckets:  # get all keys in buckets
        keys = current.cache.redis.r_server.sunion(buckets)
    else:
        return result
    for a in keys:
        if r.match(str(a).replace(prefix, '', 1)):
            result.append(a)
    return result


With that code, I'm able to list all the keys cached by a web2py 
application.
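
As a quick sanity check of the filtering logic above, here is an offline sketch: the bucket lookup is replaced by a fixed key list, since it normally comes from `current.cache.redis`. One caveat worth noting: the `'*'` in `':%s*' % prefix` is a regex repeat (zero or more of the previous character), not a glob wildcard, but it happens to behave as a prefix filter in practice:

```python
import re

def filter_keys(keys, application, prefix=''):
    """Keep only keys whose part after 'w2p:<app>' matches ':<prefix>*',
    mirroring the match step of get_cache_keys()."""
    pattern = re.compile(':%s*' % prefix)
    app_prefix = 'w2p:%s' % application
    return [k for k in keys if pattern.match(k.replace(app_prefix, '', 1))]

keys = ['w2p:myapp:page-1', 'w2p:myapp:menu', 'w2p:other:page-2']
print(filter_keys(keys, 'myapp', prefix='page'))  # ['w2p:myapp:page-1']
```

Keys from other applications never match, because stripping `w2p:myapp` leaves them without a leading colon.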
As I'm also using Redis to store sessions, I want to be able to list all 
the session keys.
I've tried code similar to the one shown above, replacing this:

prefix = 'w2p:sess:%s' % application
cache_set = 'w2p:sess:%s:id_idx' % application

But that doesn't work. Is it possible to achieve what I want? Any 
suggestion will be much appreciated.

Regards,
Lisandro.



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-06 Thread Lisandro
Oh, I see, you made a good point there, I hadn't realised.

I guess I will have to take a closer look to my app code. Considering that 
the problem exists in specific accounts while others work ok, and 
considering also that the problem happens with any request that that 
specific user makes to any controller/function, I'm thinking: what does my 
app do different for a user compared to another one at request level? For 
"request level" I mean all the code the app runs in every request, to 
start, the models/db.py

I'll take a closer look to that and will post another message here if I 
find something that could signal the root cause of the issue. 

Thank you very much for your help!



El viernes, 6 de abril de 2018, 16:05:13 (UTC-3), Anthony escribió:
>
> On Friday, April 6, 2018 at 10:58:56 AM UTC-4, Lisandro wrote:
>>
>> Yes, in fact, I've been running that SQL command to check for locks, and 
>> sometimes I see that lock on other tables, but that other locks live for 
>> less than a second. However, when the problem happens, the lock on the 
>> auth_user and web2py_session tables remains there for the whole 60 seconds.
>>
>
> Yes, but that doesn't mean the lock or the database has anything to do 
> with the app hanging. The locks will be held for the duration of the 
> database transaction, and web2py wraps HTTP requests in a transaction, so 
> the transaction doesn't end until the request ends (unless you explicitly 
> call db.commit()). In other words, the app is not hanging because the locks 
> are being held, but rather the locks are being held because the app is 
> hanging. First you have to figure out why the app is hanging (it could be 
> the database, but could be something else).
>
> Anthony
>



[web2py] Re: Web2py locks row at auth_user and web2py_session_* tables and doesn't unlock them

2018-04-06 Thread Lisandro
Yes, in fact, I've been running that SQL command to check for locks, and 
sometimes I see that lock on other tables, but that other locks live for 
less than a second. However, when the problem happens, the lock on the 
auth_user and web2py_session tables remains there for the whole 60 seconds. 

Forgot to mention that, when the problem occurs in a specific account, I 
experience the same problem if I try to impersonate that account. It 
seems as if something had "broken" within that account, and deleting 
web2py_session_* rows doesn't help. Actually, until now, the only way I 
could "solve" it was creating a new account (which, of course, is not 
technically a solution) or moving the sessions to Redis.

In case it helps, this is the controller function that processes the post 
for the login.
I don't expect you to check all the code, because I'm not sure how this could 
be the problem. 

def user():
    if request.env.request_method == 'POST':
        email = request.post_vars.email.lower().strip() if request.post_vars.email else ''
        password = request.post_vars.password
        recordarme = bool(int(request.post_vars.recordarme or 0))
        errores = []
        if not email:
            errores.append(['email', 'Ingresa tu dirección de email'])
        if not password:
            errores.append(['password', 'Ingresa tu contraseña'])
        if errores:
            return response.json({'success': False, 'errores': errores})

        usuario = db(db.auth_user.email.lower() == email).select().first()
        if not usuario:
            session.failed_login_attempt = True
            return response.json({
                'success': False,
                'message': 'Login data invalid'
            })
        elif usuario.registration_key:
            return response.json({
                'success': False,
                'message': 'Registration is pending for confirmation'
            })
        else:
            usuario = auth.login_bare(usuario.email, password)
            if not usuario:
                return response.json({
                    'success': False,
                    'message': 'Login data invalid'
                })
            session.auth.expiration = auth.settings.expiration
            if recordarme:
                session.auth.expiration = auth.settings.long_expiration
                session.auth.remember_me = True
            if response.cookies.get(response.session_id_name):
                response.cookies[response.session_id_name]["expires"] = session.auth.expiration
            try:
                db.auth_event.insert(time_stamp=request.now,
                                     client_ip=request.client, user_id=usuario.id,
                                     origin='auth',
                                     description='User %s Logged-in' % usuario.id)
            except:
                pass
            return response.json({'success': True})






El jueves, 5 de abril de 2018, 17:42:46 (UTC-3), Anthony escribió:
>
> For any select that takes place within a transaction, Postgres by default 
> automatically acquires an access share lock on the involved tables, so this 
> should be happening on all requests. However, access share locks do not 
> conflict with each other, so I don't think these locks are necessarily the 
> source of your problem.
>
> Anthony
>
> On Thursday, April 5, 2018 at 10:58:07 AM UTC-4, Lisandro wrote:
>>
>> Hi there! This is somehow related to this other topic [1], which I have 
>> closed because I've found some new evidence and I though it would be better 
>> to open a new topic.
>>
>> I'm having this problem where a simple select on auth_user table hangs 
>> indefinitely, until timeout.
>> The problem occurs in one specific user account. The user logs in, but 
>> after the successfull login, any request hangs.
>> I checked the postgres long running queries and I see this query hanging 
>> until timeout:
>>
>> SELECT auth_user.id, auth_user.first_name, auth_user.last_name, 
>> auth_user.email, auth_user.password, auth_user.registration_key, 
>> auth_user.reset_password_key, auth_user.registration_id, auth_user.alta, 
>> auth_user.plantel, auth_user.responsable, auth_user.nombre, auth_user.telefono, 
>> auth_user.autor, auth_user.foto, auth_user.foto_temp, auth_user.moderador, 
>> auth_user.descripcion, auth_user.facebook, auth_user.twitter, auth_user.linkedin, 
>> auth_user.gplus FROM auth_user WHERE (auth_user.id = 2) LIMIT 1 
>> OFFSET 0;
>>
>>
>> As you see, the query is a simple select to the auth_user table. Also, 
>> notice that it uses LIMIT, so it retrieves one on

[web2py] Re: Request with login privileges hangs for a specific user account, how to debug it?

2018-04-06 Thread Lisandro
Hi Anthony, again, thank you very much for your time, I really appreciate 
it.

El jueves, 5 de abril de 2018, 17:52:36 (UTC-3), Anthony escribió:
>
> On Thursday, April 5, 2018 at 2:57:20 PM UTC-4, Lisandro wrote:
>>
>> Thank you Anthony, yes I'm aware of that.
>> I use it like that for this reason: sometimes (not very often) an 
>> external app modifies a field of the auth_user table (specifically, it 
>> sets a flag field to true or false). However, that change isn't 
>> reflected in auth.user. In order to refresh it, the user would need to 
>> log out and log in again. So I retrieve the auth_user record again and 
>> store it in response.user.
>>
>> Maybe it could be done like this:
>> if auth.is_logged_in():
>>     auth.user = db.auth_user[auth.user.id]
>>
>> But I thought it could break something with Auth methods, so I store 
>> it in response.user.
>>
>
> Got it. Yeah, don't replace auth.user -- create a separate variable.
>  
>
>> Anyway, I set this topic as "no action needed" because I opened a new 
>> topic, I've found some more info and I think the issue isn't related to 
>> that sentence.
>>
>
> But you indicated the select generated by that code was causing Postgres 
> to hang. Are you sure that is the case? In other words, is the web2py code 
> getting stuck at that line and ultimately causing your server to time out? 
> Have you tried adding some logging statements to your code to determine 
> exactly where it is getting stuck?
>

To tell the truth, I'm not exactly sure that is the line where the code 
hangs; I assumed so because of the select query taking too long, but I 
can't be sure.
The problem is that the incident occurs sporadically, and the worst part 
is that I can't reproduce it. Also, since it happens on the production 
server, I can't afford to modify the app code there, given that I would be 
making changes to an application that is used by our customers, so I'm in 
a tricky situation. 
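
For what it's worth, Anthony's suggestion of adding logging statements 
could be sketched along these lines (a minimal, framework-free sketch; the 
logger name, the one-second threshold, and the `timed` helper are my own 
inventions, not web2py API):

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("slow_query_debug")

def timed(label, func, *args, **kwargs):
    """Run func, logging entry/exit so a hang shows the last label reached."""
    logger.debug("entering %s", label)
    start = time.time()
    result = func(*args, **kwargs)
    elapsed = time.time() - start
    if elapsed > 1.0:  # warn if the step took more than a second
        logger.warning("%s took %.2fs", label, elapsed)
    logger.debug("leaving %s after %.2fs", label, elapsed)
    return result

# In models/db.py the suspect line could then be wrapped as (hypothetical):
# response.user = timed("auth_user select",
#                       lambda: db.auth_user[auth.user.id])
```

If the log shows "entering auth_user select" with no matching "leaving" 
line when the timeout hits, that select really is where the request hangs; 
otherwise the stall is somewhere else.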

I've made plans to move sessions to Redis, but as a developer I would 
still like to understand the root cause of the issue :)
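
For reference, web2py ships Redis-backed sessions in gluon.contrib, and 
the switch is a few lines in the model file. A sketch, assuming a local 
Redis on the default port (the host, port and expiry setting are 
assumptions to adapt):

```python
# in models/db.py -- keep sessions in Redis instead of the database
from gluon.contrib.redis_utils import RConn
from gluon.contrib.redis_session import RedisSession

rconn = RConn('localhost', 6379)                      # assumed Redis address
sessiondb = RedisSession(redis_conn=rconn, session_expiry=False)
session.connect(request, response, db=sessiondb)
```

This also takes the web2py_session table out of the picture, which removes 
one source of per-request row access on the database.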

Anyway, I'll wait until the incident happens again, hoping that it happens 
in an app of a "small" customer so I can do some tests.


>
> Anthony
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[web2py] Re: Request with login privileges hangs for a specific user account, how to debug it?

2018-04-05 Thread Lisandro
Thank you Anthony, yes I'm aware of that.
I use it like that for this reason: sometimes (not very often) an external 
app modifies a field of the auth_user table (specifically, it sets a flag 
field to true or false). However, that change isn't reflected in 
auth.user. In order to refresh it, the user would need to log out and log 
in again. So I retrieve the auth_user record again and store it in 
response.user.

Maybe it could be done like this:
if auth.is_logged_in():
    auth.user = db.auth_user[auth.user.id]

But I thought it could break something with Auth methods, so I store it 
in response.user.
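
A lighter variant of the same idea would be to re-read only the 
externally-updated flag instead of the whole record. This is a sketch for 
the model file; `moderador` is just an illustrative field name, since the 
actual flag field isn't named in the thread:

```python
# re-read only the changed flag instead of the full auth_user row
if auth.is_logged_in():
    row = db(db.auth_user.id == auth.user.id).select(
        db.auth_user.moderador,          # illustrative flag field
        limitby=(0, 1)).first()
    response.user_flag = row.moderador if row else None
```

auth.user itself stays untouched this way, so none of the Auth machinery 
is affected.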

Anyway, I set this topic as "no action needed" because I opened a new 
topic; I've found some more info and I think the issue isn't related to 
that sentence. In fact, if it were related to that sentence, the problem 
would trigger for every user account. Today we had two more of those 
cases. This is the new topic I posted:
https://groups.google.com/forum/#!topic/web2py/E9jrmf5E-B4

Regards,
Lisandro.


El jueves, 5 de abril de 2018, 12:55:26 (UTC-3), Anthony escribió:
>
> On Tuesday, April 3, 2018 at 8:43:31 AM UTC-4, Lisandro wrote:
>>
>> I store the sessions in the database, so there is no problem with a 
>> locked file.
>>
>> I've just found something interesting that could help to figure out: when 
>> the problem presents, I checked the pg_stat_activity in postgres to see if 
>> there was a long running query, and there is indeed. But the query is a 
>> simple select to the auth_user table, to select the row of the logged in 
>> user. How can this query take that long? Does web2py lock the user row? If 
>> so, how do I release it?
>>
>> Something to consider: in my db.py, at the end, I do this:
>>
>> response.user = db.auth_user[auth.user.id] if auth.is_logged_in() else 
>> None
>>
>
> FYI, auth.user is the user's record from db.auth_user (minus the password 
> field and the update_record and delete_record attributes), so depending on 
> what you are doing with response.user, you might be able to replace the 
> above with:
>
> response.user = auth.user
>
> Or just use auth.user directly in your code.
>
> Anthony
>


