Hi,

My db.py takes a long time to run (around 40 ms). This means that the 
overhead of db.py alone limits my throughput to 25 req/s (in a single 
process / single thread configuration). This seems quite low to me.

I have tried to find out where the time is spent and measured the 
following runtimes (average values):

 8 ms -> session.connect(request,response,db=MEMDB(cache.memcache))
 2 ms -> db = DAL('sqlite://storage.sqlite', migrate=False)
 5 ms -> auth = Auth(db)
20 ms -> define tables
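For reference, I collected these averages with a simple wrapper along these lines (the wrapper itself is a sketch; the lambda below is just a stand-in for the actual web2py calls):

```python
import time

def timed(label, fn, *args, **kwargs):
    # Run fn, print the elapsed wall-clock time in ms, and return its result.
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    print("%6.1f ms -> %s" % (elapsed_ms, label))
    return result

# Stand-in example; in db.py the call would be e.g.
#   db = timed("DAL(...)", DAL, 'sqlite://storage.sqlite', migrate=False)
db = timed("dummy call", lambda: 42)
```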

My table definitions look like this:

    # Use the authorization table for users
    web2py_user_table = auth.settings.table_user_name
    # web2py_user_table = 'web2py_user'
    db.define_table(
        web2py_user_table,
        Field('org_id',               type='integer', compute=set_org_id),
        Field('email',                unique=True),
        Field('user_doc_id',          length=128, compute=create_new_user),
        Field('password',             length=512, compute=automatic_password, type='password', readable=False, label='Password'), # TODO: use CRYPT
        # These fields are needed for authorization
        Field('registration_key',     length=512, writable=False, readable=False, default=''),
        Field('reset_password_key',   length=512, writable=False, readable=False, default=''),
        Field('registration_id',      length=512, writable=False, readable=False, default=''),
        format = '%(email)s')

    # Add constraints to the web2py_user table
    web2py_user = db[web2py_user_table]
    web2py_user.email.requires        = [IS_EMAIL(error_message=auth.messages.invalid_email), IS_NOT_IN_DB(db, '%s.email' % (web2py_user_table))]
    # web2py_user.user_doc_id.requires = IS_NOT_EMPTY(error_message=auth.messages.is_empty)
    # web2py_user.password.requires    = [IS_STRONG(min=5, special=0, upper=0), CRYPT()]

    auth.define_tables()

Those runtimes look quite high to me. Maybe some of you can give me hints:

   - Is there anything I can do to speed this up?
   - Why does defining the tables take so long, even with migrate=False?
   - Would moving to postgres improve things a lot?
   - Is sqlite the bottleneck here?
   
It is difficult for me to believe that sqlite is the culprit here: I am 
using a single client in my tests, sending requests sequentially (no 
parallel requests).
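To check that, I can compare against raw sqlite3 from plain Python. A rough micro-benchmark like the one below (not the web2py code path, just the standard library sqlite3 module against the same kind of file) should show whether connect-plus-query on the database itself is anywhere near the milliseconds measured above:

```python
import sqlite3
import time

def bench_sqlite(path, iterations=100):
    # Time connect + a trivial query + close, averaged over several
    # iterations; returns the average in milliseconds.
    total = 0.0
    for _ in range(iterations):
        t0 = time.perf_counter()
        conn = sqlite3.connect(path)
        conn.execute("SELECT 1")
        conn.close()
        total += time.perf_counter() - t0
    return total / iterations * 1000.0

if __name__ == "__main__":
    # 'storage.sqlite' is the database file from the DAL call above.
    print("avg connect+query: %.3f ms" % bench_sqlite("storage.sqlite"))
```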

Thanks,
Daniel
