Hello,

I'm using web2py with a relatively large legacy database. We have
about 70 tables, and models/db.py is close to 1000 lines long.

The problem I'm facing is that all of these Field() constructors and
define_table() calls take about 150-200ms to execute, and this
overhead occurs each time any request is made to web2py. Compare this
to a minimal web2py app, which might have a (reasonable) 10-30ms of
overhead before response data starts flowing to the client.

I was hoping that a simple solution would be to declare the DAL
structure once in an importable module, which would be run once on
web2py startup rather than at each request. I could then deepcopy it
in my model file (which I think would be faster than all the calls to
define_table(), Field(), etc.), and then establish a connection to the
database from this copied DAL.
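
To make the idea concrete, here is a minimal plain-Python sketch of what I have in mind (the dict-based schema and function names are illustrative stand-ins, not the real web2py API): describe the schema once as passive data at import time, then hand each request a cheap deepcopy instead of re-running all the constructor work.

```python
import copy

# Built once, at module import time (i.e., at web2py startup), as a
# stand-in for the ~70 define_table()/Field() calls in models/db.py.
SCHEMA_TEMPLATE = {
    "person": {"fields": ["id", "name", "email"]},
    "address": {"fields": ["id", "person_id", "city"]},
    # ... and so on for the rest of the tables
}

def models_for_request():
    """Return a per-request copy of the schema.  Each request may
    mutate its own copy without affecting the shared template."""
    return copy.deepcopy(SCHEMA_TEMPLATE)
```

The per-request cost then becomes a single deepcopy rather than re-executing ~1000 lines of constructors; the open question is how to attach a live database connection to the copied structure.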

Unfortunately, there are a few problems with this:
  1. If a DAL instance is to be connected to the database, the
connection must be established in the constructor. It doesn't seem
possible to do "db = DAL(None)" and then connect after the fact.
Also, it looks like some db-specific behavior is set up in the DAL
constructor based on the connection URL - this wouldn't happen in
the case of DAL(None).

  2. Table and Field instances have a reference to db, so it seems
that define_table() needs to be called *after* the DAL connection has
been established in order to set up these references.

I suppose it would be possible to deepcopy a DAL(None) instance that
has been established with Tables and Fields, and monkey-patch the db
references throughout the DAL structure, but chances seem good that
this will create subtle bugs down the road if the internal DAL
implementation ever changes.
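
In plain Python, the patching approach would look something like the sketch below. FakeDB and FakeTable are illustrative stand-ins for the DAL internals, not the real classes; the one nice property is that seeding deepcopy's memo dict rewrites the back-references during the copy, so no manual walk of the structure is needed.

```python
import copy

class FakeDB:
    """Stand-in for a DAL: uri=None plays the role of DAL(None)."""
    def __init__(self, uri=None):
        self.uri = uri
        self.tables = {}

    def define_table(self, name):
        self.tables[name] = FakeTable(self, name)

class FakeTable:
    """Stand-in for a Table: holds the db back-reference at issue."""
    def __init__(self, db, name):
        self._db = db          # back-reference that must be patched
        self.name = name

# Built once at startup: an unconnected template with the full schema.
template = FakeDB(uri=None)
template.define_table("person")
template.define_table("address")

def per_request_db(uri):
    """Connect cheaply, then deepcopy the template's tables into the
    new db.  Pre-seeding the memo dict with {id(template): db} makes
    every copied back-reference land on the new db automatically."""
    db = FakeDB(uri)                   # stands in for a cheap connect
    memo = {id(template): db}          # template refs -> new db
    db.tables = copy.deepcopy(template.tables, memo)
    return db
```

Even if this works mechanically, it still has the fragility problem described above: it bakes in assumptions about which internal attributes hold db references.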

Can anyone suggest a better way to speed up DAL creation for large
schemas?

Thanks,
Kevin
