[sqlalchemy] Re: Possible bug in orm/state.py

2009-05-26 Thread Bob Farrell



On May 23, 7:56 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 23, 2009, at 1:47 PM, Bob Farrell wrote:



  Hi, using scoped_session(sessionmaker()) to create my sessions: if I
  hammer a request (using Pylons) by repeatedly clicking on a link that
  uses the ORM somewhat extensively, it seems that another thread is
  getting involved with SQLAlchemy internals and pulling the rug out
  from under its feet.

 that means you are sharing a mapped instance between threads.  A
 mapped instance, when associated with a Session (i.e. not detached),
 should be considered an extension of that Session's internal
 state.  The Session isn't threadsafe, so you can't share a persistent
 instance between threads.  If you are using some kind of persistent/
 cached instances, make sure they are detached from their original
 Session first, or merge() the instances into the Session local to the
 request before using them (you can send the dont_load=True flag to
 merge() if you want to cut down on potentially needless SELECT
 statements).
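
A minimal sketch of that merge() pattern, assuming a request-scoped Session
and some cached mapped instance (the names below are illustrative, not from
the thread):

    # copy the cached instance's state into the request's own Session;
    # dont_load=True skips the SELECT that merge() would otherwise issue
    local_obj = Session.merge(cached_obj, dont_load=True)
    # use local_obj from here on; the cached copy is never touched by
    # this thread's Session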

Okay, thanks - I'll track down the problem with that in mind.



  And here's how I'm dealing with creating the sessions:

  threadlocal = threading.local()

  Session = scoped_session(sessionmaker(autocommit=True))
  Session.metadata = None

  def setup_db():
      if hasattr(threadlocal, 'engine'):
          return
      uri = config['main.engine.dburi']
      threadlocal.engine = create_engine(uri)
      Session.configure(bind=threadlocal.engine)
      if Session.metadata is None:
          Session.metadata = MetaData(threadlocal.engine)
          model.initialise(Session.metadata)

 the threading.local() is unnecessary... unless you are planning for the
 same application to be run with different .ini files in each thread,
 which would be extremely unusual.  scoped_session() already handles
 the thread-local part for you as far as Sessions are concerned, and
 Engine objects are threadsafe.

Ah, so engine = create_engine(...) will provide separate connections
as needed, rather than just a single connection? The reason I wrote
this code was that the original code we had was causing big problems -
we were repeatedly getting "QueuePool limit of size 30 overflow 10
reached, connection timed out, timeout 30" errors, so what I was
hoping to do here was limit the connections to one per thread.
Thinking about it, though, this code doesn't achieve that at all, as
any code can create a new connection with engine.connect().

I'm going to have to track down which bits of code in SQLAlchemy
implicitly create new connections so I can figure out where we're not
closing them - hopefully autocommit=True will remedy this somewhat.
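
For what it's worth, connections the ORM checks out implicitly go back to
the pool once the owning Session is closed, so a per-request cleanup along
these lines is usually enough (a sketch, assuming the scoped Session used
elsewhere in this thread):

    try:
        # ... handle the request, using Session() freely ...
        pass
    finally:
        # close the thread's Session; its connection(s) return to the pool
        Session.remove()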

Anyway, thanks a lot for the info. :-)

 Session = scoped_session(sessionmaker(autocommit=True))
 metadata = None

 def setup_db():
     global metadata
     if metadata is not None:
         return
     uri = config['main.engine.dburi']
     engine = create_engine(uri)
     Session.configure(bind=engine)
     metadata = MetaData(engine)
     model.initialise(metadata)

 if OTOH you had some reason for the threadlocal engines, then you don't
 want to use Session.configure, which configures the whole
 scoped_session().  You'd want to say
 Session(bind=my_threadlocal_engine).
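
A sketch of that per-thread variant, should it ever actually be needed
(threadlocal and config as in the earlier snippet; get_session is an
illustrative helper, not part of SQLAlchemy):

    threadlocal = threading.local()

    def get_session():
        # one Engine per thread, and a Session bound to just that engine,
        # leaving the scoped_session()'s global configuration alone
        if not hasattr(threadlocal, 'engine'):
            threadlocal.engine = create_engine(config['main.engine.dburi'])
        return Session(bind=threadlocal.engine)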



[sqlalchemy] Performance

2009-05-26 Thread Marcin Krol

Hello everyone,

Basically, I have some performance issues. Here's a somewhat complicated 
story:

I have a Host class with several relations to OS_family, OS_version, 
OS_release, Project, Reservation, etc.

The entire (sanitized) SQL log is too big to be posted here, so I copied 
it into pastebin:

http://pastebin.com/f45ea6b6c

The search process is as follows:

1. A complicated query with multiple joins on those relations is run at 
the beginning of the search (query below). That query plus the SQLA 
processing takes under a second on 400 rows of Hosts, so it's not bad 
overall.

Lines in pastebin: 1-466

Hosts are collected into a list and passed on to Mako for rendering 
HTML. The Mako template accesses fields and relations like OS_version 
on the Host objects.

While the HTML is being rendered, the remaining steps show up in the SQL log:

2. Several tables are read in for drop-down lists.

Lines in pastebin: 467-608

3. Lots of small queries happen filling in details of some objects. 
Those small queries take at least several seconds!

Lines in pastebin: 609-1087

Those queries look like:

INFO:sqlalchemy.engine.base.Engine.0x...e66c:SELECT os_version.id AS 
os_version_id, os_version.os_version AS os_version_os_version
FROM os_version
WHERE os_version.id = %(param_1)s
INFO:sqlalchemy.engine.base.Engine.0x...e66c:{'param_1': 15}
DEBUG:sqlalchemy.engine.base.Engine.0x...e66c:Col ('os_version_id', 
'os_version_os_version')
DEBUG:sqlalchemy.engine.base.Engine.0x...e66c:Row (15, 'SLED')
INFO:sqlalchemy.engine.base.Engine.0x...e66c:SELECT os_version.id AS 
os_version_id, os_version.os_version AS os_version_os_version
FROM os_version
WHERE os_version.id = %(param_1)s
INFO:sqlalchemy.engine.base.Engine.0x...e66c:{'param_1': 30}

I don't get it - shouldn't all the os_version objects have been read in already?

What I don't get is why, after this query takes place, SQLA runs a lot 
of those small queries - I included all (most?) of the necessary columns in 
the big initial query, so the Host data should be filled in by SQLA eager 
loading (which is the default except for self-referential objects according 
to the docs, and I have no self-referential objects here)?

Is this a result of subqueries in the big query? Something else?



Here's code just in case someone is patient enough to skim it:

The big query function:


def retval(req, session, columnlist, field, ip, hostname, location, 
architecture, os_kind, os_version, os_rel, os_update, kernel_bits, 
physical_box, available, avtime, additional_info, email, virt, cpu, ram, 
project, upnrunning, direction, hostsel=None, testfordates=True):

    if debug:
        logger.debug('start_retval')

    qtmpl = """
SELECT h.id AS hosts_id, h.ip AS hosts_ip, h.ip2 AS hosts_ip2, 
h.hostname AS hosts_hostname, h.location AS hosts_location, 
h.architecture_id AS hosts_architecture_id, h.os_kind_id AS 
hosts_os_kind_id, h.os_version_id AS hosts_os_version_id, h.os_update AS 
hosts_os_update, h.kernel_bits AS hosts_kernel_bits, h.additional_info 
AS hosts_additional_info, h.column_12 AS hosts_column_12, h.column_13 AS 
hosts_column_13, h.username AS hosts_username, h.password AS 
hosts_password, h.alias AS hosts_alias, h.virtualization_id AS 
hosts_virtualization_id, h.shareable AS hosts_shareable, 
h.shareable_between_projects AS hosts_shareable_between_projects, 
h.notes AS hosts_notes, h.cpu AS hosts_cpu, h.ram AS hosts_ram, 
h.column_24 AS hosts_column_24, h.batch AS hosts_batch, h.asset AS 
hosts_asset, h.owner AS hosts_owner, h.ssh_key_present AS 
hosts_ssh_key_present, h.machine_type_model AS hosts_machine_type_model, 
h.mac_address_eth_0 AS hosts_mac_address_eth_0, h.physical_box AS 
hosts_physical_box, h.up_n_running AS hosts_up_n_running, h.available AS 
hosts_available, h.project_id AS hosts_project_id, architecture.id AS 
architecture_id, architecture.architecture AS architecture_architecture, 
os_kind.id AS os_kind_id, os_kind.os_kind AS os_kind_os_kind, 
os_version.id AS os_version_id, os_version.os_version AS 
os_version_os_version, os_rel.id AS os_rel_id, os_rel.os_rel AS 
os_rel_os_rel, virtualization.id AS virtualization_id, 
virtualization.virtualization AS virtualization_virtualization, 
virtualization.color AS virtualization_color, project.id AS project_id, 
project.project AS project_project, email.id AS email_id, email.email AS 
email_email, cr.reservation_id AS reservation_id, 
cr.reservation_start_date AS reservation_start_date, 
cr.reservation_end_date AS reservation_end_date, cr.reservation_status 
AS reservation_status, cr.reservation_businessneed AS 
reservation_businessneed, cr.reservation_notetohwrep AS 
reservation_notetohwrep, cr.reservation_email_id AS 
reservation_email_id, cr.reservation_project_id AS reservation_project_id


FROM hosts h

LEFT OUTER JOIN
(
SELECT h.id AS host_id, h.ip AS host_ip, r.id AS reservation_id, 
r.start_date AS reservation_start_date, r.end_date AS 
reservation_end_date, r.status AS 

[sqlalchemy] Re: Performance

2009-05-26 Thread Mike Conley
On Tue, May 26, 2009 at 5:11 AM, Marcin Krol mrk...@gmail.com wrote:



 What I don't get is why, after this query takes place, SQLA runs a lot
 of those small queries - I included all (most?) of the necessary columns in
 the big initial query, so the Host data should be filled in by SQLA eager
 loading (which is the default except for self-referential objects according
 to the docs, and I have no self-referential objects here)?

 Is this a result of subqueries in the big query? Something else?



Are you assuming eager loading on the relation and actually getting lazy
loading?  Per the docs, lazy loading is the default unless you specify
lazy=False:

http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html#sqlalchemy.orm.relation
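
i.e., a sketch of turning that on for one of the relations from the earlier
mail (hosts_table stands in for whatever Table or declarative setup is
actually in use):

    from sqlalchemy.orm import mapper, relation

    mapper(Host, hosts_table, properties={
        # eager-load via LEFT OUTER JOIN instead of one lazy SELECT per Host
        'os_version': relation(OS_version, lazy=False),
    })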




[sqlalchemy] Re: Performance

2009-05-26 Thread Marcin Krol

Hello Mike,

Mike Conley wrote:
 Are you assuming eager loading on the relation and actually getting lazy 
 loading?
 Per the docs, lazy loading is the default unless you specify   lazy=False
 
 http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html#sqlalchemy.orm.relation

I have changed all relevant relations to lazy=False and nothing has 
changed. Still lots of small queries after the main query and the same 
execution time.


Regards,
mk




[sqlalchemy] Re: Performance

2009-05-26 Thread Michael Bayer


When you load all those objects, saying query(A, B, C,
D).from_statement(huge statement), what you're not doing is loading
collections on individual objects, and depending on what your mappings
look like you may not be loading enough information for many-to-one
associations to occur either.   Even though the ultimate objects may be
present in the session, the fact that they're attached to collections or
attributes is not assembled by a straight load - collection and attribute
loads, either lazy or eager, are the only way those get filled in.

A many-to-one association will load without using any SQL if the join
condition from parent to child represents a simple find by primary key on
the root table of the child object - otherwise it needs to reissue a
SELECT.  A collection when not yet loaded will always need to issue a
SELECT.

So if you'd like your huge query to populate collections or non-simple
many-to-one relations during the big load, you should look into using
contains_eager() so that the rows returned by your from_statement are
applied to those elements.  For many-to-one relations, it's possible that
some tuning to the relation() can allow them to load without any SQL;
you'd have to illustrate some here.

Also there is limitless value in boiling your huge amount of code down to
simpler test cases.  Try a SELECT that loads two tables in a simple
fashion, and load two objects at the same time.  Look at the SQL logs and
experiment with contains_eager() to get a feel for what's going on.
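
As a starting point for that experiment, a minimal sketch (assuming a
Host.os_version relation(); the names come from the earlier mail, the rest
is illustrative):

    from sqlalchemy.orm import contains_eager

    hosts = session.query(Host).\
                outerjoin('os_version').\
                options(contains_eager('os_version')).\
                all()
    # the LEFT OUTER JOIN columns emitted by this query populate
    # Host.os_version directly, so rendering the template no longer
    # emits a SELECT per row for that relation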



[sqlalchemy] Re: Possible bug in orm/state.py

2009-05-26 Thread Bob Farrell



On May 26, 3:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Bob Farrell wrote:

  Hi Michael, I found this (your writing) in a thread from quite a while
  back:

  A common pattern which can cause what you see there is if your
  templates are accessing lazy loaders on objects, but the Session which
  loaded those objects had been clear()'ed or otherwise closed before
  the template was allowed to render.  You should try to keep the same
  Session open throughout the full request lifecycle, and you should
  avoid removing any objects from the Session which you wish to continue
  using.

  Hi - came across this because I've started getting the same problem
  and was somewhat relieved to see that the solution is a well-known one
  (given that rendering the template raises this error). I have code
  like this:

          [do stuff involving the session]
          return self.render_response('settings.mako', t_pars)

  in my controller methods, and it's the return where the error gets
  raised. Can you tell me a good approach for keeping the session
  alive here? I'm assuming the problem is that self.render_response
  returns something lazy, so by the time the template actually
  renders, the objects relating to the session have gone out of scope.

 the whole request is wrapped within a block that handles Session
 lifecycle, so that the Session is still just fine when render_response is
 being called.  In Pylons, an appropriate base.py is provided for you which
 does this.  A description is at
 http://www.sqlalchemy.org/docs/05/session.html#lifespan-of-a-contextual-session .

 The Session has a behavior whereby after a commit(), it expires its
 contents.  This is so that, upon access, it reloads everything and picks
 up whatever any concurrent transactions have done.  If your pattern is
 something like this:

    Session.commit()
    return self.render_response(...)

 your render phase may issue a lot of SQL to reload things (though nothing
 should break).    Two ways to work around this are to set
 expire_on_commit=False in your sessionmaker(), or to wrap your whole
 controller method in a commit, such as:

 @commits_transaction
 def my_method(self):
     do stuff
     return self.render_response(...)

 commits_transaction looks like:

 @decorator
 def commits_transaction(fn, self, *args, **kw):
     try:
         ret = fn(self, *args, **kw)
         Session.commit()
         return ret
     except:
         Session.rollback()
         raise

 the rollback() may not be needed if your overall handler calls rollback()
 in all cases.

Great, thanks very much - it looks like gutting our horrible connection-
handling code and moving to scoped_session is really making things
better, and this should (hopefully) be the last problem that needs
ironing out. Shouldn't be a problem given what you mentioned above. You
have, as ever, been a tremendous help. :-)



[sqlalchemy] Re: Automatically filtering all queries

2009-05-26 Thread Denis S. Otkidach

On May 26, 18:24, Michael Bayer mike...@zzzcomputing.com wrote:
 Denis S. Otkidach wrote:

  such ability in SQLAlchemy. There is a suggestion (
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/bcd10e...
  ) to provide custom query_cls. This probably worked a year ago, but
  doesn't work now.

 that should absolutely work now.  "doesn't work" is never a very useful
 explanation of a problem.

Sure, my report was not verbose enough. Here is part of the traceback
when my QueryPublic is used:

[... here is code equivalent to:
for link in doc.links:
    link.ref_doc.events_sections[0].slug  # <- fails here
...]
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/attributes.py", line 158, in __get__
    return self.impl.get(instance_state(instance), instance_dict(instance))
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/attributes.py", line 374, in get
    value = callable_()
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/strategies.py", line 559, in __call__
    return q.get(ident)
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 442, in get
    return self._get(key, ident)
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 1396, in _get
    q.__no_criterion_condition("get")
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 271, in __no_criterion_condition
    raise sa_exc.InvalidRequestError("Query.%s() being called on a Query with existing criterion. " % meth)
InvalidRequestError: Query.get() being called on a Query with existing
criterion.

And when I redefine get(), it gets a bit farther:

[... here is code equivalent to:
for link in doc.links:
    link.ref_doc.events_sections[0].slug
    link.ref_doc.short_title  # <- fails here
...]
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/attributes.py", line 158, in __get__
    return self.impl.get(instance_state(instance), instance_dict(instance))
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/attributes.py", line 374, in get
    value = callable_()
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/state.py", line 171, in __call__
    attr.impl.key in unmodified
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/mapper.py", line 1834, in _load_scalar_attributes
    result = session.query(mapper).from_statement(statement)._get(None, only_load_props=attribute_names, refresh_state=state)
  File "<string>", line 1, in <lambda>
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 51, in generate
    assertion(self, fn.func_name)
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 281, in __no_clauseelement_condition
    self.__no_criterion_condition(meth)
  File "/home/web/third-party/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/orm/query.py", line 271, in __no_criterion_condition
    raise sa_exc.InvalidRequestError("Query.%s() being called on a Query with existing criterion. " % meth)
InvalidRequestError: Query.from_statement() being called on a Query
with existing criterion.

Condition is:

class Doc([...base declarative model...]):
    [...]
    state = Column(Integer, nullable=False, default=0, index=True)
    [...]
    PUBLISHED = 4
    public_condition = (state == PUBLISHED)

Here doc (retrieved from the session by id) and ref_doc (reached via 2
relations) are instances of the Doc class. The other models here don't
have a public_condition yet.

  Here is the query class I pass as the query_cls argument to sessionmaker:

  def QueryPublic(entities, session=None):
      query = Query(entities, session)
      # XXX I haven't ever seen examples with several entities, so I can't
      # test this case.
      assert len(entities) == 1, entities
      # XXX Sometimes it's not a model class or mapper, so the following
      # fails.
      cls = _class_to_mapper(entities[0]).class_
      public_condition = getattr(cls, 'public_condition', None)
      if public_condition is not None:
          query = query.filter(public_condition)
      return query

  This works for simple queries, but any access to a relation gives an
  error about Query.get() being used on a query with a condition (this
  wasn't the case for 0.4.x).

 get() should never be called when there's any filtering applied.  it's a
 nonsensical operation, so 0.5 likes to tell you about it.

I don't call it explicitly, but it is called internally when I access a
property defined with relation().

  OK, I've redefined the get() method to be less restrictive at the
  cost of performance:

  class HackedQuery(Query):

      def get(self, ident):
          # XXX This works for the case when the primary key consists
          # of the id field only.
          if isinstance(ident, (tuple, list)):
              assert len(ident) == 1
              ident = ident[0]
          return self.filter_by(id=ident).first()

  This helped for some 

[sqlalchemy] Re: Possible bug in orm/state.py

2009-05-26 Thread Michael Bayer

Bob Farrell wrote:

 Hi Michael, I found this (your writing) in a thread from quite a while
 back:

 A common pattern which can cause what you see there is if your
 templates are accessing lazy loaders on objects, but the Session which
 loaded those objects had been clear()'ed or otherwise closed before
 the template was allowed to render.  You should try to keep the same
 Session open throughout the full request lifecycle, and you should
 avoid removing any objects from the Session which you wish to continue
 using.

 Hi - came across this because I've started getting the same problem
 and was somewhat relieved to see that the solution is a well-known one
 (given that rendering the template raises this error). I have code
 like this:

 [do stuff involving the session]
 return self.render_response('settings.mako', t_pars)

  in my controller methods, and it's the return where the error gets
  raised. Can you tell me a good approach for keeping the session
  alive here? I'm assuming the problem is that self.render_response
  returns something lazy, so by the time the template actually
  renders, the objects relating to the session have gone out of scope.

the whole request is wrapped within a block that handles Session
lifecycle, so that the Session is still just fine when render_response is
being called.  In Pylons, an appropriate base.py is provided for you which
does this.  A description is at
http://www.sqlalchemy.org/docs/05/session.html#lifespan-of-a-contextual-session .

The Session has a behavior whereby after a commit(), it expires its
contents.  This is so that, upon access, it reloads everything and picks
up whatever any concurrent transactions have done.  If your pattern is
something like this:

   Session.commit()
   return self.render_response(...)


your render phase may issue a lot of SQL to reload things (though nothing
should break).  Two ways to work around this are to set
expire_on_commit=False in your sessionmaker(), or to wrap your whole
controller method in a commit, such as:

@commits_transaction
def my_method(self):
    do stuff
    return self.render_response(...)

commits_transaction looks like:

@decorator
def commits_transaction(fn, self, *args, **kw):
    try:
        ret = fn(self, *args, **kw)
        Session.commit()
        return ret
    except:
        Session.rollback()
        raise

the rollback() may not be needed if your overall handler calls rollback()
in all cases.
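
The first of those workarounds is just a flag at configuration time; a
sketch against the sessionmaker() discussed earlier in the thread:

    # objects keep their loaded attribute state across commit(), so the
    # render phase does not re-SELECT everything
    Session = scoped_session(sessionmaker(autocommit=True,
                                          expire_on_commit=False))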





[sqlalchemy] Changes in exc.py causing problems.

2009-05-26 Thread Bob Farrell

Hi hi.

£ diff SQLAlchemy-0.5.2-py2.5.egg/sqlalchemy/exc.py SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/exc.py
134a135,139
>         if len(self.params) > 10:
>             return ' '.join((SQLAlchemyError.__str__(self),
>                              repr(self.statement),
>                              repr(self.params[:2]),
>                              '... and a total of %i bound parameters' % len(self.params)))

This change is resulting in this problem:

  File "/home/bob/src/prospectspace/branches/sqlalchemy-eon-merge/prospectspace/commands/pspatch.py", line 1473, in create_user
    print e
  File "/home/bob/src/prospectspace/branches/sqlalchemy-eon-merge/lib/python2.5/site-packages/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/exc.py", line 138, in __str__
    repr(self.params[:2]),
TypeError: unhashable type

Which is a little confusing - I'm not sure why any hashing attempt is
happening there. I'll investigate it further and send a patch
tomorrow, unless something glaringly obvious stands out to you as to
what's causing this.

Cheers,



[sqlalchemy] Re: Automatically filtering all queries

2009-05-26 Thread Michael Bayer
Denis S. Otkidach wrote:

 Do you mean I have to create separate model definitions for the public and
 private parts of the site? That doesn't seem sane to me. I've used such
 filtering across all tables for over 10 years with other DB tools, and
 it has saved me a lot of work. Why not use it with SA?

I've never heard of such a feature in a tool like Hibernate, for example.

However, it's quite easy to achieve.  Just use this.

class LimitingQuery(Query):
    def get(self, ident):
        return Query.get(self.populate_existing(), ident)

    def __iter__(self):
        return Query.__iter__(self.private())

    @_generative()
    def private(self):
        crit = (self._entities[0].mapper.class_.public == True)
        if self._criterion:
            self._criterion = self._criterion & crit
        else:
            self._criterion = crit

full test case attached.

0.5.5 will have a new feature query.disable_assertions(*names) such that
you can stick to non-underscore names without any errors raised by
limit/one/etc.:

class LimitingQuery(Query):
    def get(self, ident):
        return Query.get(self.populate_existing(), ident)

    def __iter__(self):
        return Query.__iter__(
            self.disable_assertions('limit_offset').filter_by(public=True))
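
Hooking the custom class in is then just the query_cls argument to
sessionmaker(); a sketch with an illustrative declarative model carrying
the public flag (none of these names come from the attached test case):

    from sqlalchemy import create_engine, Column, Integer, Boolean
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Doc(Base):
        __tablename__ = 'docs'
        id = Column(Integer, primary_key=True)
        public = Column(Boolean, nullable=False, default=False)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine, query_cls=LimitingQuery)
    session = Session()
    # session.query(Doc).all() now only returns rows where public is True,
    # because __iter__ routes every load through private()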








test.py
Description: Binary data


[sqlalchemy] Re: Changes in exc.py causing problems.

2009-05-26 Thread Michael Bayer

This is all fixed in the current trunk.  A release is probably coming
today, as the issue you have below is more severe than the one I had noticed.


Bob Farrell wrote:

 Hi hi.

 £ diff SQLAlchemy-0.5.2-py2.5.egg/sqlalchemy/exc.py SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/exc.py
 134a135,139
 >         if len(self.params) > 10:
 >             return ' '.join((SQLAlchemyError.__str__(self),
 >                              repr(self.statement),
 >                              repr(self.params[:2]),
 >                              '... and a total of %i bound parameters' % len(self.params)))

 This change is resulting in this problem:

   File "/home/bob/src/prospectspace/branches/sqlalchemy-eon-merge/prospectspace/commands/pspatch.py", line 1473, in create_user
     print e
   File "/home/bob/src/prospectspace/branches/sqlalchemy-eon-merge/lib/python2.5/site-packages/SQLAlchemy-0.5.4p1-py2.5.egg/sqlalchemy/exc.py", line 138, in __str__
     repr(self.params[:2]),
 TypeError: unhashable type

 Which is a little confusing - I'm not sure why any hashing attempt is
 happening there. I'll investigate it further and send a patch
 tomorrow, unless something glaringly obvious stands out to you as to
 what's causing this.

 Cheers,
 






[sqlalchemy] Questions on SQLA Queries

2009-05-26 Thread Harish Vishwanath
Hello,

Question 1:

When there is a query like the one below:

q = session.query(User.name)  # (User is a class)

and I do q.all(), a list of tuples (User.name,) is returned even though a
single column is asked for. Is there a way to get a plain list directly from
q.all() when a single column is requested?
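
A sketch of the usual workaround for Question 1 - unpack the one-element
tuples yourself:

    names = [name for (name,) in session.query(User.name).all()]
    # or, equivalently:
    names = [row[0] for row in session.query(User.name)]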

Question 2:

I need to delete from a bulky table, and I want to print diagnostics after
every n deletes. Is there a way to use the Query object so that a SQL
statement like the one below can be generated?

 delete from movie where year in (select top 30 year from movie where year
 > 50);

so that a message can be logged after every 30 deletes.

I am using Sqlite DB.
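
For Question 2, SQLite has no TOP, but a batched loop gives much the same
effect and a natural place to log; a sketch only (Movie and the year filter
are illustrative, not the real schema):

    batch_size = 30
    deleted = 0
    while True:
        batch = session.query(Movie).filter(Movie.year > 50).\
                    limit(batch_size).all()
        if not batch:
            break
        for movie in batch:
            session.delete(movie)
        session.commit()
        deleted += len(batch)
        print 'deleted %d rows so far' % deleted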


Regards,
Harish




[sqlalchemy] SQLAlchemy 0.5.4p2 released

2009-05-26 Thread Michael Bayer

This release fixes some critical issues involving the formatting of DBAPI
exceptions which were introduced in 0.5.4 and is recommended for all
users.

Download 0.5.4p2 at:

http://www.sqlalchemy.org/download.html


0.5.4p2
===

- sql
- Repaired the printing of SQL exceptions which are not
  based on parameters or are not executemany() style.

- postgres
- Deprecated the hardcoded TIMESTAMP function, which when
  used as func.TIMESTAMP(value) would render "TIMESTAMP value".
  This breaks on some platforms as Postgres doesn't allow
  bind parameters to be used in this context.  The hard-coded
  uppercase is also inappropriate and there's lots of other
  PG casts that we'd need to support.  So instead, use
  text constructs i.e. select(["timestamp '12/05/09'"]).
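
i.e., a sketch of the replacement usage (engine is whichever Engine is in
use):

    from sqlalchemy import select

    # the literal cast is passed straight through as text
    ts = engine.execute(select(["timestamp '12/05/09'"])).scalar()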

