[sqlalchemy] Re: Custom comparator with in_ (and subqueries)

2008-05-28 Thread Martin Pengelly-Phillips


Hello again,

I have implemented the in_() method which works great for all the
'normal' fields, but throws recursion errors when trying to deal with
the relation to another entity.

Cutting it down to one line:

# print
session.query(PolymorphicProperty).filter(PolymorphicProperty.entity.in_([entity1])).all()
  ...
  File "sqlalchemy/orm/properties.py", line 311, in operate
    return op(self, *other, **kwargs)
  File "sqlalchemy/sql/operators.py", line 47, in in_op
    return a.in_(b)
  File "sqlalchemy/sql/expression.py", line 1242, in in_
    return self.operate(operators.in_op, other)
  File "sqlalchemy/orm/properties.py", line 311, in operate
    return op(self, *other, **kwargs)
  RuntimeError: maximum recursion depth exceeded

Obviously I could use
# print
session.query(PolymorphicProperty).filter(PolymorphicProperty.entity_id.in_([entity1.id])).all()

but I thought it worth checking if the first case should work.
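The workaround can be sketched generically with a later-style SQLAlchemy API; the Entity stub, entity_id column, and entity_in() helper here are hypothetical illustrations, not the thread's actual models:

```python
from sqlalchemy import column

class Entity:
    """Hypothetical stand-in for a mapped entity with an id."""
    def __init__(self, id):
        self.id = id

# hypothetical foreign-key column on PolymorphicProperty
entity_id = column("entity_id")

def entity_in(entities):
    # compare on the foreign-key column instead of the relation,
    # sidestepping the recursive in_() call on the relation attribute
    return entity_id.in_([e.id for e in entities])

expr = entity_in([Entity(1), Entity(2)])
print(str(expr))
```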


ta,


Martin



On May 27, 9:09 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On May 27, 2008, at 3:11 PM, Martin Pengelly-Phillips wrote:



  Hi Michael,

  Thank you for the quick response. I had thought about using a
  straightforward OR statement - are you suggesting that this would form
  the body of the in_() method on the Comparator or were you referring
  more to just compiling the OR statements in the base query?

 i was saying the result of in_() would be a construct like:

 or_(*[col1.in_([x,y,z]), col2.in_([d, e, f]), ...])

  Also, what is the correct expression for defining a subquery with the
  ORM interface (I gather my embedded session.query statement currently
  evaluates as a separate statement to return the list of candidates).

 any select() constitutes a subquery when placed within an enclosing
 select().   Often it's a good idea to further enclose it in an Alias
 construct by saying select().alias().  Read through the SQL expression
 tutorial for examples.

 the subquery() method on Query wouldn't be used inside of a Comparator
 since there's no Query object available in those methods and it's an
 overly heavy-handed approach at that level.
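Michael's subquery point can be sketched with plain table/column constructs (a later-style select() call; the table and column names are hypothetical):

```python
from sqlalchemy import column, select, table

entity = table("entity", column("id"), column("kind"))
prop = table("prop", column("entity_id"))

# any select() placed within an enclosing statement acts as a subquery;
# .alias() (or .subquery() in recent versions) encloses it further
sub = select(entity.c.id).where(entity.c.kind == "x")
expr = prop.c.entity_id.in_(sub)
print(str(expr))
```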
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Smart Caching Mapper Question

2008-05-28 Thread Koen Bok

Hey All,

I have a conceptual question.

You have two ways to get relations: lazy and nonlazy. Nonlazy works
great for saving queries but can get pretty slow with complicated
joins. So I was wondering if there was a third way: pre-fetching all
the data for relations and letting the mapper get the relation data from a
cache instead of doing another query.

It's kinda hard to explain, so I wrote an example script at:
http://paste.pocoo.org/show/55145/

I guess this should be possible by writing some MapperExtension? Did
anyone do anything like this, or maybe have some pointers?

Thanks!

Koen



[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread az

some time ago i posted a list of my ideas along this..
http://groups.google.com/group/sqlalchemy/browse_thread/thread/d886963fe58d7abb
 beware: it's all pure theory.
  -1 (horizontal) (eager) loading ONLY of the needed row attributes,
 also hierarchically (a.b.c.d)
  -2 (vertical) simultaneous loading of columns - e.g. the lazy
 attributes - wholly, or in portions/slices (depending on UI
 visibility or other slice-size)
  -3 skipping creation of objects - only using the data, if time of
 creation gets critical. For example, in a simple report of the
 name.alias and age of a person, the creation of 100,000 Persons can
 be omitted. To be able to do drill-down, the person.db_id would be
 needed+stored too.
  -4 caching of some aggregations/calculations in special
 columns/tables, so they're not re-invented every time
  -5 translate the whole report - calculations, aggregations,
 grouping etc. into sql and use the result as is (with same thing
 about db_id's)

except the #4/aggregation, which is pretty automated now, i don't have 
an implementation of the rest yet. 
i think you're talking about #2 ?

ciao
svilen


 Hey All,

 I have a conceptual question.

 You have two ways to get relations; lazy and nonlazy. Nonlazy works
 great for saving queries but can get pretty slow with complicated
 joins. So I was wondering if there was a third way; pre fetching
 all the data for relations and let the mapper get the relation data
 from a cache instead of doing another query.

 It's kinda hard to explain, so I wrote an example script at:
 http://paste.pocoo.org/show/55145/

 I guess this should be possible by writing some MapperExtension?
 Did anyone do anything like this, or maybe has some pointers?

 Thanks!

 Koen
 





[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread az

thinking of it... the attributes have to be deferred/None, and set up 
externally by the wrapping smartie, e.g. a UI pager or whatever.
but i have no idea how it can be done most nicely... some synonym() 
that returns some.cache[mycol] else falls back to self._mycol?

On Wednesday 28 May 2008 15:49:16 Koen Bok wrote:
 Yep, exactly that. It would speed up my (UI) app immensely. Any
 ideas how to approach something like that?






[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread Koen Bok

Yep, exactly that. It would speed up my (UI) app immensely. Any ideas
how to approach something like that?

On May 28, 5:07 pm, [EMAIL PROTECTED] wrote:
 some time ago i posted a list of my ideas along 
 this..http://groups.google.com/group/sqlalchemy/browse_thread/thread/d88696...



  be ware: its all pure theory.
   -1 (horizontal) (eager) loading ONLY of the needed row attributes,
  also hierarhicaly (a.b.c.d)
   -2 (vertical) simultanously loading of columns - e.g. the lazy
  attribites - wholly, or in portions/slices (depending on UI
  visibility or other slice-size)
   -3 skipping creation of objects - only using the data, if time of
  creation gets critical. For example a simple report for a
  name.alias and age of person, the creation of 100,000 Persons can
  be ommitted. To be able to do drill-down, the person.db_id would be
  needed+stored too.
   -4 cacheing of some aggregations/calculations in special
  columns/tables, so they're not re-invented everytime
   -5 translate the whole report - calculations, aggregations,
  grouping etc. into sql and use the result as is (with same thing
  about db_id's)

 except the #4/aggregation which is pretty automated now, i dont have
 yet implementation of the rest.
 i think u're talking about #2 ?

 ciao
 svilen




[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread az

On Wednesday 28 May 2008 15:49:16 Koen Bok wrote:
 Yep, exactly that. It would speed up my (UI) app immensely. Any
 ideas how to approach something like that?
 thinking of it... the attributes has to be deferred/None, and
 set-up externaly by the wrapping smartie, e.g. UI pager or
 whatever. but i have no idea how it can be done most nicely... some
 synonim() that returns some.cache[ mycol] else fallsback to
 self._mycol?

or maybe, for a more massive approach, using some of the per-instance 
MapperExtension hooks:
  def populate_instance(self, mapper, selectcontext, row, instance):
  def append_result(self, mapper, selectcontext, row, instance, result):
  def create_instance(self, mapper, selectcontext, row, class_):
  def translate_row(self, mapper, context, row):
i guess which one depends on what level the cache works at. 

if you make something, i'd be interested to see it... a VerticalCache of 
sorts

ciao
svilen
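The hook-level idea can be illustrated without SQLAlchemy at all; this is just the per-identity cache pattern a create_instance-style hook might implement (all names here are hypothetical):

```python
_identity_cache = {}

def create_instance(cls, row):
    # the pattern a create_instance-style hook would implement: return
    # a cached object for this (class, primary key) pair if one exists,
    # otherwise build it from the row data and remember it
    key = (cls, row["id"])
    if key not in _identity_cache:
        obj = cls.__new__(cls)
        obj.__dict__.update(row)
        _identity_cache[key] = obj
    return _identity_cache[key]

class Person:
    pass

a = create_instance(Person, {"id": 1, "name": "x"})
b = create_instance(Person, {"id": 1, "name": "x"})
print(a is b)  # True -- the second call is served from the cache
```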






[sqlalchemy] Re: Session close method

2008-05-28 Thread Michael Bayer


On May 27, 2008, at 7:15 PM, TP wrote:


 We're using SA 0.4.4. We're creating non-threadlocal sessions that are
 bound to an engine at session factory creation time. Any ideas for why
 even after closing one session and creating a new session with the
 session factory that we're seeing old data? If we do an explicit
 commit or rollback, then we start seeing the new data in subsequently
 created sessions. I even tried doing refresh() and expire() calls on
 the object before closing and re-creating the session, but we still
 saw the old data.

it's not really clear how you could be seeing that.  If you are using 
scoped_session(), the remove() call will actually dispose of the 
session entirely so you might want to try that; but a close() does 
remove all connection resources (with the "you're not bound directly 
to a connection" caveat I mentioned earlier).  Send along some code 
which illustrates exactly how you are configuring your session and 
what steps you are using to see these results.
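Michael's remove() suggestion can be sketched with a current scoped_session; close() keeps the registered session object, while remove() discards it so the next call builds a fresh one:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))

s1 = Session()    # the registry creates and holds one session
Session.remove()  # close that session and discard it entirely
s2 = Session()    # a brand-new session with no stale state
print(s1 is s2)   # False
```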






[sqlalchemy] Re: Custom comparator with in_ (and subqueries)

2008-05-28 Thread Michael Bayer

in_() is not normally implemented for relation()s.  I think the  
recursion overflow on in_() was a bug at some point which has since  
been fixed (and it raises NotImplementedError instead), but I'm not  
able to isolate at the moment if it was fixed for the 0.4 series or not.







[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread Michael Bayer

you'd like to do query caching.   If it were me, I'd implement this at  
the SQL level by intercepting statements for particular tables, and  
matching the criteria to a cached value.   Inspection of the criteria  
would determine if it was a candidate for caching or not (or you could  
pre-populate the cache through some other means).  But in that case  
you'd have a lot of work to do, building a ResultProxy-like object and  
such, and the mappers would still need to generate object instances  
from results.   The easiest hook we have for this is in 0.5 and is  
called ConnectionProxy.

The MapperExtension has some sort-of paths for implementing caching  
but they are not widely used and don't really cover the  
functionality.  You can instead produce your own subclass of Query  
which overrides everything needed.  The Session.query() method  
ultimately uses an attribute "_query_cls" to generate Query, so you  
could feed it in through a quick subclass of Session (the shard.py  
example uses this approach).







[sqlalchemy] Re: Custom comparator with in_ (and subqueries)

2008-05-28 Thread Martin Pengelly-Phillips


Ah, apologies Michael - I should have mentioned that I am using the
svn 0.5 checkout.
In 0.4 it does raise NotImplementedError.
In 0.5 it causes a recursion error.

I understand that the working copy will be more susceptible to bugs
etc so please take this as a note rather than a major issue.
In the meantime I am just checking against the id field instead.


On May 28, 3:33 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 in_() is not normally implemented for relation()s.  I think the
 recursion overflow on in_() was a bug at some point which has since
 been fixed (and it raises NotImplementedError instead), but I'm not
 able to isolate at the moment if it was fixed for the 0.4 series or not.



[sqlalchemy] how many query objects in sa?

2008-05-28 Thread Lukasz Szybalski

Hello,
I have used the following to query my data:
#start
class th(object):
pass
mapper(th,th_table)

a=session.query(th).filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()
#end

I noticed that there is a query
th.query().filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()

but it gives me:
AttributeError: 'generator' object has no attribute 'all'

What is the difference between these queries? Both of them are ORM
based since they use a mapped Python class?!?
Which one should be used? And why wouldn't I use one of them?

Lucas


-- 
Automotive Recall Database. Cars, Trucks, etc.
http://www.lucasmanual.com/recall/
TurboGears Manual-Howto
http://lucasmanual.com/pdf/TurboGears-Manual-Howto.pdf




[sqlalchemy] Re: how many query objects in sa?

2008-05-28 Thread Michael Bayer


On May 28, 2008, at 12:36 PM, Lukasz Szybalski wrote:


 Hello,
 I have used the following to query my data:
 #start
 class th(object):
pass
 mapper(th,th_table)

 a = session.query(th).filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()
 #end

 I noticed that there is a query
 th.query().filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()

 but it gives me a :
 AttributeError: 'generator' object has no attribute 'all'

something else is going on there.  filter() on Query returns a new  
Query in all cases.   What is th.query() ?





[sqlalchemy] Re: app not closing when py2exe'd it

2008-05-28 Thread Werner F. Bruhin

Werner F. Bruhin wrote:
 Werner F. Bruhin wrote:
   
 I am having a problem with my app not closing correctly when I py2exe'd it.

 Trying to track it down but with not much luck so far, and as sqlalchemy 
 is one of the big changes I did in this version of the app (i.e. moved 
 from another ORM to sqlalchemy) I wonder if there are some things I have 
 to watch out for.

 As I suspected SA I make sure that on close all my connections are 
 closed and for good measure I also del my session and the engine.

 Is there anything else I should watch out for? 

 Is there some way I can check that I really closed all my connections?

 Is SA using threads?  If yes, do I need to do something special to 
 ensure that they are all closed/finished?
   
 
 I resolved the issue by adding a del self.ds (which is my session)

 So, now I do something along these lines:

 self.engine = db.sa.create_engine(dburl, encoding='utf8', echo=False)
 self.Session = db.sao.sessionmaker(autoflush=True, transactional=True)
 self.Session.configure(bind=self.engine)
 self.ds = self.Session()
 ...
 self.ds.Close()
 del self.ds # if I don't do this app does not close
 del self.Session
 del self.engine

 I am using SA 0.4.3 and the engine is Firebird.

 Is this an indication that I am not closing something correctly, if yes 
 what could it be.

 BTW, I also tried self.Session.close_all()

 While the del solves my issue I am a bit nervous that I am using SA 
 somehow incorrectly.
   
Had a discussion on this problem on the wxPython list with Andrea; he 
uses SA 0.4.6 with SQLite and does not see any issue when his py2exe'd 
application closes.

This confirms to me that either I am doing something wrong or the 
Firebird SQL backend in SA is for some reason not closing correctly.

Can anyone give me some hints on what I should look for/at to figure out 
what is causing the hang on a normal application shutdown.

Werner




[sqlalchemy] Re: app not closing when py2exe'd it

2008-05-28 Thread az

On Wednesday 28 May 2008 20:13:20 Werner F. Bruhin wrote:
 Werner F. Bruhin wrote:
  Werner F. Bruhin wrote:
  I am having a problem with my app not closing correctly when I
  py2exe'd it.
 
  Trying to track it down but with not much look so far and as
  sqlalchemy is one of the big changes I did in this version of
  the app (i.e. moved from another ORM to sqlalchemy) I wonder if
  there are some things I have to watch out for.
 
  As I suspected SA I make sure that on close all my connections
  are closed and for good measure I also del my session and the
  engine.
 
  Is there anything else I should watch out for?
 
  Is there some way I can check that I really closed all my
  connections?
 
  Is SA using threads?  If yes, do I need to do something special
  to ensure that they are all closed/finished?
 
  I resolved the issue by adding a del self.ds (which is my
  session)
 
  So, now I do something along these lines:
 
  self.engine = db.sa.create_engine(dburl, encoding='utf8',
  echo=False) self.Session = db.sao.sessionmaker(autoflush=True,
  transactional=True) self.Session.configure(bind=self.engine)
  self.ds = self.Session()
  ...
  self.ds.Close()
  del self.ds # if I don't do this app does not close
  del self.Session
  del self.engine
 
  I am using SA 0.4.3 and the engine is Firebird.
 
  Is this an indication that I am not closing something correctly,
  if yes what could it be.
 
  BTW, I also tried self.Session.close_all()
 
  While the del solves my issue I am a bit nervous that I am
  using SA somehow incorrectly.

 Had a discussion on this problem on the wxPython list with Andrea,
 he uses SA 0.4.6 with SQLite and does not see any issue when his
 py2exe'd application when it closes.

 This confirms to me that either I am doing something wrong or the
 Firebird SQL backend in SA is for some reason not closing
 correctly.

 Can anyone give me some hints on what I should look for/at to
 figure out what is causing the hang on a normal application
 shutdown.

u probably have some ref-cycle between that self and the .ds (or 
some object in the ds, etc).

my destroy() is used for clean-testcase policy and it does:
 close and del any sessions
 del all my references to mappers,tables,selectables, 
 kill _all_ caches
 orm.clear_mappers
 metadata.drop_all
 del metadata
 engine.dispose()
 del engine
 detach_instances - this is my func that walks all live instances and 
deletes x._instance_key and x._state

i have no threads, session-making is plain create_session()

do see which may do something for you...

for example, i did have a ref-cycle in some klas-cache; but it did 
break when i killed the __init__ method and the __init__ copy... took me 
3 days to find out why/where.
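A minimal version of that teardown sequence, assuming a plain non-threadlocal setup:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)
session = Session()
# ... application work ...
session.close()   # give the session's connection back to the pool
engine.dispose()  # close all pooled DBAPI connections so nothing
                  # lingers to keep the process alive at shutdown
```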




[sqlalchemy] Re: how many query objects in sa?

2008-05-28 Thread Lukasz Szybalski

On Wed, May 28, 2008 at 11:59 AM, Michael Bayer
[EMAIL PROTECTED] wrote:



 something else is going on there.  filter() on Query returns a new
 Query in all cases.   What is th.query() ?

Since the session is bound to the mapped classes (in tg and in sa), I
shouldn't have to use session.query(mymappedclass).somefilter

What I want to do is convert the:
session.query(th).filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()
to just a query using the mapped class.

I assumed that session.query(th).somefilter is the same as
th.query().somefilter, but obviously these two are
different.

So my question is: are session.query(th).somefilter and
th.query().somefilter different, or am I missing something here?


Lucas




[sqlalchemy] Re: how many query objects in sa?

2008-05-28 Thread Lukasz Szybalski

On Wed, May 28, 2008 at 1:07 PM, Lukasz Szybalski [EMAIL PROTECTED] wrote:

 Since the session is bound to the mapped classes(in tg and in sa). I
 shouldn't have to use  session.query(mymappedclass).somfilter

 What I want to do is convert the:
 session.query(th).filter(sqlalchemy.and_(th.APPLIED_TEST==1,th.CODING_DATE=='20080325')).all()
 to just a query using mapped class.

Sorry, I copied the wrong query.

Just for clarification:
session.query(th).somefilter is the same as th.query().somefilter

correct?

Lucas




[sqlalchemy] Re: how many query objects in sa?

2008-05-28 Thread Michael Bayer


On May 28, 2008, at 2:44 PM, Lukasz Szybalski wrote:
 sorry. I copied the wrong query

 Just for clarification:
 session.query(th).somefilter is same as th.query().somefilter

 correct?

there is a query attribute added to mapped classes if you use the  
mapper function provided by ScopedSession.  Alternatively,  
ScopedSession also has a method called query_property which can be  
used to add a similar attribute without the extra functionality  
implied by ScopedSession.mapper.  Otherwise, there's no such  
attribute "query" added to mapped classes, and you haven't specified  
whether you're using one of these extensions.  The query attribute  
provided by these libraries just returns a Query, so there's no  
difference in behavior.  It is usual to access the attribute as a  
descriptor, i.e. cls.query.filter(...)
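As a sketch of the query_property pattern described here (modern declarative
spelling, in-memory SQLite; the Th class and column names are modeled on the
earlier example, and the exact import location varies by SQLAlchemy version):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()

class Th(Base):
    __tablename__ = "th"
    id = Column(Integer, primary_key=True)
    APPLIED_TEST = Column(Integer)
    CODING_DATE = Column(String)
    # query_property adds a class-level "query" attribute that simply
    # returns a Query bound to the scoped session.
    query = Session.query_property()

Base.metadata.create_all(engine)
Session.add_all([Th(APPLIED_TEST=1, CODING_DATE="20080325"),
                 Th(APPLIED_TEST=0, CODING_DATE="20080325")])
Session.commit()

# Both spellings produce the same Query, hence the same results.
via_session = Session.query(Th).filter(Th.APPLIED_TEST == 1).all()
via_class = Th.query.filter(Th.APPLIED_TEST == 1).all()
assert via_session == via_class
```

Both spellings go through the same Query class, which is why filter()
chains identically on either.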






[sqlalchemy] Re: how many query objects in sa?

2008-05-28 Thread Lukasz Szybalski

On Wed, May 28, 2008 at 2:07 PM, Michael Bayer [EMAIL PROTECTED] wrote:


 On May 28, 2008, at 2:44 PM, Lukasz Szybalski wrote:
 sorry. I copied the wrong query

 Just for clarification:
 session.query(th).somefilter is same as th.query().somefilter

 correct?

 there is a query attribute added to mapped classes if you use the
 mapper function provided by ScopedSession.  Alternatively,
 ScopedSession also has a method called query_property which can be
 used to add a similar attribute without the extra functionality
 implied by ScopedSession.mapper.  Otherwise, there's no such
 attribute "query" added to mapped classes, and you haven't specified
 whether you're using one of these extensions.  The query attribute
 provided by these libraries just returns a Query, so there's no
 difference in behavior.  It is usual to access the attribute as a
 descriptor, i.e. cls.query.filter(...)

This implementation is in TurboGears, so TurboGears handles the session
management. I assume they must be using ScopedSession.

Thanks a lot.
Lucas




[sqlalchemy] Optimisation Aid Layer

2008-05-28 Thread Geoff

Hi!

I've been optimising my database interaction recently and thought that
SQLA might be able to help out with the process. Something like doing
an EXPLAIN on each query and reporting the result to the developer. It
could even inspect the indices used and flag up any queries that need
some TLC.

No paradigm shift, but would be a handy little time saver :)
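As a rough sketch of the idea outside SQLA (stdlib sqlite3 and SQLite's
EXPLAIN QUERY PLAN; the helper name and the scan check are invented for
illustration):

```python
import sqlite3

def explain_and_execute(conn, sql, params=()):
    """Run EXPLAIN QUERY PLAN alongside the query and flag full table scans."""
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()
    # The human-readable plan detail is the last column of each row;
    # SQLite reports unindexed access as "SCAN ...".
    warnings = [row for row in plan if "SCAN" in row[-1].upper()]
    return conn.execute(sql, params).fetchall(), warnings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE email (id INTEGER PRIMARY KEY, subject TEXT)")
conn.execute("INSERT INTO email (subject) VALUES ('hello')")

rows, warnings = explain_and_execute(
    conn, "SELECT * FROM email WHERE subject = ?", ("hello",))
# 'subject' has no index, so the planner reports a full table scan.
```

A real layer would hook this into query execution and only report plans
whose cost or scan behavior looks suspicious.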



[sqlalchemy] Composite Columns in Subclass with Joined Table Inheritance

2008-05-28 Thread andrew cooke


Hi,

(Apologies for asking so many questions here - I really appreciate the
help).

I have been using joined table inheritance without any problems for a
while with a very simple hierarchy: a base class/table called
Measurement and various subclasses that contain a typed value column
(FloatMeasurement, IntegerMeasurement, etc).  In particular, I have
been doing a query on the base type (which has additional fields like
date) and all is sweet - I receive the correct subclass as expected.

Recently I added a new subclass that had a composite column - the
value is a ratio of two integers and the custom datatype is a
subclass of tuple.  This works fine when used directly - I can load
and save instances of RatioMeasurement with no problems.

However, if I query the base class and the query should return a
RatioMeasurement I now get an error.  What appears to be happening is
that the default ColumnLoader is failing at line 65 of strategies.py
(version 0.4.6) because the row for the initial query (made on the
base class) doesn't contain the columns required for the composite
column (since they are in the subclass table).

Adding .with_polymorphic('*') to the query makes no difference.  Nor
does adding polymorphic_fetch='deferred' to the base class mapping.

Does the above make sense?  What am I doing wrong this time?!

Thanks,
Andrew

PS The trace is:
[...]
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/query.py,
line 907, in first
ret = list(self[0:1])
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/query.py,
line 986, in iterate_instances
rows = [process[0](context, row) for row in fetch]
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/query.py,
line 1551, in main
extension=context.extension,
only_load_props=context.only_load_props,
refresh_instance=context.refresh_instance
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/mapper.py,
line 1368, in _instance
return mapper._instance(context, row, result=result,
polymorphic_from=self)
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/mapper.py,
line 1444, in _instance
self.populate_instance(context, instance, row,
only_load_props=only_load_props, instancekey=identitykey, isnew=isnew)
  File /usr/lib64/python2.5/site-packages/sqlalchemy/orm/mapper.py,
line 1484, in populate_instance
(newpop, existingpop, post_proc) =
selectcontext.exec_with_path(self, prop.key,
prop.create_row_processor, selectcontext, self, row)
TypeError: 'NoneType' object is not iterable

As far as I can tell, the None is the result of the break at line 65
of strategies.py which is returned up the call stack until the
unpacking of the result.  I'm not completely sure about this, but the
break is the last thing I see executing with a debugger.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Composite Columns in Subclass with Joined Table Inheritance

2008-05-28 Thread andrew cooke


I'm sorry - ignore that, I am talking crap.

Adding with_polymorphic('*') *does* fix this - I was misled because I
had the same problem in several places, so when I fixed one section of
the code another failed with a very similar stack trace.

Cheers,
Andrew
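For reference, a sketch of the working pattern in later declarative spelling
(newer SQLAlchemy versions expose this as the with_polymorphic() function
rather than a Query method; the composite column is omitted for brevity and
the class names follow the thread):

```python
from sqlalchemy import Column, Float, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker, with_polymorphic

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Measurement(Base):
    __tablename__ = "measurement"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "base"}

class FloatMeasurement(Measurement):
    __tablename__ = "float_measurement"
    id = Column(Integer, ForeignKey("measurement.id"), primary_key=True)
    value = Column(Float)
    __mapper_args__ = {"polymorphic_identity": "float"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(FloatMeasurement(value=1.5))
session.commit()
session.expunge_all()

# Query the base class, but join all subclass tables up front so the
# subclass columns are present in the initial row.
poly = with_polymorphic(Measurement, "*")
m = session.query(poly).one()
assert isinstance(m, FloatMeasurement) and m.value == 1.5
```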



[sqlalchemy] Column name override with a relation requiring a primaryjoin

2008-05-28 Thread kremlan

My goal is to have a one-to-one relation defined using the same name
as the foreign key column underneath. I have 'contacts' table with
'created_by' and 'updated_by' columns which are FKs to contacts.id.

contacts = Table('contacts', meta,
    Column('id', Integer, primary_key=True),
    Column('first_name', String(25)),
    Column('middle_name', String(25)),
    Column('last_name', String(25)),
    # etc...
    Column('created_at', DateTime),
    Column('updated_at', DateTime),
    Column('created_by', Integer),
    Column('updated_by', Integer),
    ForeignKeyConstraint(['created_by'], ['contacts.id']),
    ForeignKeyConstraint(['updated_by'], ['contacts.id'])
)


mapper(Contact, contacts, properties={
    '_created_by': contacts.c.created_by,
    '_updated_by': contacts.c.updated_by,
    'created_by': relation(Contact,
        primaryjoin=contacts.c.created_by==contacts.c.id, uselist=False),
    'updated_by': relation(Contact,
        primaryjoin=contacts.c.updated_by==contacts.c.id, uselist=False),
})

The primaryjoin is necessary due to the multiple FKs back to contacts.

This follows the pattern suggested here:
http://groups.google.pl/group/sqlalchemy/browse_thread/thread/e20bb32241ced699

No errors occur but the 'created_by' column in the table is not
actually updated upon flush/commit. Instead the _created_by attribute
is updated.


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Column name override with a relation requiring a primaryjoin

2008-05-28 Thread Michael Bayer

This is a many-to-one self-referential relationship.  The  
remote_side attribute is needed on both relation()s to establish  
this, and the uselist=False is not needed (it's hiding the actual  
error here).  See the docs on self-referential mappings.
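A sketch of the corrected mapping in declarative form (names follow the
original post; remote_side marks the "one" side of the self-referential
many-to-one, and foreign_keys is spelled out because of the two FKs back
to contacts.id):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import relationship, sessionmaker

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Contact(Base):
    __tablename__ = "contacts"
    id = Column(Integer, primary_key=True)
    first_name = Column(String(25))
    # Column attributes are renamed so the relations can take the public names.
    _created_by = Column("created_by", Integer, ForeignKey("contacts.id"))
    _updated_by = Column("updated_by", Integer, ForeignKey("contacts.id"))
    # remote_side=[id] identifies the "one" side of this self-referential
    # many-to-one; no uselist=False is needed.
    created_by = relationship("Contact", remote_side=[id],
                              foreign_keys=[_created_by])
    updated_by = relationship("Contact", remote_side=[id],
                              foreign_keys=[_updated_by])

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

admin = Contact(first_name="admin")
c = Contact(first_name="new", created_by=admin, updated_by=admin)
session.add_all([admin, c])
session.commit()
# The underlying FK column is populated on flush.
assert c._created_by == admin.id
```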


On May 28, 2008, at 6:11 PM, kremlan wrote:


 My goal is to have a one-to-one relation defined using the same name
 as the foreign key column underneath. I have 'contacts' table with
 'created_by' and 'updated_by' columns which are FKs to contacts.id.

 contacts = Table('contacts', meta,
     Column('id', Integer, primary_key=True),
     Column('first_name', String(25)),
     Column('middle_name', String(25)),
     Column('last_name', String(25)),
     # etc...
     Column('created_at', DateTime),
     Column('updated_at', DateTime),
     Column('created_by', Integer),
     Column('updated_by', Integer),
     ForeignKeyConstraint(['created_by'], ['contacts.id']),
     ForeignKeyConstraint(['updated_by'], ['contacts.id'])
 )


 mapper(Contact, contacts, properties={
     '_created_by': contacts.c.created_by,
     '_updated_by': contacts.c.updated_by,
     'created_by': relation(Contact,
         primaryjoin=contacts.c.created_by==contacts.c.id, uselist=False),
     'updated_by': relation(Contact,
         primaryjoin=contacts.c.updated_by==contacts.c.id, uselist=False),
 })

 The primaryjoin is necessary due to the multiple FKs back to contacts.

 This follows the pattern suggested here:
 http://groups.google.pl/group/sqlalchemy/browse_thread/thread/e20bb32241ced699

 No errors occur but the 'created_by' column in the table is not
 actually updated upon flush/commit. Instead the _created_by attribute
 is updated.


 





[sqlalchemy] Re: SQL Question - Find missing records with Outer Join

2008-05-28 Thread Michael Bayer


On May 28, 2008, at 8:24 PM, EricHolmberg wrote:


 CREATE TABLE alg (id INT NOT NULL, name VARCHAR(32));
 CREATE TABLE email (id INT NOT NULL, subject VARCHAR(256));
 CREATE TABLE thread (id INT NOT NULL, algid INT, emailid INT);


 This is great, I get my list of emails that don't have any threading
 records... but now I need to know how many emails don't have
 threading records for a particular algorithm. Do I have to do this
 using a temporary table?


to find members that don't have something related, the NOT EXISTS  
approach is clearest:

select * from email where not exists (select 1 from thread where  
emailid=email.id and algid=some algorithm id)


In SQLA, assuming you have an association mapping of Email-Thread-Alg,
the any() function is useful for generating EXISTS clauses easily:

some_alg = sess.query(Alg).get(8)
sess.query(Email).filter(~Email.threads.any(Thread.alg==some_alg))

The clause Thread.alg == some_alg *should* just generate the clause  
thread.algid = 8... it shouldn't pull in the alg table at all, since  
it's not needed for a many-to-one comparison.
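To make the NOT EXISTS pattern concrete against the schema above (plain SQL
via the stdlib sqlite3 driver; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE alg (id INT NOT NULL, name VARCHAR(32));
    CREATE TABLE email (id INT NOT NULL, subject VARCHAR(256));
    CREATE TABLE thread (id INT NOT NULL, algid INT, emailid INT);
    INSERT INTO alg VALUES (1, 'algA');
    INSERT INTO email VALUES (1, 'covered'), (2, 'uncovered');
    INSERT INTO thread VALUES (1, 1, 1);  -- email 1 threaded under alg 1
""")

# Emails with no threading record for algorithm 1:
missing = conn.execute("""
    SELECT * FROM email
    WHERE NOT EXISTS (SELECT 1 FROM thread
                      WHERE thread.emailid = email.id AND thread.algid = ?)
""", (1,)).fetchall()

# The "how many" variant is the same query under COUNT(*):
n = conn.execute("""
    SELECT COUNT(*) FROM email
    WHERE NOT EXISTS (SELECT 1 FROM thread
                      WHERE thread.emailid = email.id AND thread.algid = ?)
""", (1,)).fetchone()[0]
```

No temporary table is needed; the correlated subquery does the per-email
check directly.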




