[sqlalchemy] Re: Filtering a primaryjoin

2008-11-06 Thread Joril

On 5 Nov, 18:17, Joril [EMAIL PROTECTED] wrote:
 I'd like to add a filter to a relation, so that it ignores records of
 the remote table if they're flagged as logically_deleted (a
 boolean field of the child table)

Solved,

children = relation(Child, primaryjoin=and_(id == Child.parent_id,
    Child.logically_deleted == False))
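
For context, a fuller declarative sketch of that mapping might look like the
following (the Parent/Child classes and the column definitions beyond the
snippet above are assumptions, not from the original post):

from sqlalchemy import Column, Integer, Boolean, ForeignKey, and_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    logically_deleted = Column(Boolean, default=False)

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # load only children that are not flagged as logically deleted
    children = relation(Child, primaryjoin=and_(
        id == Child.parent_id,
        Child.logically_deleted == False))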

Thanks anyway!



[sqlalchemy] Re: proposed extension to SessionExtension: after_bulk_operation

2008-11-06 Thread Michael Bayer


On Nov 6, 2008, at 1:12 AM, Martijn Faassen wrote:


 Hey,

 Michael Bayer wrote:
 OK now that I see it, here's some things I think would make it more
 generally useful:

 1. lets split it into after_bulk_update() and after_bulk_delete(), so
 that we can identify what operation just occurred
 2. lets pass the Query object and the QueryContext in so that more
 elaborate extensions can augment the functionality of the methods
 3. im a little torn on if the method should be called before or after
 the invalidation logic kicks in, but im leaning towards after at
 the moment.

 Implemented this way now, and (boldly) committed to the trunk. :)


great !  make yourself at home there :).
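
For anyone following along, a rough sketch of an extension using the new hooks
(assuming each hook receives the session, Query and QueryContext as discussed
above; the exact signatures in trunk may differ):

from sqlalchemy.orm.interfaces import SessionExtension

class BulkOperationLogger(SessionExtension):
    def after_bulk_update(self, session, query, query_context, result):
        print "bulk UPDATE issued:", query.statement

    def after_bulk_delete(self, session, query, query_context, result):
        print "bulk DELETE issued:", query.statement

# Session = sessionmaker(extension=BulkOperationLogger())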






[sqlalchemy] Wrong SQL-generation

2008-11-06 Thread mraer

I have the following queries:

OI   = aliased(OrderedItem, name="OrderedItem_AD")
CH   = aliased(TChart, name="Chart_AD")
ORD  = aliased(Order, name="Order_AD")

alreadyDone = session.query(CH.id).\
    join((OI, CH.orderedItems)).\
    join((ORD, OI.order)).\
    join((User, ORD.user)).\
    join((OrderStatus, ORD.status)).\
    filter(User.name == u"TestUser").\
    filter(OrderStatus.id == 1).\
    subquery()

OI2  = aliased(OrderedItem, name="OrderedItem_CUR")
CH2  = aliased(TChart, name="Chart_CUR")
ORD2 = aliased(Order, name="Order_CUR")

orderedItemsToExclude = session.query(OI2).select_from(join(OI2, CH2)).\
    join((ORD2, OI2.order)).\
    join((alreadyDone, alreadyDone.c.id == CH2.id)).\
    filter(ORD2.id == order.id)

SQLAlchemy generates the following SQL:

alreadyDone:
SELECT Chart_1.id
FROM Chart AS Chart_1 JOIN OrderedItem AS OrderedItem_1 ON
Chart_1.id = OrderedItem_1.chart_id JOIN Order AS Order_1 ON
Order_1.id = OrderedItem_1.Order_id JOIN User ON User.id =
Order_1.user_id JOIN OrderStatus ON OrderStatus.id =
Order.status_id
WHERE User.name = :name_1 AND OrderStatus.id = :id_1

orderedItemsToExclude:
SELECT OrderedItem_1.downloadURL AS OrderedItem_1_downloadURL,
OrderedItem_1.chart_id AS OrderedItem_1_chart_id,
OrderedItem_1.Order_id AS OrderedItem_1_Order_id
FROM OrderedItem AS OrderedItem_1 JOIN Chart AS Chart_1 ON
Chart_1.id = OrderedItem_1.chart_id JOIN Order AS Order_1 ON
Order_1.id = OrderedItem.Order_id JOIN (SELECT Chart_2.id AS
id
FROM Chart AS Chart_2 JOIN OrderedItem AS OrderedItem_2 ON
Chart_2.id = OrderedItem_2.chart_id JOIN Order AS Order_2 ON
Order_2.id = OrderedItem_2.Order_id JOIN User ON User.id =
Order_2.user_id JOIN OrderStatus ON OrderStatus.id =
Order.status_id
WHERE User.name = :name_1 AND OrderStatus.id = :id_1) AS anon_1 ON
anon_1.id = Chart_1.id
WHERE Order_1.id = :id_2

The problem is that it doesn't use the aliased name in the join condition:
Order_1.id = OrderedItem.Order_id
should instead be
Order_1.id = OrderedItem_1.Order_id





[sqlalchemy] Re: persistent result set

2008-11-06 Thread Michael Bayer


On Nov 6, 2008, at 12:28 AM, Adam Ryan wrote:



 Thanks a lot for the reply...

 So to suggest an alternative to this you'd have to describe what
 additional behavior is needed, such as is it an issue of overly
 complex filter criterion being inefficient on the server side, or
 you're trying to reduce the number of persist operations, or some  
 kind
 of long-transaction-spanning concern.

 I'm working on a web application so I suppose the latter is of concern.

 Specifically, once the server has responded, I no longer have a handle
 on the SQLA session or query objects (they've been removed/deleted).
 But, I do have a beaker session, so I guess my question is how to best
 recreate a user's last result set so as to perform the user's current
 manipulation on it.

 I can't store the query object in the beaker session because it can't
 be pickled, right?  So is there a way to dump a string representation
 of a query object, then recreate it and associate it with a new SQLA
 Session?

 Or am I missing the obvious?  Thanks again for the help.

ah, that says a lot.   At first I was thinking that Query is not hard  
to pickle other than the Session attached to it, which can be  
detached.  But it seems like the mappers associated with it would need  
some work too so that they restore themselves from the mapper  
registry.  Expression objects are pickleable - so you should be able  
to pickle just query._criterion and get that back.  But, I was just  
trying this out in 0.5 and there are some roadblocks even to that  
related to some ORM-level decorating that's going on with expressions  
(which aren't in 0.4), so I will get back to you.  the Query should  
have a straightforward path to being serialized.
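
As a rough illustration of that idea (keeping in mind that _criterion is a
private Query attribute and, per the above, 0.5 currently has roadblocks with
this; MyClass and new_session are placeholder names):

import pickle

# pickle just the WHERE-clause expression of an existing query
saved = pickle.dumps(query._criterion)

# later, against a fresh Session, rebuild an equivalent query
criterion = pickle.loads(saved)
new_query = new_session.query(MyClass).filter(criterion)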






[sqlalchemy] Re: Wrong SQL-generation

2008-11-06 Thread mraer

Thank you! When will there be a build with this fix?



[sqlalchemy] Re: mssql reflection fk issue

2008-11-06 Thread Michael Bayer

can you post a trac ticket for this please ?   thanks.


On Nov 6, 2008, at 1:21 AM, Randall Smith wrote:


 When reflecting a MSSQL table with a foreign key, the referenced table
 fails to load with the error:

 sqlalchemy.exc.NoSuchTableError: [referenced_table]

 For this case, I'm using:
 SA 0.5 RC2
 Python 2.5
 UnixODBC 2.2.11
 tdsodbc 0.63-3.2

 The test case uses schema names.  Reflection will work with t1, but  
 not
 with t2 I assume because of the foreign keys.  I tried this also  
 without
 schemas and it wouldn't reflect either table.

 # test code

 import sqlalchemy as sa

 engine = sa.create_engine('mssql://somebody:[EMAIL PROTECTED]/test')

 t1_query = "create table somebody.t1 (id int primary key, name char(10))"
 t2_query = """
 create table somebody.t2 (id int primary key, name char(10),
     t1_id int references somebody.t1(id))
 """

 engine.execute(t1_query)
 engine.execute(t2_query)

 try:
 meta = sa.MetaData(bind=engine)
 ##t1 = sa.Table('t1', meta, schema='somebody', autoload=True)
 ##print t1.columns.keys()
 t2 = sa.Table('t2', meta, schema='somebody', autoload=True)
 print t2
 finally:
 engine.execute('drop table somebody.t2')
 engine.execute('drop table somebody.t1')

 Fails with:

 sqlalchemy.exc.NoSuchTableError: t1



 --Randall



 





[sqlalchemy] Should create_engine() be called per-process or per-request in threaded wsgi environment?

2008-11-06 Thread Randy Syring

I am developing a WSGI based web framework with sqlalchemy.  I am
unclear about when create_engine() should be called.  I initially
thought that engine creation and metadata would be initialized per
process and each thread/request would just get a new session.
However, I have recently run into error messages when using sqlite
with the framework in a threaded WSGI server:

SQLite objects created in a thread can only be used in that same
thread...

That led me to this thread:

http://groups.google.com/group/pylons-discuss/browse_thread/thread/3d3009cd7421c45a/ed0f3dde401ff474?lnk=gst

Can someone weigh in on this issue?  What are the performance
ramifications of needing to create an engine on each request as
opposed to each process?  Do I also need to load my meta data on each
request or could I just re-bind the engine to the metadata on each
request?  Should I not bind the engine to the metadata at all but just
bind it to the session?

Thanks.



[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?

2008-11-06 Thread John Fries
Yes, I would also like to know what is the appropriate way to use SQLAlchemy
with respect to a WSGI server. I've been using Django with SQLAlchemy (not
yet supported, but the recipe here
http://lethain.com/entry/2008/jul/23/replacing-django-s-orm-with-sqlalchemy/ got
me moving), and it's not clear how to do connection pools and all that
other good stuff.

On Thu, Nov 6, 2008 at 10:29 AM, Randy Syring [EMAIL PROTECTED] wrote:


 I am developing a WSGI based web framework with sqlalchemy.  I am
 unclear about when create_engine() should be called.  I initially
 thought that engine creation and metadata would be initialized per
 process and each thread/request would just get a new session.
 However, I have recently run into error messages when using sqlite
 with the framework in a threaded WSGI server:

 SQLite objects created in a thread can only be used in that same
 thread...

 That led me to this thread:


 http://groups.google.com/group/pylons-discuss/browse_thread/thread/3d3009cd7421c45a/ed0f3dde401ff474?lnk=gst

 Can someone weigh in on this issue?  What are the performance
 ramifications of needing to create an engine on each request as
 opposed to each process?  Do I also need to load my meta data on each
 request or could I just re-bind the engine to the metadata on each
 request?  Should I not bind the engine to the metadata at all but just
 bind it to the session?

 Thanks.
 





[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?

2008-11-06 Thread Michael Bayer


On Nov 6, 2008, at 1:29 PM, Randy Syring wrote:


 I am developing a WSGI based web framework with sqlalchemy.  I am
 unclear about when create_engine() should be called.  I initially
 thought that engine creation and metadata would be initialized per
 process and each thread/request would just get a new session.
 However, I have recently run into error messages when using sqlite
 with the framework in a threaded WSGI server:

 SQLite objects created in a thread can only be used in that same
 thread...

 That led me to this thread:

 http://groups.google.com/group/pylons-discuss/browse_thread/thread/3d3009cd7421c45a/ed0f3dde401ff474?lnk=gst

 Can someone weigh in on this issue?  What are the performance
 ramifications of needing to create an engine on each request as
 opposed to each process?  Do I also need to load my meta data on each
 request or could I just re-bind the engine to the metadata on each
 request?  Should I not bind the engine to the metadata at all but just
 bind it to the session?


you should definitely create the engine and metadata on a per-process  
basis.   When using SQLite, the engine automatically chooses the  
SingletonThreadPool connection pool, which will maintain a single  
SQLite connection per application thread, which is never moved across  
threads (unless you did so explicitly).   The error you're getting  
would only occur if you are sharing the connection returned by the  
engine across threads, which can also occur if you're using a single  
Session that's bound to a connection across threads.  When using the  
scoped_session() manager, this also should not occur - some  
description of this lifecycle is at
http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_lifespan.

The scoped_session approach is widely used in conjunction with sqlite  
in many web frameworks including Pylons, Turbogears and Zope so you'd  
have to ensure that your specific approach is not sharing a single  
Connection or Session between threads.
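
To make that concrete, a minimal sketch of the per-process setup (the WSGI
handler and database URL here are illustrative only):

from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import scoped_session, sessionmaker

# created once per process, at module/startup level
engine = create_engine('sqlite:///app.db')
metadata = MetaData(bind=engine)
Session = scoped_session(sessionmaker(bind=engine))

def handle_request(environ, start_response):
    try:
        session = Session()   # thread-local session for this request/thread
        # ... query and flush using session ...
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['ok']
    finally:
        Session.remove()      # discard the thread-local session at request end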




[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?

2008-11-06 Thread Michael Bayer
if you're using create_engine(), you're using a connection pool.  The  
diagram at http://www.sqlalchemy.org/docs/05/dbengine.html illustrates  
this.
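
For example (the URL and pool values here are arbitrary; the page above lists
the available pool options):

from sqlalchemy import create_engine

# the engine owns a connection pool; its behavior is tuned via create_engine()
engine = create_engine('postgres://user:pw@localhost/test',
                       pool_size=10, max_overflow=20, pool_recycle=3600)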


On Nov 6, 2008, at 1:35 PM, John Fries wrote:

 Yes, I would also like to know what is the appropriate way to use  
 SQLAlchemy with respect to a WSGI server. I've been using Django  
 with SQLAlchemy (not yet supported, but the recipe here 
 http://lethain.com/entry/2008/jul/23/replacing-django-s-orm-with-sqlalchemy/ 
  got me moving), and it's not clear how to do connection pools and  
 all that other good stuff.

 On Thu, Nov 6, 2008 at 10:29 AM, Randy Syring [EMAIL PROTECTED]  
 wrote:

 I am developing a WSGI based web framework with sqlalchemy.  I am
 unclear about when create_engine() should be called.  I initially
 thought that engine creation and metadata would be initialized per
 process and each thread/request would just get a new session.
 However, I have recently run into error messages when using sqlite
 with the framework in a threaded WSGI server:

 SQLite objects created in a thread can only be used in that same
 thread...

 That led me to this thread:

 http://groups.google.com/group/pylons-discuss/browse_thread/thread/3d3009cd7421c45a/ed0f3dde401ff474?lnk=gst

 Can someone weigh in on this issue?  What are the performance
 ramifications of needing to create an engine on each request as
 opposed to each process?  Do I also need to load my meta data on each
 request or could I just re-bind the engine to the metadata on each
 request?  Should I not bind the engine to the metadata at all but just
 bind it to the session?

 Thanks.



 





[sqlalchemy] Re: persistent result set

2008-11-06 Thread az

On Thursday 06 November 2008 07:28:41 Adam Ryan wrote:
 Thanks a lot for the reply...

  So to suggest an alternative to this you'd have to describe what
  additional behavior is needed, such as is it an issue of overly
  complex filter criterion being inefficient on the server side, or
  you're trying to reduce the number of persist operations, or some
  kind of long-transaction-spanning concern.

 I'm working on a web application so I suppose the latter is of
 concern.

 Specifically, once the server has responded, I no longer have a
 handle on the SQLA session or query objects (they've been
 removed/deleted). But, I do have a beaker session, so I guess my
 question is how to best recreate a user's last result set so as to
 perform the user's current manipulation on it.

 I can't store the query object in the beaker session because it
 can't be pickled, right?  So is there a way to dump a string
 representation of a query object, then recreate it and associate it
 with a new SQLA Session?

 Or am I missing the obvious?  Thanks again for the help.

i've invented a way to deal with long-user-transaction-spanning, by 
taking the transactionism out of the db, although that may not apply 
to web apps. Once a user trans starts, it obtains a UserTrans id 
(just a row in some table), and all objects created/touched in the 
transaction reference that id. Once the user commits the transaction, 
i.e. hits OK on the outermost dialog-window, all these ids are 
replaced by None. If the user hits Cancel overall, all the (new) 
objects are removed. When an object has a non-None user-trans-id, 
no one but that particular user sees it. So that also solves privacy 
issues. In a bitemporal situation (where all updates/deletes also 
become inserts of new versions) this is enough to let a user trans 
span whatever time / ui-workflow it needs - it's not a db-trans at 
all anymore. For a plain situation where updates are updates and 
deletes are deletes, probably something more would be needed to 
simulate them or restore the status at Cancel.
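
A purely illustrative sketch of that pattern (the Item class, column names and
helper functions below are invented for this example, not taken from the post):

from sqlalchemy import Column, Integer, or_
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    user_trans_id = Column(Integer, nullable=True)  # None means publicly visible

def visible_items(session, current_trans_id):
    # committed rows (user_trans_id is None) plus rows owned by this user trans
    return session.query(Item).filter(
        or_(Item.user_trans_id == None,
            Item.user_trans_id == current_trans_id))

def commit_user_trans(session, trans_id):
    # "OK" on the outermost dialog: make everything public
    session.query(Item).filter(Item.user_trans_id == trans_id).update(
        {'user_trans_id': None})

def cancel_user_trans(session, trans_id):
    # "Cancel": discard objects created inside the user transaction
    session.query(Item).filter(Item.user_trans_id == trans_id).delete()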

but as i'm rereading your stuff, it's more a question of 
how-to-save-the-user-built-query. you can try a recorder/player 
pattern, if all else fails...

ciao
svil




[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?

2008-11-06 Thread Randy Syring

Thank you for taking the time to respond, I really appreciate it!

On Nov 6, 2:46 pm, Michael Bayer [EMAIL PROTECTED] wrote:

 you should definitely create the engine and metadata on a per-process  
 basis.   When using SQLite, the engine automatically chooses the  
 SingletonThreadPool connection pool, which will maintain a single  
 SQLite connection per application thread, which is never moved across  
 threads (unless you did so explicitly).  

Ah, well there is something I have overlooked.  I have been forgetting
that there is a connection object since I never use it directly.  I
was actually thinking that the engine was the connection, but I see
now that is not accurate.  But would I need to close the connection
explicitly after each request?

 The error you're getting  
 would only occur if you are sharing the connection returned by the  
 engine across threads, which can also occur if you're using a single  
 Session that's bound to a connection across threads.  When using the  
 scoped_session() manager, this also should not occur - some  
 description of this lifecycle is at
 http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_lifespan .

I do not believe that I am sharing the connection object across
threads, at least not deliberately.  The only sqlalchemy objects I am
working with are an engine (which is stored at the process level), the
metadata (which is unbound and stored at the process level), and a
scoped session.  At the end of each request, I call remove() on the
scoped session class, which I assumed was enough.  Would there be
anything else I should do at the end of a request in order to clean
up?

Also, I am using Elixir.  Is it possible that Elixir is holding on to
a connection object I don't know about?  It uses scoped sessions by
default as well.

Thanks!



[sqlalchemy] Re: persistent result set

2008-11-06 Thread Michael Bayer


On Nov 6, 2008, at 12:28 AM, Adam Ryan wrote:


 I can't store the query object in the beaker session because it can't
 be pickled, right?  So is there a way to dump a string representation
 of a query object, then recreate it and associate it with a new SQLA
 Session?

 Or am I missing the obvious?  Thanks again for the help.


OK, after some cobbling I have a great solution for this, I built a  
custom pickler/unpickler which stores key objects sparsely on the  
pickle side, can rebind back to any metadata/session/engine on the  
unpickle side.   So you can pickle a whole Query object, or whatever,  
with no problem.  The size of the string is still a few thousand  
characters but not nearly as huge as it would be if it was pickling  
the whole map of Table and mapper objects associated.

Check out the svn trunk, and usage is as follows (this is from the  
docstring):

from sqlalchemy.ext.serializer import loads, dumps
metadata = MetaData(bind=some_engine)
Session = scoped_session(sessionmaker())

# ... define mappers

query = Session.query(MyClass).filter(MyClass.somedata == 'foo').\
    order_by(MyClass.sortkey)

# pickle the query
serialized = dumps(query)

# unpickle.  Pass in metadata + scoped_session, both are optional
# depending on what you're deserializing
query2 = loads(serialized, metadata, Session)

print query2.all()


this is all brand new code and several structures have special  
serialization procedures.  If you're playing with it and get errors,  
we probably need to add more structures to the list of special-needs  
objects.   The unit tests are fleshing out a good deal of scenarios so  
far but I can think of more which haven't been tested yet.

For usage with Beaker, you're going to want to do this serialization  
procedure by hand and then place the resulting string in the session,  
then use the specialized loads() on the way out.
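
For example, assuming a dict-like Beaker session object (the 'saved_query' key
is just illustrative):

from sqlalchemy.ext.serializer import loads, dumps

# on the way out of a request: serialize by hand and stash the string
beaker_session['saved_query'] = dumps(query)
beaker_session.save()

# on a later request: rebuild the Query against the current metadata / Session
query = loads(beaker_session['saved_query'], metadata, Session)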






[sqlalchemy] Re: persistent result set

2008-11-06 Thread Adam Ryan


Wow, this is great stuff.  I'll have to spend some time trying it out.

My big question, though, is how to interact with this stored query
using and_ and or_?

I went off on a different track where I would store a list of filter
objects with their respective method (and_, or_), and put them back
together in a nested filter beast like so:

# filter_list is a list of tuples: ( filter_object, filter_method)

nested = None
for filter, method in filter_list:
    if not nested:
        nested = filter
    else:
        nested = method(filter, nested)

query = query.filter(nested)
res = query.all()

If instead I stored the query object, how would I then and_ and or_
it?

Thanks a lot Michael.  This is extremely helpful and gracious.

- Adam


On Nov 6, 3:15 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 6, 2008, at 12:28 AM, Adam Ryan wrote:



  I can't store the query object in the beaker session because it can't
  be pickled, right?  So is there a way to dump a string representation
  of a query object, then recreate it and associate it with a new SQLA
  Session?

  Or am I missing the obvious?  Thanks again for the help.

 OK, after some cobbling I have a great solution for this, I built a  
 custom pickler/unpickler which stores key objects sparsely on the  
 pickle side, can rebind back to any metadata/session/engine on the  
 unpickle side.   So you can pickle a whole Query object, or whatever,  
 with no problem.  The size of the string is still a few thousand  
 characters but not nearly as huge as it would be if it was pickling  
 the whole map of Table and mapper objects associated.

 Check out the svn trunk, and usage is as follows (this is from the  
 docstring):

      from sqlalchemy.ext.serializer import loads, dumps
      metadata = MetaData(bind=some_engine)
      Session = scoped_session(sessionmaker())

      # ... define mappers

      query = Session.query(MyClass).filter(MyClass.somedata == 'foo').\
          order_by(MyClass.sortkey)

      # pickle the query
      serialized = dumps(query)

      # unpickle.  Pass in metadata + scoped_session, both are optional  
 depending on what you're deserializing
      query2 = loads(serialized, metadata, Session)

      print query2.all()

 this is all brand new code and several structures have special  
 serialization procedures.  If you're playing with it and get errors,  
 we probably need to add more structures to the list of special-needs  
 objects.   The unit tests are fleshing out a good deal of scenarios so  
 far but I can think of more which haven't been tested yet.

 For usage with Beaker, you're going to want to do this serialization  
 procedure by hand and then place the resulting string in the session,  
 then use the specialized loads() on the way out.



[sqlalchemy] Re: declarative, autoload and late binding to engine

2008-11-06 Thread Martijn Faassen

Hi there,

MikeCo wrote:
 I have an application that will need to bind to several different
 databases with the same structure. The databases to be accessed are
 not known at import time, that will be determined later while the
 application is running. I want to use declarative and autoload the
 columns.  Autoload needs a database connection available when a class
 is first compiled to do the database introspection, that is the part I
 can't figure out. Maybe a custom metaclass?
 
 If this is already solved, can someone point me at the answer? I
 haven't found it yet.

I've solved it, but the solution is rather particular to Zope technology 
right now. It's spread around z3c.saconfig and megrok.rdb:

http://svn.zope.org/z3c.saconfig/trunk

http://svn.zope.org/megrok.rdb/trunk/

Let me sketch out what's going on:

* z3c.saconfig sets up a special scoped session, with a custom session 
factory and scopefunc. The session factory looks up in the Zope 3 
component architecture for a way to create an engine, but you could use 
some other strategy.

* the engine factory is also looked up dynamically by the session 
factory and in turn creates a SQLAlchemy engine (or returns an existing 
one if the engine is already created).

* when an engine is first created, an event is fired. In effect this 
event is fired when a session is needed for the first time in the 
application.

Now megrok.rdb hooks into this event. It will reflect any tables that 
need to be reflected and create any tables that need to be created (if 
their structure is defined in Python).

reflection in the simplest case can be done like this:

metadata.reflect(bind=engine)

and creation can be done like this:

metadata.create_all(engine)

Now at the point the event is handled, previously the various 
declarative classes have been associated with the metadata object.

megrok.rdb actually doesn't use the declarative extension's metaclass 
approach, but instead drives the declarative extension's 
instrument_declarative from a grokker (see the martian library). In my 
approach I use an explicit metadata object. I believe the declarative 
extension creates one on the fly (but you can get to it with Base.metadata).

Anyway, all of this sounds very complicated, as the Zope and Grok stuff 
take particular approaches towards configurability. I think the core of 
this approach is:

* hook up your classes to a metadata (declarative does this for you)

* at the appropriate time, do metadata.reflect(bind=engine)
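
Outside of Zope/Grok, a minimal sketch of that core approach might look like
this (the class, table name and helper function are illustrative, not Martijn's
actual code):

from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import instrument_declarative

metadata = MetaData()
class_registry = {}

class User(object):
    __tablename__ = 'user'    # table to reflect for this class

def bind_to_database(url, classes=(User,)):
    # called at runtime, once the target database is actually known
    engine = create_engine(url)
    metadata.reflect(bind=engine, only=[c.__tablename__ for c in classes])
    for cls in classes:
        cls.__table__ = metadata.tables[cls.__tablename__]
        instrument_declarative(cls, class_registry, metadata)
    return engine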

Regards,

Martijn


