[sqlalchemy] Re: Microsoft Access Support

2007-08-08 Thread Paul Johnston
Hi,

It uses pyodbc, plus direct win32com calls to do table reflection. To make
support more solid, we would need to move to adodbapi (and fix problems with
adodbapi). Interestingly, you get more functional SQL through the ADO
interface than through the ODBC interface. At the moment it's plenty
functional enough for me, though.

More info on the wiki, http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

Paul


On 8/7/07, Michael Bayer [EMAIL PROTECTED] wrote:


 hi paul -

 thanks much for this !  are we using pyodbc for this ?


 On Aug 5, 7:20 pm, Paul Johnston [EMAIL PROTECTED] wrote:
  Hi,
 
  I have just committed Microsoft Access support in the 0.4 branch. It's
  pretty basic (e.g. a lot of unit tests fail) but it does have functional
  table reflection. For me, this is a big help in migrating legacy Access
  databases to newer formats.
 
  The table reflection is based on jet2sql.py by Matt Keranen, which has
  been a real help.
 
  I've included some info on Access support on the wiki,
 http://www.sqlalchemy.org/trac/wiki/DatabaseNotes
 
  Enjoy,
 
  Paul


 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Using SA to move data between databases

2007-08-08 Thread Paul Johnston
Hi,

heh, adding this raw-data-copy to the autoload.py
 makes quite a database-copier/migrator...


Yes indeed; I used this yesterday to migrate a legacy database, and it was
impressively quick and easy.

I can see we've got similar requirements in this area. Perhaps you and I
could work together to package up some of these techniques in a more
polished manner. Maybe dbcook is the place to do this; I've still not
downloaded it.

Regards,

Paul




[sqlalchemy] I can't use YEAR column type with sqlalchemy/databases/mysql.py

2007-08-08 Thread hiroshiykw

Hi.
In SQLAlchemy 0.3.10 (/ Python 2.5), I found that the YEAR column type is
created as a TEXT column by the following simple create query:
---
from sqlalchemy import *
from sqlalchemy.databases.mysql import *
db = create_engine("mysql://[EMAIL PROTECTED]/test")
metadata = MetaData(db)
year_table = Table("year_table", metadata,
    Column("y", MSYear)
)
year_table.create()


I read the source, and it seems MSYear's get_search_list() method
returns the sequence (TEXT, MSYear, String, TypeEngine,
AbstractType). In this case the first item, TEXT, is used.

I'm not familiar with SQLAlchemy's whole architecture, so for now I'm
working around this problem by adding a get_search_list() method to
MSYear like this:
--
class MSYear(sqltypes.String):
    """MySQL YEAR type, for single byte storage of years 1901-2155"""

    def get_col_spec(self):
        if self.length is None:
            return "YEAR"
        else:
            return "YEAR(%d)" % self.length

    def get_search_list(self):
        return [self.__class__]
--
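For illustration, the search-list lookup this workaround short-circuits can be sketched in plain Python (hypothetical names; the real logic lives in sqlalchemy.types in the 0.3 line). The dialect keeps a mapping of type class to DDL spec, and the first class in the search list with an entry wins, which is why an order that puts TEXT ahead of MSYear emits a TEXT column:

```python
# Toy model of the get_search_list() resolution (illustrative names,
# not SA's actual classes).  The first class in the search list that
# has a registered column spec determines the emitted DDL.

class AbstractType(object): pass
class TypeEngine(AbstractType): pass
class String(TypeEngine): pass
class TEXT(String): pass
class MSYear(String): pass

colspecs = {TEXT: "TEXT", MSYear: "YEAR"}

def resolve(search_list):
    for cls in search_list:
        if cls in colspecs:
            return colspecs[cls]
    raise LookupError("no column spec found")

print(resolve([TEXT, MSYear, String]))  # buggy order: TEXT wins
print(resolve([MSYear]))                # workaround's order: YEAR
```

The workaround above simply returns a one-element search list, so the MSYear spec can no longer be shadowed.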
I think this should be fixed in a proper manner.

Regards,

Hiroshi Ayukawa





[sqlalchemy] Re: Aggregation

2007-08-08 Thread Paul Colomiets

[EMAIL PROTECTED] wrote:
 hi, i have similar idea/need within dbcook, although on a somewhat 
 higher level:
 pre
 cache_results/: (dbcook/SA) add-on for automatically updated database 
 denormalisation caches of intermediate results, each one depending on 
 a particular pattern of usage. Wishful syntax:

 class SomeCacheKlas( Base):
 fieldname = cache_aggregator( klas.field, AggrFilterCriteria)
 #e.g.
 #class Cache4averagePerson( Base):
 #age= cache_agregators.Average( Person.age, Filter1 )
 #salary = cache_agregators.Sum( Person.salary, Filter2 )
 /pre
 i was thinking on using triggers and/or sql-functions but then this is 
 just one way to do it - and the idea is to hide the implementation.

   
I've seen that, but found no code.
Is it something only planned for dbcook?
 Can your implementation be extended to use: 
  a) more complex aggregator expressions (e.g. average(), that is 
 sum() / count(), and similar dependencies)
   
Definitely can. Now you can just use a property for that:

avg = property(lambda self: self.sum / self.count)

I currently have no idea how to make it better (less writing?).
Documenting the extension interface is also on the to-do list.
  b) more complex filters - e.g. not max() on all rows, but on some 
 subset/select
   
Yes. I'm looking for a syntax for that. The functionality seems
very similar to `relation()`, so maybe
`a.Max(..., primaryjoin=...)` would do.
 think of generic report calculations/aggregations, and putting those 
 into some cacheing table. 
 Eventualy getting the report being built on-the-run - distributing the 
 big wait over the atomary updates.
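The sum/count-plus-property idea discussed above can be sketched in plain Python (no SA involved; the class and method names are illustrative). The cache row maintains the simple aggregates incrementally, and composite aggregates such as the average are derived properties:

```python
# Sketch of a denormalisation-cache row: sum and count are maintained
# on each update, avg is computed from them on demand.

class Cache4averagePerson(object):
    def __init__(self):
        self.sum = 0      # maintained incrementally
        self.count = 0

    def add(self, salary):
        # the "atomary update" step: fold one new row into the cache
        self.sum += salary
        self.count += 1

    avg = property(lambda self: self.sum / float(self.count))

c = Cache4averagePerson()
c.add(100)
c.add(200)
print(c.avg)   # -> 150.0
```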





[sqlalchemy] refresh() with scoped_session ?

2007-08-08 Thread Alexandre CONRAD

Hi,

I'm migrating my Pylons application to the latest version of Pylons 
(0.9.6rc2) and SA (0.4.0dev-r3205) using the new scoped_session instead 
of the deprecated SessionContext. From the SA docs (0.4), there's a note 
about how .flush() works:

http://www.sqlalchemy.org/docs/04/unitofwork.html#unitofwork_api_flush_whatis

And there I saw the refresh() method, which I had never paid attention to.

Here is my scenario (abbreviated):

-
Session = scoped_session(sessionmaker(autoflush=False, transactional=False))
mapper = Session.mapper

address_table = Table(..)
class Address(object): pass
mapper(Address, address_table, props={...}, order_by="name")

user_table = Table(..)
class User(object): pass
mapper(User, user_table, props={...}, order_by="email")


user = User.query.get(id)
address = Address()

# This will auto-query the DB to get the address listing.
user.addresses.append(address)

Session.flush() # Uninstantiated 'Session' object

# This returns the user's addresses, but the newly appended address will
# be at the end of the list, not ordered correctly.
#return user.addresses

# So I re-build my query and re-fetch from the database that will return
# the user's addresses correctly ordered
return model.Address.query.filter_by(user=user).all()
-

I guess the last line is OK, but I thought of using the refresh() method 
as it's proposed. But refresh() doesn't seem to be available on my 
uninstantiated scoped_session. Instantiating the Session gives me a 
refresh() method (and a bunch of others), but then the code breaks 
earlier, at the mapper() stage, complaining:

TypeError: mapper() got an unexpected keyword argument 'order_by'

I might not be using scoped_session correctly, though. Please let me know 
if I'm doing something wrong.

Regards,
-- 
Alexandre CONRAD





[sqlalchemy] Re: refresh() with scoped_session ?

2007-08-08 Thread Michael Bayer

hi alexandre -

I've implemented all the missing class-level methods on ScopedSession in 
r3212, so you should be able to call refresh().

But also, if you want to say Session() and then work with that, it is 
also an intended usage model, although the Session.xxx methods should in 
theory be all you need.

Also, I cannot reproduce your order_by keyword argument issue; the 
mapper() function provided by ScopedSession passes all **kwargs 
generically through to the actual mapper() function, so you'll have to 
show me more specifically how you are getting that result.

- mike
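The class-level proxying described above can be sketched stand-alone: a per-thread registry hands out one session per thread, and attribute access on the registry object forwards to that session. This is a rough illustration of the pattern only (plain Python with made-up class names, not SA's actual ScopedSession code):

```python
import threading

# Sketch of the scoped-session pattern: Session.xxx proxies to a
# per-thread session, while Session() returns that same instance.

class _FakeSession(object):
    def __init__(self):
        self.flushed = False
    def flush(self):
        self.flushed = True

class ScopedRegistry(object):
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()
    def __call__(self):
        # Session() -> the current thread's session, created on demand
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session
    def __getattr__(self, name):
        # Session.flush etc. proxy to the current thread's session
        return getattr(self(), name)

Session = ScopedRegistry(_FakeSession)
Session.flush()                  # class-level call, proxied
print(Session() is Session())    # -> True (same instance per thread)
print(Session().flushed)         # -> True
```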






[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Paul Johnston

Dave,

I recently upgraded from SQLAlchemy 0.3.8 to 0.3.10.  The only
problem I ran into is that 0.3.10 no longer allows you to set datetime
columns using strings.
  

What database are you using? I did some rework around MSSQL and dates 
between those versions.

You now need to use datetime.* objects; it was a bug that strings were 
previously allowed. The problem with strings is that they make you 
dependent on the date format configured in the database. This once caused 
me a problem in a live app, when the database got rebuilt and configured 
a little differently.

Paul




[sqlalchemy] PyODBC and scope_identity

2007-08-08 Thread Paul Johnston

Hi,

Those of you using MSSQL may remember the fun we've been having with 
scope_identity(). In short this is a way to reliably fetch inserted IDs 
from tables, avoiding a bug related to triggers. The problem was that 
PyODBC needed a mod by Michael Jahn to do this.

Well, I see PyODBC 2.0.37 has just been released with the changes 
included. It seems to work great. And if you use the latest trunk (or 
0.3 branch) of SA, it will now automatically use_scope_identity if you 
have a suitable version of PyODBC installed.

This has been by far the trickiest bug to fix; I'm glad it's done!

Paul




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
FYI, I believe there is a ticket to make improvements in the type system 
that would allow strings to be given as date input (among other 
conveniences), and I don't think it's a bad thing. Lots of databases make 
the conversion anyway, and it's ultimately a pretty confusing thing to 
have various Dialects treat data in various ways.

I've been bugging Mike for a long, long time about a better type system, 
and I think I may have oversold it and made it sound too big and 
grandiose. All it needs to be is a layer that adapts the incoming type to 
the desired column type. This case would be a good example: the incoming 
string would be converted into a datetime (or whatever the Dialect's 
DBAPI needed) and sent to the database that way. Conversely, when 
fetching data, the Dialect could adapt the data back to the standard 
Python type for the column (as is pretty much already done).
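A toy sketch of such an adaptation layer (plain Python; the type names and the `coercions` table are illustrative, not SA API). Each abstract column type owns a coercion that normalises whatever a given DBAPI hands back into one standard Python type, so rows from different backends come out looking the same:

```python
from decimal import Decimal

# Hypothetical adaptation layer: column-type name -> coercion that
# normalises a value to the standard Python type for that column.

coercions = {
    "string":  lambda v: v if isinstance(v, str) else str(v),
    "numeric": lambda v: v if isinstance(v, Decimal) else Decimal(str(v)),
}

def adapt_row(col_types, row):
    """Coerce each value in a fetched row to its column's standard type."""
    return tuple(coercions[t](v) for t, v in zip(col_types, row))

# e.g. a DBAPI that returns unicode strings and floats:
row = adapt_row(("string", "numeric"), (u"abc", 1.5))
print(row)   # ('abc', Decimal('1.5'))
```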

IMO this is going to become increasingly important with SA being used for
data migration as we've recently seen (thanks to you) and with upcoming
sharding support. Type differences in those cases can make life hell.

Rick







[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Michael Bayer
this particular feature is easily implemented right now as an end user 
recipe, using TypeDecorator.  no new type system is needed for this one 
(although there is a ticket for such).

the advantage to TypeDecorator is that you get to define what kind of 
date representation you'd like to convert from/to, i.e. a particular 
string representation, mxDateTime, etc.  that the string format of 
MM/DD/YYYY HH:MM:SS just happened to work doesn't seem to me to be 
terribly useful, considering that if you were in some other country the 
date would be entered as DD/MM/YYYY or such.  hardcoding a 
YYYY-MM-DD HH:MM:SS format into the code doesn't seem like such a 
solution.







[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Paul Johnston

Hi,

 I've been bugging Mike for a long, long time about a better type 
 system, and I think I may have oversold it and made it sound too big 
 and grandiose.  All it needs to be is a layer that adapts the incoming 
 type to the desired column type. 

That would help with the situation in question. I'm not sure of its need 
for migration; I think there's something a little simpler we need - some 
documentation. For all the SA types, we should document the type that 
convert_result_value returns and that convert_bind_param expects, and 
check that all the DBAPIs stick to this (probably with unit tests). I'm 
pretty sure there's some inconsistency at the minute.

Paul
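A sketch of the kind of contract test being suggested, against a hypothetical stand-in type (StubDateTime and check_round_trip are illustrative only, not SA API; real dialect types would be run through the same assertions):

```python
from datetime import datetime

# Contract sketch for a DateTime-like SA type: convert_bind_param()
# must take a datetime (never a string), convert_result_value() must
# return a datetime, and the two must round-trip.

class StubDateTime(object):
    def convert_bind_param(self, value):
        if not isinstance(value, datetime):
            raise TypeError("expected datetime, got %r" % (value,))
        return value.strftime("%Y-%m-%d %H:%M:%S")

    def convert_result_value(self, value):
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

def check_round_trip(sa_type, value):
    # the documented contract, expressed as a unit-testable check
    assert sa_type.convert_result_value(
        sa_type.convert_bind_param(value)) == value

check_round_trip(StubDateTime(), datetime(2007, 8, 8, 12, 0, 0))
```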




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
 I think there's something a little simpler we need - some
 documentation. For all the SA types we should document the type that
 convert_result_value returns, and that convert_bind_param expects, and
 check that all the DBAPIs stick to this (probably with unit tests). I'm
 pretty sure there's some inconsistency at the minute.

Hi Paul,

As long as you're working with only one database, everything is fine. The 
SA Dialect specifies the type conversion, and handles any discrepancies 
between what the DBAPI gives and what the SA published type is, via 
convert_bind_param() and convert_result_value().

The issue comes up when working with multiple databases. I first noticed 
the problem when I was surprised to see all strings coming back from 
SQLite as unicode(), not str(). I had to convert the SQLite output to 
MSSQL input when working with the two databases. I would have expected 
SA, as a database abstraction layer, to handle stuff like this. But for 
the most part, the various Dialects just return what the DBAPI gives back.

And the various DBAPIs each have their own idea of what the proper Python 
type is for the various SQL types. We've seen this before in various 
flavors: String vs. Unicode, float vs. Decimal, String vs. date (esp. 
thinking of SQLite here), etc. If you want to shard data across, say, 
SQLite and PostgreSQL, it would be nice not to have to worry about the 
data source being the determinant of the type of data coming back. 
Likewise for a data migration from one to the other. If you were to 
migrate from SQLite to MSSQL, you'd end up with unicode data in text 
columns unless you took specific steps to convert it.

Whether the type mappings are determined by convention and unit tests as 
you suggest, or by an adaptation layer as I imagine, is of course up for 
grabs. Certainly the convention method has inertia on its side. But is it 
really the best way?




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
Yeah, of course date formats vary; it's one of the trickier issues in 
type adaptation, can be computationally expensive, etc. A full-on date 
parser is probably just way out of scope for SA (the excellent dateutil 
package already handles it pretty well). I'm of the opinion that it would 
not be so horrible to restrict string-to-date conversions to the ISO 
format, and fail otherwise.
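A minimal sketch of that restriction (plain Python; coerce_datetime is a hypothetical helper, not SA API): accept datetime objects, accept strings only in ISO format, and fail loudly on anything else:

```python
from datetime import datetime

def coerce_datetime(value):
    """Pass datetimes through; parse ISO-format strings; reject the rest."""
    if isinstance(value, datetime):
        return value
    if isinstance(value, str):
        # ISO format only -- anything else raises ValueError
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    raise TypeError("expected datetime or ISO string, got %r" % (value,))

print(coerce_datetime("2007-08-08 17:36:00"))  # -> 2007-08-08 17:36:00
# coerce_datetime("08/08/2007") would raise ValueError: not ISO format
```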






[sqlalchemy] Re: Aggregation

2007-08-08 Thread sdobrev

On Wednesday 08 August 2007 12:18:24 Paul Colomiets wrote:
 [EMAIL PROTECTED] wrote:
  hi, i have similar idea/need within dbcook, although on a
  somewhat higher level:
  pre
  cache_results/: (dbcook/SA) add-on for automatically updated
  database denormalisation caches of intermediate results, each one
  depending on a particular pattern of usage. Wishful syntax:
 
  class SomeCacheKlas( Base):
  fieldname = cache_aggregator( klas.field,
  AggrFilterCriteria) #e.g.
  #class Cache4averagePerson( Base):
  #age= cache_agregators.Average( Person.age, Filter1 )
  #salary = cache_agregators.Sum( Person.salary, Filter2 )
  /pre
  i was thinking on using triggers and/or sql-functions but then
  this is just one way to do it - and the idea is to hide the
  implementation.

 I've seen that, but found no code.
 Is it something only planned for dbcook?
yes, to-be-done, as a way to avoid the DB becoming denormalized in some 
particular unplanned/uncontrolled way that fits some reports and screws 
all else. It takes me a great fight to convince users that 
denormalization is outside the app's bare model... it's something u lay 
over it.

  Can your implementation be extended to use:
   a) more complex aggregator expressions (e.g. average(), that is
  sum() / count(), and similar dependencies)

 Definitely can. Now you can just use a property for that:
 avg = property(lambda self: self.sum / self.count)
naaah, sorry, that was too easy. i mean more complex aggregation 
functions... i guess it can, if u can do sum=sum+x, then u could do 
sq=sq+x*x/2

   b) more complex filters - e.g. not max() on all rows, but on
  some subset/select

 Yes. I'm looking for syntax for that. Functionality seems
 very similar for `relation()` so may be
 `a.Max(...,primaryjoin=...)` would do.

hmmm... relation... yes, it is like a plural relation but getting a 
singular result out of it, and never ever loading the items. 
Are u using something along the lines of PropertyLoader?

ciao
svil




[sqlalchemy] Re: Using SA to move data between databases

2007-08-08 Thread sdobrev

On Wednesday 08 August 2007 11:44:57 Paul Johnston wrote:
 Hi,

 heh, adding this raw-data-copy to the autoload.py

  makes quite a database-copier/migrator...

 Yes indeed, I used this yesterday to migrate a legacy database, it
 was impressively quick and easy.

 I can see we've got similar requirements in this area. Perhaps you
 and I could work together to package up some of these techniques in
 a more polished manner. Maybe dbcook is the place to do this; I've
 still not downloaded it.

yes, it can be done there. 
Although to me it's all plain sqlalchemy (not using anything out of 
dbcook's layers), just extensions of MetaData: _reflect(), 
_copy_data(), _diff(). Once these things go into MetaData methods in 
one way or another, it can go back into UsageRecipes, as it would all 
be 10-15 lines of code.
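As a rough illustration of the reflect-then-copy idea, here is a stand-alone sketch using two sqlite3 databases in place of real source and target engines (stdlib only; the real _reflect()/_copy_data() helpers work against SA MetaData instead):

```python
import sqlite3

# Two in-memory databases stand in for the source and target engines.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER, name TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)", [(1, "a"), (2, "b")])

# "reflect": recover the table definition from the source catalog...
ddl, = src.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'users'").fetchone()
dst.execute(ddl)

# ..."copy_data": stream the rows across
rows = src.execute("SELECT * FROM users")
dst.executemany("INSERT INTO users VALUES (?, ?)", rows)

print(dst.execute("SELECT count(*) FROM users").fetchone()[0])  # -> 2
```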
