[sqlalchemy] Re: Automatically loading data into objects

2007-09-17 Thread Hermann Himmelbauer

On Friday, 14 September 2007 at 23:52, jason kirtland wrote:
 Hermann Himmelbauer wrote:
  Hi,
  In one of my database tables I have a varchar that is mapped to an object
  with a string attribute. This specific varchar should however be
  represented by a certain Python object, therefore it would be very handy
  if there were a way to automatically load/represent this data. Is
  there support for this in SQLAlchemy?
 
  I thought about writing a property for my attribute but I'm unsure if
  this is compatible with SQLAlchemy, as SA sets up its own properties for
  attributes when the object is mapped, so probably mine would be
  overwritten?

 You could either create a custom type for the column to do translation
 to/from object form or use a property as a proxy to the varchar
 attribute.  They're both pretty easy to implement:

 http://www.sqlalchemy.org/docs/04/types.html#types_custom
 http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding
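
A minimal sketch of the first (custom type) suggestion, assuming a
TypeDecorator with process_bind_param()/process_result_value() hooks (check
the linked types_custom page for the exact hook names in your version);
EmailAddress here is just a made-up stand-in for the Python object in question:

import sqlalchemy.types as types

class EmailAddress(object):
    def __init__(self, text):
        self.user, self.domain = text.split('@', 1)
    def __str__(self):
        return '%s@%s' % (self.user, self.domain)

class EmailAddressType(types.TypeDecorator):
    impl = types.String          # stored in the database as a plain VARCHAR

    def process_bind_param(self, value, dialect):
        # object -> varchar on the way into the database
        if value is None:
            return None
        return str(value)

    def process_result_value(self, value, dialect):
        # varchar -> object on the way out of the database
        if value is None:
            return None
        return EmailAddress(value)

# usage: Column('email', EmailAddressType(100)); the mapped attribute then
# holds EmailAddress instances instead of raw strings.  The property-proxy
# alternative from the second link works too if you'd rather keep the raw
# varchar around on the instance.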

Perfect, thanks!

Best Regards,
Hermann

-- 
[EMAIL PROTECTED]
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9  4902 64B4 D16B 2998 93C7




[sqlalchemy] postgres POLYGON data type

2007-09-17 Thread mark . greenbank

Has anyone added support for the POLYGON data type in PostgreSQL? If
so, is there any code that can be shared?

-Mark





[sqlalchemy] Feature suggestion: Description attribute in Tables/Columns

2007-09-17 Thread Hermann Himmelbauer

Hi,
I am creating my database via SQLAlchemy, therefore I use several Column() 
objects in a Table() clause.

For my documentation, it is very handy to gather the table definitions 
directly from my SQLAlchemy-based python code - that's relatively easy, I 
simply import my table-definitions file and look for module attributes that 
end with _table (by convention, I always name my tables like that).

Then I can read the table/columns, format them nicely and put them in my 
documentation.

What's missing, however, are description fields: It would be perfect, if I 
could specify a description attribute for the table and the column directly, 
e.g. like that:

myperson_table = Table('person', metadata, 
  Column('id', Integer, primary_key=True, description = 'Identification Number'),
  Column('name', String(20), description = 'Name of a person'),
  description = 'Table for my Persons')

I could also create my own Table/Column classes that hold this information, 
but maybe this could be implemented in SA as a standard feature?

Best Regards,
Hermann

-- 
[EMAIL PROTECTED]
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9  4902 64B4 D16B 2998 93C7




[sqlalchemy] Re: postgres POLYGON data type

2007-09-17 Thread sc0ttbeardsley

On Sep 17, 5:03 am, [EMAIL PROTECTED] wrote:
 Has anyone added support for the POLYGON data type in PostgreSQL? If
 so, is there any code that can be shared?

I haven't seen POLYGON but the beginnings of such support is over at
bycycle.org[1]. I'm interested in something similar for MySQL's
spatial extensions. Has anyone done MySQL yet?

Scott

[1] http://bycycle.org/2007/01/29/using-postgis-with-sqlalchemy/





[sqlalchemy] Re: Feature suggestion: Description attribute in Tables/Columns

2007-09-17 Thread svilen

suggestions:

on the descriptions... maybe right now you can subclass Column and 
Table (and whatever else) with your own classes that add a .description 
attribute and pass everything else down.

on the table-gathering, instead of parsing the Python source, you could just 
use the metadata and walk it - it will have all your tables, and they 
already know their (DB) names.

Thus you can have a different layout for your documentation than what is in 
the source code.
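
A minimal sketch of that idea, using plain wrapper functions instead of 
subclasses and keeping the text in a side dict so nothing clashes with 
attributes SQLAlchemy already puts on Table/Column (DTable, DColumn and the 
example table are made-up names):

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()
descriptions = {}           # Table/Column object -> documentation text

def DColumn(*args, **kwargs):
    text = kwargs.pop('description', None)
    col = Column(*args, **kwargs)
    descriptions[col] = text
    return col

def DTable(*args, **kwargs):
    text = kwargs.pop('description', None)
    table = Table(*args, **kwargs)
    descriptions[table] = text
    return table

myperson_table = DTable('person', metadata,
    DColumn('id', Integer, primary_key=True, description='Identification Number'),
    DColumn('name', String(20), description='Name of a person'),
    description='Table for my Persons')

# documentation pass: walk the metadata instead of scanning module attributes
for table in metadata.tables.values():
    print table.name, '-', descriptions.get(table, '')
    for col in table.columns:
        print '   ', col.name, col.type, '-', descriptions.get(col, '')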

svilen

On Monday 17 September 2007 16:07:15 Hermann Himmelbauer wrote:
 Hi,
 I am creating my database via SQLAlchemy, therefore I use several
 Column() objects in a Table() clause.

 For my documentation, it is very handy to gather the table
 definitions directly from my SQLAlchemy-based python code - that's
 relatively easy, I simply import my table-definitions file and look
 for module attributes that end with _table (by convention, I
 always name my tables like that).

 Then I can read the table/columns, format them nicely and put them
 in my documentation.

 What's missing, however, are description fields: It would be
 perfect, if I could specify a description attribute for the table
 and the column directly, e.g. like that:

 myperson_table = Table('person', metadata,
   Column('id', Integer, primary_key=True, description = 'Identification Number'),
   Column('name', String(20), description = 'Name of a person'),
   description = 'Table for my Persons')

 I could also create my own Table/Column classes that hold this
 information, but maybe this could be implemented in SA as a
 standard feature?

 Best Regards,
 Hermann






[sqlalchemy] Re: Error combining DISTINCT and ORDER BY in an ORM query on PostgreSQL

2007-09-17 Thread Barry Hart
I've started using add_column() as needed to work around this. Our application 
tends to have a single, fairly generic filter_sort function for each class. 
Some use cases of this function require add_column() and some don't. From the 
caller's perspective, the filter_sort() function is poorly encapsulated because 
it sometimes returns objects and sometimes tuples.

It'd be very helpful if add_column() took an optional parameter 
'omit_from_results' indicating whether the column should be included in the 
results; then our filter_sort() functions could always return objects.
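
For context, a rough sketch of the shape that workaround takes (Promotion and 
mechanic_table come from the test case quoted below; the exact Query API 
depends on the 0.3/0.4 version in use):

# the sort column is added to the select list so DISTINCT and ORDER BY agree,
# but the query now yields (Promotion, mechanic_name) tuples, not bare objects
query = session.query(Promotion)
query = query.add_column(mechanic_table.c.name)
# ... same joins / filter / distinct / order_by as in the original query ...
for promotion, mechanic_name in query:
    print promotion          # callers have to know to unpack the tuple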

Barry

- Original Message 
From: Barry Hart [EMAIL PROTECTED]
To: sqlalchemy@googlegroups.com
Sent: Thursday, September 6, 2007 1:55:14 PM
Subject: Re: [sqlalchemy] Re: Error combining DISTINCT and ORDER BY in an ORM 
query on PostgreSQL

add_column() worked great, thanks!

Barry

- Original Message 
From: Michael Bayer [EMAIL PROTECTED]
To: sqlalchemy@googlegroups.com
Sent: Thursday, September 6, 2007 10:18:58 AM
Subject: [sqlalchemy] Re: Error combining DISTINCT and ORDER BY in an ORM query 
on PostgreSQL

The two immediate ways to do this with the ORM are to use add_column() on Query, or 
to query from a fully constructed select() statement.  The former has the 
effect of returning tuples containing the instance as well as the additional 
columns; the latter doesn't play as nicely with eager loading.  add_column() 
is probably the better option here.
On Sep 6, 2007, at 9:09 AM, Barry Hart wrote:

My application has the following data model:

A promotion has 1 mechanic.
A promotion has 0 or more products.

I want to query for a list of promotions containing products whose name matches 
a pattern, and I want the resulting list to be sorted by a property on the 
promotion's mechanic.

This query does not work with SQLAlchemy 0.3.10 on PostgreSQL 8.1. It fails because 
of the combination of DISTINCT and ORDER BY on a table external to the 
main table for the object. (The use of DISTINCT is necessary because joining 
with the products table creates duplicate records. I'm showing the query results 
in a web application with 'n' records per page. Without DISTINCT, the ORM mapper 
will ignore the duplicates, but the record count for the query no longer reflects 
the actual number of objects to be returned, making it impractical to page 
through the data.)

The error is the following:

sqlalchemy.exceptions.SQLError: (ProgrammingError) for SELECT DISTINCT, ORDER BY
expressions must appear in select list
'SELECT DISTINCT promo_promotion.mechanic_id AS promo_promotion_mechanic_id,
promo_promotion.id AS promo_promotion_id,
promo_promotion.name AS promo_promotion_name
\nFROM promo_promotion
JOIN prod_product ON promo_promotion.id = prod_product.promotion_id
JOIN promo_mechanic ON promo_mechanic.id = promo_promotion.mechanic_id
\nWHERE lower(prod_product.name) LIKE %(lower)s
ORDER BY promo_mechanic.name ASC' {'lower': '%1%'}

The same program works fine on SQLite.

I could write a similar query at the relational level that explicitly includes 
the sort field, but I'd prefer to work at the ORM level. Is there a way to do 
this?

My test case is below.

Barry H.

--

from sqlalchemy import *
from sqlalchemy.ext.assignmapper import assign_mapper

import sqlalchemy
from sqlalchemy.ext import activemapper, sessioncontext

engine = None

def create_engine():
    global engine
    #engine = sqlalchemy.create_engine('sqlite://')
    engine = sqlalchemy.create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/testdb')
    metadata.connect(engine)

def create_session():
    return sqlalchemy.create_session(bind_to=engine)

def fuzzy_search(column, value):
    """Case insensitive search allowing partial string matches."""
    return func.lower(column).like('%%%s%%' % value.lower())

metadata = activemapper.metadata
create_engine()
session = activemapper.Objectstore(create_session)
activemapper.objectstore = session

##
# Classes
##

class Mechanic(object): pass

class Promotion(object):
    def __repr__(self):
        return 'Promotion: %s, mechanic=%s' % (self.name, self.mechanic.name)

class Product(object): pass

##
# Tables
##

mechanic_table = Table(
    'promo_mechanic', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode, unique=True))

promotion_table = Table(
    'promo_promotion', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode, unique=True),
    Column('mechanic_id', Integer, ForeignKey('promo_mechanic.id')))

product_table = Table(
    'prod_product', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode),

[sqlalchemy] Re: postgres POLYGON data type

2007-09-17 Thread Ants Aasma

On Sep 17, 3:03 pm, [EMAIL PROTECTED] wrote:
 Has anyone added support for the POLYGON data type in PostgreSQL? If
 so, is there any code that can be shared?

Here's a patch implementing PGPolygon. The conversion from/to python
types should be done in dbapi IMHO (i.e. psycopg2) but this will work
until it is.

Ants

PS: any idea why the google groups web api doesn't allow attaching files?

Index: lib/sqlalchemy/databases/postgres.py
===================================================================
--- lib/sqlalchemy/databases/postgres.py    (revision 3500)
+++ lib/sqlalchemy/databases/postgres.py    (working copy)
@@ -140,6 +140,28 @@
     def get_col_spec(self):
         return self.item_type.get_col_spec() + '[]'
 
+class PGPolygon(sqltypes.TypeEngine):
+    def dialect_impl(self, dialect):
+        return self
+
+    def bind_processor(self, dialect):
+        def process(value):
+            if value is None:
+                return value
+            return '(' + ','.join('(%G,%G)' % (x, y) for x, y in value) + ')'
+        return process
+
+    def result_processor(self, dialect):
+        points = re.compile(r'\(([0-9.Ee]+),([0-9.Ee]+)\)')
+        def process(value):
+            if value is None:
+                return value
+            return tuple(map(lambda v: tuple(map(float, v)), points.findall(value)))
+        return process
+
+    def get_col_spec(self):
+        return 'POLYGON'
+
 colspecs = {
     sqltypes.Integer : PGInteger,
     sqltypes.Smallinteger : PGSmallInteger,
@@ -177,6 +199,7 @@
     'bytea' : PGBinary,
     'boolean' : PGBoolean,
     'interval':PGInterval,
+    'polygon': PGPolygon,
 }
 
 def descriptor():
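
For what it's worth, a hedged usage sketch assuming the patch above is 
applied (the table, DSN and coordinates are made up):

from sqlalchemy import MetaData, Table, Column, Integer, create_engine
from sqlalchemy.databases.postgres import PGPolygon   # provided by the patch

engine = create_engine('postgres://scott:tiger@localhost/testdb')   # hypothetical DSN
metadata = MetaData()
metadata.bind = engine

areas = Table('areas', metadata,
    Column('id', Integer, primary_key=True),
    Column('outline', PGPolygon))
metadata.create_all()

# polygons go in and come out as tuples of (x, y) float pairs
engine.execute(areas.insert(), outline=((0.0, 0.0), (1.0, 0.0), (1.0, 1.0)))
print engine.execute(areas.select()).fetchone().outline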





[sqlalchemy] Re: postgres POLYGON data type

2007-09-17 Thread jason kirtland

sc0ttbeardsley wrote:
 On Sep 17, 5:03 am, [EMAIL PROTECTED] wrote:
 Has anyone added support for the POLYGON data type in PostgreSQL? If
 so, is there any code that can be shared?
 
 I haven't seen POLYGON but the beginnings of such support is over at
 bycycle.org[1]. I'm interested in something similar for MySQL's
 spatial extensions. Has anyone done MySQL yet?

Just before the first 0.4 beta I started looking at adding support for 
the OGC types for databases that support them.  The basic column 
plumbing in SQLAlchemy is simple- an hour's work, with unit tests- but 
what's less obvious is what to map those column types *to*.  They could 
just be tuples or the like, but that didn't feel as useful as one might 
want.

I've been idly monitoring the progress summary for the GeoDjango project 
to see if their efforts scare up any clear winners for python-side 
representation of the geometry types.  They seem to be mapping to 
ctypes-fronted native classes, which feels like a good direction to me.

What do all of you GIS folks think about the python mappings?  Is there 
a clear approach, or would the type implementation (e.g. Point, Curve, 
etc.) need to be pluggable?  Some specialized expression functions to 
make query specification smooth might also be in order.

I think it would be pretty nifty to have a GeoAlchemy kind of 
extension or add-on that really rocks for GIS work.  I haven't had cause 
to use geometry in database work and so my GIS experience is pretty 
limited, but I would be more than delighted to lend a hand if folks with 
working domain knowledge are also game.

-j





[sqlalchemy] Re: postgres POLYGON data type

2007-09-17 Thread sdobrev

I don't have recent Python GIS experience, but in the past it's been 
tuple-likes and numpy arrays. The best option would be to have some 
default data-representation constructor for each SA-GIS type, and 
allow overriding that.
E.g. Point holds its data by default in a tuple (Point.DataHolder = tuple), 
but I can override Point.DataHolder = MySpecialPointType and I'll 
get those instead. There are numerous aspects to weigh before 
choosing a representation.
Even some of the semantics can vary (e.g. is a closed polygon represented 
as p1..pn plus closed=true, or as p1..pn,p1 again), so have a clear, 
unambiguous creation protocol and leave the actual representation to the 
programmer.
Otherwise you're stuck with something half the people won't like.
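
A minimal, non-SQLAlchemy sketch of that "default but overridable 
representation" idea (Point, DataHolder and MyPoint are made-up names):

class Point(object):
    DataHolder = tuple              # default representation; override per project

    @classmethod
    def from_db(cls, x, y):
        return cls.DataHolder((x, y))

print Point.from_db(1.0, 2.0)       # (1.0, 2.0) -- plain tuple by default

class MyPoint(tuple):
    @property
    def x(self):
        return self[0]
    @property
    def y(self):
        return self[1]

Point.DataHolder = MyPoint          # project-specific override
print Point.from_db(1.0, 2.0).x     # 1.0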

my 2baht
svilen

On Monday 17 September 2007 21:59:00 jason kirtland wrote:
 sc0ttbeardsley wrote:
  On Sep 17, 5:03 am, [EMAIL PROTECTED] wrote:
  Has anyone added support for the POLYGON data type in
  PostgreSQL? If so, is there any code that can be shared?
 
  I haven't seen POLYGON but the beginnings of such support is over
  at bycycle.org[1]. I'm interested in something similar for
  MySQL's spatial extensions. Has anyone done MySQL yet?

 Just before the first 0.4 beta I started looking at adding support
 for the OGC types for databases that support them.  The basic
 column plumbing in SQLAlchemy is simple- an hour's work, with unit
 tests- but what's less obvious is what to map those column types
 *to*.  They could just be tuples or the like, but that didn't feel
 as useful as one might want.

 I've been idly monitoring the progress summary for the GeoDjango
 project to see if their efforts scare up any clear winners for
 python-side representation of the geometry types.  They seem to be
 mapping to ctypes-fronted native classes, which feels like a good
 direction to me.

 What do all of you GIS folks think about the python mappings?  Is
 there a clear approach, or would the type implementation (e.g.
 Point, Curve, etc.) need to be pluggable?  Some specialized
 expression functions to make query specification smooth might also
 be in order.

 I think it would be pretty nifty to have a GeoAlchemy kind of
 extension or add-on that really rocks for GIS work.  I haven't had
 cause to use geometry in database work and so my GIS experience is
 pretty limited, but I would be more than delighted to lend a hand
 if folks with working domain knowledge are also game.

 -j


 





[sqlalchemy] Re: MSSQL connection url format?

2007-09-17 Thread sdobrev

see some notes at
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

also check dbcook.usage.sa_engine_defs.py at 
(svn co)
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/
for some create/drop stuff, for both pyodbc/pymssql.

 3. I'll be piggy backing on an existing ERP system and I'm trying
 to decide what would be the best way to store new tables
 - in the DB used by the ERP system
 - in a new DB on the same MSSQL server
 - in a SQLite DB

A SQLite db will be cheapest and fastest (in terms of rapid 
development), but it might not support all the magic you want, e.g. 
sequences etc. I'd say go initially with SQLite, don't rely on specific 
MSSQL features, and regularly try things on MSSQL to avoid surprises. 
Then, once you are 70+% there, specialize if need be.

 How easy does SA make it to use data from multiple DBs?
Transparent enough for an anti-SQLer like me.
see copyall.py at
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/metadata/
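
A rough sketch of one simple multi-database arrangement (engine URLs and 
table names are hypothetical): each MetaData gets its own bind, and its 
tables reflect/create against their own database.

from sqlalchemy import MetaData, Table, Column, Integer, String, create_engine

erp_engine = create_engine('mssql://user:password@erphost/erpdb')   # needs a working MSSQL DBAPI
local_engine = create_engine('sqlite:///local.db')

erp_meta = MetaData()
erp_meta.bind = erp_engine
local_meta = MetaData()
local_meta.bind = local_engine

erp_orders = Table('orders', erp_meta, autoload=True)   # reflect an existing ERP table
order_notes = Table('order_notes', local_meta,          # new table lives in SQLite
    Column('id', Integer, primary_key=True),
    Column('order_id', Integer),
    Column('note', String(200)))
local_meta.create_all()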

have fun
svil




[sqlalchemy] MSSQL connection url format?

2007-09-17 Thread Scott Jaderholm

Hi Alchemists,

I am starting a project in Pylons that will need to access a MSSQL
database. I plan to develop on Windows and deploy on Linux. It sounds
like pyodbc is the best option and that it is reasonably well
supported. I've done a lot of Google searching but I haven't been able
to find answers to the following issues.

1. What is the correct format for the connection url?
My guess was the following:
engine = create_engine('mssql://username:[EMAIL PROTECTED]:port/database',
                       echo=True, module=pyodbc)
but engine.connect() fails.

Should the hostname be the IP of the server or SQL Server's
host\instance combination?

Isn't the port normally 1433?

2. I saw an archived email that said there was better support for
MSSQL in 0.3 than in 0.4. Is that still the case?

3. I'll be piggy backing on an existing ERP system and I'm trying to
decide what would be the best way to store new tables
- in the DB used by the ERP system
- in a new DB on the same MSSQL server
- in a SQLite DB

How easy does SA make it to use data from multiple DBs?

Thank you very much for your help and work on SQLAlchemy,
Scott




[sqlalchemy] Re: Why is explicit 'and_' required for filter but not filter_by?

2007-09-17 Thread Michael Bayer


On Sep 14, 2007, at 3:52 PM, Rick Morrison wrote:

 I think it might be more historical than anything else. What is now
 filter() used to be a single argument to the select() call on the
 SQL-API side, and it couldn't take any additional arguments, as the
 select() call was already pretty heavy with keyword arguments and it
 was easy to get things mixed up. Also, the historical SelectResults()
 extension, on which this whole API is modeled, only took a single
 expression in its filter() method.

 Now that filter() is its own construct, I can't see any reason why
 it couldn't take a list of expressions that are assumed to be
 AND'ed. Well, any good reason except that it's also pretty easy --
 and arguably more readable -- to just chain multiple .filter()
 calls using the generative style:

 .filter(and-expr-1).filter(and-expr-2)

I've raised the notion of allowing filter() to take *criterion, but 
there's been some debate about this.
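
For readers following along, the two spellings under discussion look like 
this (User is a hypothetical mapped class; 0.4-style class-level attribute 
expressions assumed):

from sqlalchemy import and_

# explicit and_()
q = session.query(User).filter(and_(User.name == 'ed', User.age > 20))

# chained, generative filter() calls -- implicitly AND'ed
q = session.query(User).filter(User.name == 'ed').filter(User.age > 20)

# filter_by() already ANDs its keyword arguments
q = session.query(User).filter_by(name='ed', age=20)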





[sqlalchemy] Re: Why is explicit 'and_' required for filter but not filter_by?

2007-09-17 Thread Michael Bayer


On Sep 15, 2007, at 5:10 PM, Ryan wrote:


 And whatever happened to the nice select_by Query method where I could
 mix filter-type criterion arguments and filter_by-type keyword
 arguments in one call.  *That* was readable to me, and very usable in
 a pythonic way.  What's wrong with having one filter method that does
 everything?


We do still have some of the *args/**kwargs thing going on with the 
newer ORM clause constructs like has() and any(), since that's a lot 
easier than adding has()/has_by(), any()/any_by(), etc.
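
Something like this (User/Address and their relations are hypothetical; 
any() and has() accept both criterion expressions and keyword arguments):

session.query(User).filter(User.addresses.any(email_address='foo@bar.com'))
session.query(User).filter(User.addresses.any(Address.email_address.like('%bar%')))
session.query(Address).filter(Address.user.has(name='ed'))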

But generally, with the generative API each method just does one 
small thing; this is mostly for the purposes of clear documentation. 
Flags, switches, and especially *args tend to be confusing to people, 
and it's easier to understand different behaviors when they're grouped 
into distinct methods.  It's not impossible that filter() will again 
expand into allowing filter(*whereclauses, **kwargclauses)...but I 
don't want to rush into that.  People especially get confused writing 
things like filter(somekeyword==5) and then getting a no such name 
'somekeyword' Python error, for example (i.e. ClauseElements and 
keyword args start to get blurred together).

Before we had the generative style, having distinct methods like this 
was not feasible...we had select(), select_by(), get_by(), 
selectfirst_by(), selectone_by(), etc...it was a disaster.






[sqlalchemy] Re: MSSQL connection url format?

2007-09-17 Thread Rick Morrison
Hi Scott,

 ...to develop on Windows and deploy on Linux. It sounds
 like pyodbc is the best option

pyodbc works well in Windows, but I've heard on Linux, not so much. pymssql
is your best bet for Linux. Note that pymssql does not work with unicode,
limits SQL identifiers to 30 characters, and will only support up to 255
chars in VARCHAR columns. We could use some help getting pyodbc on Linux in
good shape; I'm far too busy these days to assist much.
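
As a rough sketch only (the exact host/instance, port and keyword handling 
depend on the driver and on the MSSQL dialect in your SQLAlchemy version, so 
treat every detail below as an assumption to verify):

import pymssql
from sqlalchemy import create_engine

# pymssql on Linux; 'dbserver' may need to be 'host:port' or a 'host\instance' name
engine = create_engine('mssql://username:password@dbserver/erpdb',
                       module=pymssql, echo=True)
engine.connect()      # fails immediately if the URL or driver settings are wrong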

 I saw an archived email that said there was better support for
 MSSQL in 0.3 than in 0.4. Is that still the case?

Yeah, sadly MSSQL is pretty lightly tested on 0.4 right now. AFAIK, it's not
part of the regular test cycle.

 I'll be piggy backing on an existing ERP system and I'm trying to
 decide what would be the best way to store new tables

Well if you want to join against those other tables, and I'm guessing you
do, you'd be far better off using the same DB engine as the current system.
Doesn't have to be the same database, MSSQL works fine in cross-database
joins.

SQLite is a great little database engine, but beware trying to make it scale
up for highly concurrent situations. It's fast as hell as long as there are
only readers, or only writers, but mix the two and it will fall over pretty
fast. We use it here very successfully as a static database repository,
for geocode caches, user settings, archive data, that kind of stuff. If
you're going to have lots of simultaneous users and hitting the tables hard,
I would go with something designed for concurrent access.

HTH,
Rick
