[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread Sébastien LELONG

 Hey list, are you confused by the current system? Please let me know.
 The only change I would favor here would be to merge connect() into
 MetaData; BoundMetaData and DynamicMetaData stay around for backwards
 compat for probably forever.

Not actually confused by the system. Since I need to define the connection 
parameters in a separate file (a config file), I *need* the DND.connect() 
method, so DND is the only one I use... The global_connect() is not useful for 
me since I need multiple datasources. Again, being able to define connection 
parameters later is crucial, so having a MetaData.connect() would be great.

but I need to get at least 20 or 30 users on this list telling me how  
 they are using metadata.

A typical initialization of my models looks like:

_metadata = DynamicMetaData("I'll be bound later...")
one_table = Table('blabla', _metadata, ...)

_engine = None
def init_model(dburi, **kwargs):
    global _engine
    if _engine:
        # engine already defined; check if the metadata is bound
        if _metadata.is_bound():
            # nothing to do
            pass
        else:
            # need to connect the metadata;
            # occurs when starting a new thread
            _metadata.connect(_engine)
    else:
        # first time we access the model: init everything
        _engine = sa.create_engine(dburi, **kwargs)
        _metadata.connect(_engine)



BTW, I'd posted on the TG list to ask how they handle multiple datasources 
with SA. The answer was "for now, only one datasource [probably due to 
global_connect]; we'll see next release". So maybe global_connect will not 
even be used anymore by TG...


Cheers


Seb
-- 
Sébastien LELONG
sebastien.lelong[at]sirloon.net
http://www.sirloon.net


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread Gaetan de Menten

 I think it's more
 confusing for the API to break backwards-compatibility every 5 or 6
 releases. Also I think adding a whole new class DelayedMetaData,
 which is literally just to avoid passing a flag... well, I won't say
 insane, but it's a little obsessive.

Agreed here.

 Hey list, are you confused by the current system ?  Please let me know.

Count me as one of the confused users. I must admit I only read
through the docs quickly (but don't we all?) and missed the mention
of the thread-local nature of DynamicMetaData, which I thought only
provided a way to delay connecting it to an engine, which was the
feature I needed.
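The distinction being discussed can be sketched in plain Python: a thread-local binding means each thread sees its own engine, so a thread that never called connect() stays unbound. This is an illustrative stdlib-only sketch (class and attribute names are made up, not SQLAlchemy's implementation):

```python
import threading

class ThreadLocalMetaDataSketch:
    """Illustrative sketch: each thread sees its own engine binding."""
    def __init__(self):
        self._local = threading.local()

    def connect(self, engine):
        # the binding is stored per-thread; other threads stay unbound
        self._local.engine = engine

    def is_bound(self):
        return getattr(self._local, "engine", None) is not None

meta = ThreadLocalMetaDataSketch()
meta.connect("main-engine")
print(meta.is_bound())  # True in this thread

seen = []
t = threading.Thread(target=lambda: seen.append(meta.is_bound()))
t.start(); t.join()
print(seen[0])  # False: the new thread never called connect()
```

This is why, with a thread-local metadata, connect() has to be repeated in every new thread.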

 the only change I would favor here would be to merge connect into
 MetaData, BoundMetaData and DynamicMetaData stay around for backwards
 compat for probably forever, and perhaps we add another method called
 connect_threadlocal() or something like that for people who want
 that behavior.  i would like to have just one object that does the
 whole thing, now that some of the early FUD has subsided.

+1 for the connect method on MetaData

 but I need to get at least 20 or 30 users on this list telling me how
 they are using metadata.



-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] explicit primary key in many-to-many relation

2007-03-14 Thread Glauco

In my many-to-many relation I have an ambiguous primary key, so SA tells me:

sqlalchemy.exceptions.ArgumentError: Error determining primary and/or 
secondary join for relationship 'xxx' between mappers 'Mapper|Yyy|yy' 
and 'Mapper|Xxx|xxx'.  If the underlying error cannot be corrected, you 
should specify the 'primaryjoin' (and 'secondaryjoin', if there is an 
association table present) keyword arguments to the relation() function 
(or for backrefs, by specifying the backref using the backref() function 
with keyword arguments) to explicitly specify the join conditions.  
Nested error is "Can't determine join between 'xxx' and 'yyy'; tables 
have more than one foreign key constraint relationship between them.  
Please specify the 'onclause' of this join explicitly."


onclause is not among relation()'s options, so what's the correct way
to specify it explicitly?
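The ambiguity the error describes: when two tables are linked by more than one foreign key, the join condition has to be chosen explicitly. A stdlib sqlite3 illustration in plain SQL terms (the tables and column names here are made up, not Glauco's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yyy (id INTEGER PRIMARY KEY);
    -- two foreign keys to the same table: the join is ambiguous
    CREATE TABLE xxx (
        id INTEGER PRIMARY KEY,
        owner_id INTEGER REFERENCES yyy(id),
        editor_id INTEGER REFERENCES yyy(id));
    INSERT INTO yyy VALUES (1), (2);
    INSERT INTO xxx VALUES (10, 1, 2);
""")

# the explicit ON clause plays the role of relation()'s join condition:
# we pick owner_id, not editor_id
rows = conn.execute("""
    SELECT xxx.id, yyy.id FROM xxx JOIN yyy ON xxx.owner_id = yyy.id
""").fetchall()
print(rows)  # [(10, 1)]
```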



Thank you
Glauco

-- 
++
  Glauco Uri - Programmatore
glauco(at)allevatori.com 
   
  Sfera Carta Software®  [EMAIL PROTECTED]
  Via Bazzanese,69  Casalecchio di Reno(BO) - Tel. 051591054 
++






[sqlalchemy] Re: explicit primary key in many-to-many relation

2007-03-14 Thread Glauco

Glauco ha scritto:
 In my many-to-many relation I have an ambiguous primary key, so SA tells me:

 sqlalchemy.exceptions.ArgumentError: Error determining primary and/or 
 secondary join for relationship 'xxx' between mappers 'Mapper|Yyy|yy' 
 and 'Mapper|Xxx|xxx'.  If the underlying error cannot be corrected, you 
 should specify the 'primaryjoin' (and 'secondaryjoin', if there is an 
 association table present) keyword arguments to the relation() function 
 (or for backrefs, by specifying the backref using the backref() function 
 with keyword arguments) to explicitly specify the join conditions.  
 Nested error is "Can't determine join between 'xxx' and 'yyy'; tables 
 have more than one foreign key constraint relationship between them.  
 Please specify the 'onclause' of this join explicitly."


 onclause is not among relation()'s options, so what's the correct way
 to specify it explicitly?



 Thank you
 Glauco

   
Found it... primaryjoin.

Sorry for the noise.
Glauco


-- 
++
  Glauco Uri - Programmatore
glauco(at)allevatori.com 
   
  Sfera Carta Software®  [EMAIL PROTECTED]
  Via Bazzanese,69  Casalecchio di Reno(BO) - Tel. 051591054 
++






[sqlalchemy] connection pool

2007-03-14 Thread vkuznet

Hi,
I just came across the documentation and it's not clear to me how to use
connection pooling.
When invoking
db = create_engine()
the pool parameter is set to None by default, right?

In the "Connection pooling" section of the docs, it says:
"For most cases, explicit access to the pool module is not required"

That's why I'm confused: is pooling turned on by default or
not? If it is not, is the following example the right default
approach to use:

def getconn():
    return MySQLdb.connect(user='ed', dbname='mydb')

engine = create_engine('mysql://',
                       pool=pool.QueuePool(getconn, pool_size=20, max_overflow=40))

con = engine.connect()

In this case, when I invoke con = engine.connect() multiple times, will
the connection be taken from the pool? If a connection times out, does
the pool guarantee to make a new one?

Thanks,
Valentin.
P.S. Even though the documentation is very well written and very
comprehensive, it provides so many options and examples that it's not
clear what an average user should use for the most common cases.





[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread JP


 can I have some feedback from the list who is using the thread-local
 capability of DynamicMetaData, and/or are using global_connect() ?
 (another feature i didnt like but it was me bowing to early pressure
 from TG users).

Probably this will be no surprise, since I contributed most of the
original ProxyEngine. ;)

I use DynamicMetaData exactly as you described the main pylons/TG use
case: I define tables at the module level, using a DynamicMetaData
instance, and then call meta.connect(uri) at the start of each
request.  For me, this is the most sensible way to handle things in a
WSGI-friendly web app. Minimal overhead, maximal flexibility.
meta.connect(uri, threadlocal=True) or meta.connect_threadlocal(uri)
would be ok, but I think worse as an API since they put too much
responsibility on the caller and cause too much repetition.

JP





[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread svilen


 On Mar 13, 11:59 am, Gaetan de Menten [EMAIL PROTECTED] wrote:
  I only discovered (or at least understood) this thread localness
  of DynamicMetaData, and honestly, I don't understand in what case
  it can be useful. It seems like the thread localness is limited
  to the engine connected to the metadata. So what I'd like to
  understand is when anyone wouldn't want to use a global engine?
  As long as the connections are thread-local, we are fine, right?

 Not me -- I have cases where the same schema is used with two
 different databases by two different apps (eg, admin uses a main
 postgres db and public uses a local sqlite cache), so the current
 API does what I need, but (if I understand what you're saying) only
 allowing the connect string of a global engine to vary per request,
 rather than allowing completely different engines, would not.

You said "different apps" - or different threads?
In any case, would it be better if all the metadata setup stayed in a
function taking the metadata as a parameter, so you can call it to set
up different pre-made metadatas?




[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread Gaetan de Menten

On 3/14/07, JP [EMAIL PROTECTED] wrote:

 On Mar 13, 11:59 am, Gaetan de Menten [EMAIL PROTECTED] wrote:
  I only discovered (or at least understood) this thread localness of
  DynamicMetaData, and honestly, I don't understand in what case it can
  be useful. It seems like the thread localness is limited to the engine
  connected to the metadata. So what I'd like to understand is when
  anyone wouldn't want to use a global engine? As long as the
  connections are thread-local, we are fine, right?

 Not me -- I have cases where the same schema is used with two
 different databases by two different apps (eg, admin uses a main
 postgres db and public uses a local sqlite cache), so the current API
 does what I need, but (if I understand what you're saying) only
 allowing the connect string of a global engine to vary per request,
 rather than allowing completely different engines, would not.

Yeah, I figured in the meantime that this case could cause problems.
The point is that this case is _in my opinion_ much less common
than the case where you need to delay the setup of your tables and use
the DynamicMetaData for that. So I think it would have been better to
have a DynamicMetaData with threadlocal=False by default. If that were
the case, people would not have to connect the metadata to the engine
in each and every thread. Now it's too late if we don't want to break
backward compatibility but simply adding a connect method to the
standard MetaData, as Michael proposed, would suit my needs just
fine.

Anyway, I've already rambled too much on this issue... Thanks for
having taken the time to answer my question.

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: connection pool

2007-03-14 Thread vkuznet

Hi,
it's not obvious; nothing is said in the docs about the default pool
setup, and the "Database options" section has:
"pool=None - an actual pool instance."
That's why I concluded that a pool is NOT set up by default. I just want
to confirm that.

Valentin.

On Mar 14, 1:34 pm, Sébastien LELONG [EMAIL PROTECTED]
securities.fr wrote:
  So that's why I'm confused. Does the pooling is turn on by default or
  not ?

 IIRC, a default pool is set according to the type of engine (eg.
 SingletonThreadPool for sqlite, QueuePool for MySQL, or the like...). So it's
 set by default, but you can of course override this with your own pool, as
 you described.

  In this case when I invoke multiple times con=engine.connect(), does
  the connection will be take from pool? If connection will timeout,
  does pool guarantee to make a new one?

 Use the pool_recycle parameter to prevent any timeout.

 Hope it helps.

 Cheers

 Seb
 --
 Sébastien LELONG
 sebastien.lelong[at]sirloon.net
 http://www.sirloon.net
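The pool_recycle idea mentioned above - discard and replace connections older than a threshold - can be sketched in plain Python. This is a stdlib-only illustration, not SQLAlchemy's actual pool code:

```python
import time

class RecyclingPoolSketch:
    """Illustrative sketch of connection recycling by age."""
    def __init__(self, creator, recycle_seconds):
        self.creator = creator
        self.recycle = recycle_seconds
        self._idle = []  # list of (connection, created_at)

    def get(self):
        while self._idle:
            conn, created = self._idle.pop()
            if time.time() - created < self.recycle:
                return conn, created
            # too old: drop it and fall through to make a fresh one
        return self.creator(), time.time()

    def put(self, conn, created):
        self._idle.append((conn, created))

pool = RecyclingPoolSketch(creator=object, recycle_seconds=3600)
c1 = pool.get()
pool.put(c1[0], time.time() - 7200)   # pretend it's two hours old
c2 = pool.get()
print(c2[0] is c1[0])  # False: the stale connection was recycled
```

So a recycled connection is never handed back to the caller once it exceeds the configured age; a new one is created instead.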





[sqlalchemy] Re: count function problem

2007-03-14 Thread Michael Bayer

On Mar 14, 2007, at 12:49 PM, Glauco wrote:

 This is perfect, but when I try to use the count function, the SQL
 composer tries to generate expensive SQL.


 In [63]: print select([tbl['azienda'].c.id],  tbl['azienda']).count()
 SELECT count(id) AS tbl_row_count
 FROM (SELECT azienda.id AS id
 FROM azienda)


what makes you think that query is expensive?  anyway, it's more succinct
to just say table.count().


 Another question:

 Does anyone know how to use this object ?

 select([tbl['azienda'].c.id],  tbl['azienda']).count()
 sqlalchemy.sql.Select object at 0xb6c803ac


select(...).count().execute()

or

engine.connect().execute(select(...).count())
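The generated SQL quoted above - a count over a derived table - can be tried directly against stdlib sqlite3 to see it is not especially expensive. The table name comes from the example; the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE azienda (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO azienda (id) VALUES (?)",
                 [(1,), (2,), (3,)])

# the same shape SQLAlchemy generated: count over a subquery
row = conn.execute("""
    SELECT count(id) AS tbl_row_count
    FROM (SELECT azienda.id AS id FROM azienda)
""").fetchone()
print(row[0])  # 3
```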








[sqlalchemy] Re: save_or_update() with a unique but not primary key

2007-03-14 Thread Ken Kuhlman
Perhaps he's looking for an upsert function?  That's sometimes handy, and to
be truly useful it would have to be able to use any given key on the table.

I hacked up an upsert for SQLObject once, but it was so ugly I never
contributed it.   It did make the poor man's replication system that I was
working on simpler, though.
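A "poor man's" upsert keyed on a unique (non-primary) column can be sketched with plain DBAPI calls - here against stdlib sqlite3; the table and columns are made up for illustration, and this is not what save_or_update() does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,
    name TEXT)""")

def upsert_by_email(conn, email, name):
    # try UPDATE on the unique key first; fall back to INSERT
    # when no row matched
    cur = conn.execute("UPDATE users SET name = ? WHERE email = ?",
                       (name, email))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO users (email, name) VALUES (?, ?)",
                     (email, name))
    conn.commit()

upsert_by_email(conn, "a@example.com", "Alice")
upsert_by_email(conn, "a@example.com", "Alicia")  # updates, no duplicate
count, name = conn.execute(
    "SELECT count(*), max(name) FROM users").fetchone()
print(count, name)  # 1 Alicia
```

Note this check-then-write pattern is not safe under concurrent writers without a transaction or unique-constraint handling; it is only meant to show the shape of the operation.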


On 3/13/07, Michael Bayer [EMAIL PROTECTED] wrote:



 save_or_update() doesn't take any kind of primary key or unique key
 argument.  no specification of anything is needed.

 Sean Davis wrote:
 
  On Tuesday 13 March 2007 07:35, Sean Davis wrote:
  We are creating a database that will have a set of autoincrement
 primary
  keys on the tables.  However, many of the tables also have one or more
  unique keys associated with them.  Can we use save_or_update() (and, by
  extension, cascade='save_or_update', etc.) by specifying one of the
  unique
  keys rather than specifying the primary key directly?
 
  Tried it.  Looks like not.  Sorry for the noise on the list.
 
  Sean
 
  
 


 





[sqlalchemy] Re: pyodbc and tables with triggers

2007-03-14 Thread polaar

On 12 mrt, 21:47, polaar [EMAIL PROTECTED] wrote:
 FYI, specifying module=pyodbc didn't seem to help wrt the
 ConcurrentModificationError. Didn't have very much time, had a (very)
 quick look at the code in mssql.py, and at first sight, it would seem
 that sane_rowcount is a global variable that is only set in the
 use_pyodbc() (resp. adodbapi/pymssql) function, which in turn is only
 called from use_default(); this would seem to mean only when you don't
 specify a module... Either I'm completely wrong (which is very well
 possible ;-), as I said, I only took a quick look, and I'm not
 familiar with the code), or this means that you may not have adodbapi
 (or pymssql) installed in order to use pyodbc correctly???


Update: it does indeed seem to work like that. I tried changing the order
of preference in mssql.py so that it first tries pyodbc, and that
seems to work: the ConcurrentModificationError no longer occurs. I now
also get a warning about using pyodbc that I didn't get before. (by
the way: I did have to keep the 'set nocount on'  in order to prevent
the invalid cursor state problem)

I guess something could be done with changing the following line from
the __init__ method of class MSSQLDialect:
self.module = module or dbmodule or use_default()
to something that calls use_pyodbc/use_pymssql/use_adodbapi based on
module.__name__? (I'm not sure though: use_default seems to be called
already when the mssql is imported and it sets the global dbmodule, so
I'm not confident that this is where it should be done*)
Something like this?

{'pyodbc': use_pyodbc, 'adodbapi': use_adodbapi, 'pyodbc':
use_pyodbc}.get(module.__name__, use_default)()

Steven

* can't test it at home (using linux), and as using python at work is
mostly 'under the radar', I can't spend a lot of time on it there, so
sorry if I can't provide you with a well-tested patch ;-)
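The dispatch idea proposed above (with the duplicate 'pyodbc' key corrected to 'pymssql', as the follow-up notes) boils down to looking up a setup function by the DBAPI module's name. A self-contained sketch with stub functions standing in for mssql.py's use_* helpers:

```python
# stub module-setup functions standing in for mssql.py's use_* helpers
calls = []
def use_pyodbc():   calls.append("pyodbc")
def use_adodbapi(): calls.append("adodbapi")
def use_pymssql():  calls.append("pymssql")
def use_default():  calls.append("default")

def setup_for(module_name):
    # dispatch on the DBAPI module's name, falling back to the default
    {"pyodbc": use_pyodbc,
     "adodbapi": use_adodbapi,
     "pymssql": use_pymssql}.get(module_name, use_default)()

setup_for("pyodbc")
setup_for("unknown")
print(calls)  # ['pyodbc', 'default']
```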





[sqlalchemy] Re: connection pool

2007-03-14 Thread Michael Bayer

if you read the entire document, it should be pretty clear that the  
engine does not function without a connection pool, so there has to  
be one present by default.

The Engine will create its first connection to the database when a  
SQL statement is executed. As concurrent statements are executed, the  
underlying connection pool will grow to a default size of five  
connections, and will allow a default overflow of ten. Since the  
Engine is essentially home base for the connection pool, it follows  
that you should keep a single Engine per database established within  
an application, rather than creating a new one for each connection.

poolclass=None - a sqlalchemy.pool.Pool subclass that will be  
instantiated in place of the default connection pool.

The close method on Connection does not actually remove the  
underlying connection to the database, but rather indicates that the  
underlying resources can be returned to the connection pool. When  
using the connect() method, the DBAPI connection referenced by the  
Connection object is not referenced anywhere else.
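The pool_size/overflow behavior quoted above can be sketched with the stdlib. This is a deliberate simplification of what a queue-based pool does (real QueuePool also handles blocking, timeouts, and thread safety):

```python
import queue

class QueuePoolSketch:
    """Illustrative sketch: pool_size idle slots plus max_overflow extras."""
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self.creator = creator
        self._idle = queue.Queue(maxsize=pool_size)
        # total live connections may reach pool_size + max_overflow
        self._max_total = pool_size + max_overflow
        self._total = 0

    def get(self):
        try:
            return self._idle.get_nowait()      # reuse an idle connection
        except queue.Empty:
            if self._total >= self._max_total:
                raise RuntimeError("pool exhausted")
            self._total += 1
            return self.creator()               # grow up to the limit

    def put(self, conn):
        try:
            self._idle.put_nowait(conn)         # keep up to pool_size idle
        except queue.Full:
            self._total -= 1                    # overflow conn is discarded

pool = QueuePoolSketch(creator=object, pool_size=2, max_overflow=1)
a, b, c = pool.get(), pool.get(), pool.get()   # 2 pooled + 1 overflow
print(pool._total)  # 3
```

A fourth concurrent get() would raise "pool exhausted", and returning a connection via put() makes it reusable - which matches the documented behavior that close() returns resources to the pool rather than closing the DBAPI connection.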


On Mar 14, 2007, at 2:26 PM, vkuznet wrote:


 Hi,
 it's not obvious, nothing said in a docs about default pool setup and
 The Database options section has:
 pool=None - an actual pool instance.
 that's why I conclude that pool is NOT setup by default. I just want
 to confirm that.

 Valentin.

 On Mar 14, 1:34 pm, Sébastien LELONG [EMAIL PROTECTED]
 securities.fr wrote:
 So that's why I'm confused. Does the pooling is turn on by  
 default or
 not ?

 IIRC, a default pool is set according to the type of engine (eg.
 SingletonThreadPool for sqlite, QueuePool for MySQL, or the  
 like...). So it's
 set by default, but you can of course override this with your own  
 pool, as
 you described.

 In this case when I invoke multiple times con=engine.connect(), does
 the connection will be take from pool? If connection will timeout,
 does pool guarantee to make a new one?

 Use pool_recycle parameter so prevent any timeout.

 Hope it helps.

 Cheers

 Seb
 --
 Sébastien LELONG
 sebastien.lelong[at]sirloon.nethttp://www.sirloon.net


 





[sqlalchemy] Re: case() behavior when else_=0

2007-03-14 Thread Rick Morrison
OK, guess not. Committed in rev 2411

On 3/4/07, Rick Morrison [EMAIL PROTECTED] wrote:

 This is a one-liner fix that I already have in my tree. It might be a good
 practice one to get your feet wet on, if you want to take a swing at it, Bret...

 Rick


 On 3/4/07, Bret Aarden [EMAIL PROTECTED] wrote:
 
 
  I discovered that case() doesn't produce an ELSE clause when its else_
  argument is set to else_=0. This is easily fixed by setting else_='0'
  instead, but I wonder if this is ideal behavior.
 
  -Bret.
 
 
   
 





[sqlalchemy] Re: don't automatically stringify compiled statements - patch to base.py

2007-03-14 Thread Michael Bayer

well at least make a full blown patch that doesn't break all the other  
DBs.  notice that an Engine doesn't just execute Compiled objects, it  
can execute straight strings as well.  that's why the dialect's  
do_execute() and do_executemany() take strings - they are assumed to  
go straight to a DBAPI representation.  to take the "stringness" out  
of Engine would be a large rework to not just Engine but all the  
dialects.

i'm surprised the execute_compiled() method works for you at all, as  
it's creating a cursor, calling result set metadata off the cursor,  
etc., all these DBAPI things which you aren't supporting.  it seems  
like it would be cleaner for you if you weren't even going through  
that implementation of it.

the theme here is that the base Engine is assuming a DBAPI  
underneath.  if you want an Engine that does not assume string  
statements and DBAPIs, it might be easier for you to just provide a  
subclass of Engine instead (or go even lower level, subclass  
sqlalchemy.sql.Executor).   either way you can change what  
create_engine() returns by using a new "strategy" to create_engine(),  
which is actually a pluggable API.  e.g.

from sqlalchemy.engine.strategies import DefaultEngineStrategy

class NDBAPIEngineStrategy(DefaultEngineStrategy):
    def __init__(self):
        DefaultEngineStrategy.__init__(self, 'ndbapi')

    def get_engine_cls(self):
        return NDBAPIEngine

# register the strategy
NDBAPIEngineStrategy()

now you connect via:

create_engine(url, strategy='ndbapi')

if you want to go one level lower, which i think you do because you  
don't really want pooling or any of that either, you don't even need to  
have connection pooling or anything like that...you can totally  
override what create_engine() does, have different connection  
parameters, whatever.  just subclass EngineStrategy directly:

class NDBAPIEngineStrategy(EngineStrategy):
    def __init__(self):
        EngineStrategy.__init__(self, 'ndbapi')

    def create(self, *args, **kwargs):
        # this is some arbitrary set of arguments
        return NDBAPIEngine(kwargs.get('connect_string'),
                            kwargs.get('some_other_argument'),
                            NDBAPIDialect(*args))  # etc., etc.

# register
NDBAPIEngineStrategy()

then you just say:

create_engine(connect_string='someconnectstring',
              some_other_argument='somethingelse', strategy='ndbapi')

i.e. whatever you want.  create_engine() just passes *args/**kwargs  
through to create() after pulling out the "strategy" keyword.

if you don't like having to send over "strategy" i can add a hook in  
there to look it up on the dialect, so it could be more like  
create_engine('ndbapi://whatever').  but anyway this method would  
mean we wouldn't have to rewrite half of Engine's internals.
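The pluggable-strategy mechanism described above reduces to a name-to-strategy registry consulted by create_engine(). A stripped-down, stdlib-only illustration of the pattern (names are invented; this is not SQLAlchemy's actual code):

```python
strategies = {}  # name -> strategy instance, filled at instantiation time

class EngineStrategySketch:
    def __init__(self, name):
        strategies[name] = self   # registering is a side effect of __init__
    def create(self, *args, **kwargs):
        raise NotImplementedError

class PlainStrategy(EngineStrategySketch):
    def create(self, *args, **kwargs):
        # stand-in for building a real Engine from the given arguments
        return ("plain-engine", kwargs.get("connect_string"))

PlainStrategy("plain")  # register

def create_engine_sketch(*args, **kwargs):
    # pull out the strategy keyword, pass everything else through
    name = kwargs.pop("strategy", "plain")
    return strategies[name].create(*args, **kwargs)

engine = create_engine_sketch(connect_string="somedb://", strategy="plain")
print(engine)  # ('plain-engine', 'somedb://')
```

The point of the pattern is that a new backend only needs to instantiate its strategy subclass once to become reachable by name.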


On Mar 14, 2007, at 7:28 PM, Monty Taylor wrote:


 Hi - I'd attach this as a patch file, but it's just too darned  
 small...

 I would _love_ it if we didn't automatically stringify compiled
 statements here in base.py, because there is no way to override this
 behavior in a dialect. I'm working on an NDBAPI dialect to support
 direct access to MySQL Cluster storage, and so I never actually have a
 string representation of the query. Most of the time this is fine, but
 to make it work, I had to have pre_exec do the actual execution,
 because by the time I got to do_execute, I didn't have my real object
 anymore.

 I know this would require some other code changes to actually get
 applied - namely, I'm sure there are other places now where the
 statement should be str()'d to make sense.

 Alternately, we could add a method to Compiled. (I know there is
 already get_str()) like get_final_query() that gets called in this
 context instead. Or even, although it makes my personal code less
 readable, just call compiled.get_str() here, which is the least
 invasive, but requires non-string queries to override a method called
 get_str() to achieve a purpose that is not a stringification.

 Other than this, so far I've actually got the darned thing inserting
 records, so it's going pretty well... other than a whole bunch of test
 code I put in to find out why it wasn't inserting when the problem was
 that I was checking the wrong table... *doh*

 Thanks!
 Monty

 === modified file 'lib/sqlalchemy/engine/base.py'
 --- lib/sqlalchemy/engine/base.py   2007-02-13 22:53:05 +
 +++ lib/sqlalchemy/engine/base.py   2007-03-14 23:17:40 +
 @@ -312,7 +312,7 @@
 return cursor
 context = self.__engine.dialect.create_execution_context()
 context.pre_exec(self.__engine, proxy, compiled, parameters)
 -proxy(str(compiled), parameters)
 +proxy(compiled, parameters)
 context.post_exec(self.__engine, proxy, compiled, parameters)
 rpargs = self.__engine.dialect.create_result_proxy_args 
 (self, cursor)
 return ResultProxy(self.__engine, self, cursor, context,
 typemap=compiled.typemap, columns=compiled.columns, **rpargs)
 @@ -342,7 +342,7 @@
   

[sqlalchemy] Re: pyodbc and tables with triggers

2007-03-14 Thread Rick Morrison
It's the second case, that is, it sniffs out which modules are installed.
As I said before, this (along with other modules that effectively do the
same thing) is up for a clean-up soon; see ticket #480.

Rick

On 3/14/07, polaar [EMAIL PROTECTED] wrote:


  {'pyodbc': use_pyodbc, 'adodbapi': use_adodbapi, 'pyodbc':
  use_pyodbc}.get(module.__name__, use_default)()

 Sorry, should be pymssql instead of pyodbc twice, but I guess you got
 that...


 





[sqlalchemy] Re: Quoting column names on table create?

2007-03-14 Thread sdobrev

Because:
 a) the SQL standard says names are caseless - Fun and fUn are the same thing
 b) most SQL databases allow mixed case but require quoting for it, and some 
are _very_ picky about it (postgres)
 c) readability - lowercase names stand out well against uppercase reserved 
words
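The rule being described - leave a plain lowercase name bare, quote anything mixed-case or reserved so its exact spelling survives - can be sketched like this (the reserved-word set is truncated for illustration, and this is not SA's actual quoting code):

```python
# tiny stand-in for a dialect's reserved-word list (illustrative only)
RESERVED = {"select", "from", "where", "key", "table"}

def quote_identifier(name):
    # lowercase, non-reserved, well-formed names can go bare; anything
    # mixed-case or reserved must be quoted to preserve its exact spelling
    if name.lower() == name and name not in RESERVED and name.isidentifier():
        return name
    return '"%s"' % name

print(quote_identifier("path"))      # path
print(quote_identifier("fileHash"))  # "fileHash"
print(quote_identifier("key"))       # "key"
```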

On Thursday 15 March 2007 06:26:03 Karlo Lozovina wrote:
 Hi guys,

 I was just wondering, why does SA quote column names that have
 mixed case in them, and leave them unquoted for lowercase column
 names? Here is what the echo looks like:

 CREATE TABLE songs (
     key INTEGER NOT NULL,
     path TEXT,
     name TEXT,
     price INTEGER,
     "fileHash" TEXT,
     PRIMARY KEY (key)
 )

 And yes, I know it's a stupid and irrelevant question, but I was
 just wondering why it does it like this? Btw, I'm using SQLite in
 this example.

 Thanks, and once more, sorry for the stupid question :)
 Klm.

