[sqlalchemy] eager loading and aliasization

2006-12-27 Thread Manlio Perillo


Hi.

I want to disable aliasization (the automatic aliasing) in eager-load query generation.
Is this possible?


Thanks  Manlio Perillo

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] sess.query and correlated subqueries

2006-12-27 Thread Manlio Perillo


Hi.

I have noticed that subqueries in sess.query do not get correlated.
Is this a feature or a bug?



Thanks  Manlio Perillo





[sqlalchemy] pooled connections with failover database

2006-12-27 Thread [EMAIL PROTECTED]


Hi,
I'm using straight kinterbasdb connections to firebird in a
multithreaded SocketServer, and currently I open a new connection on
every request. Searching for a connection pooling solution, I found that
sqlalchemy's pool.py could be great for my needs.

I wish to modify my application so it can reuse a pool of firebird
connections, but it must also be possible to connect automatically to a
second failover database if the main db is unreachable.

Passing a function to a custom pool such as:

def getconn():
    try:
        conn = kinterbasdb.connect(dsn=dsn1, ...)
    except kinterbasdb.OperationalError:
        conn = kinterbasdb.connect(dsn=dsn2, ...)
    return conn

I can switch to a secondary database if the main db is not working, but
this works only for new connections; existing connections in the pool
will not be automatically restored.

Am I missing something? Can this be accomplished with sqlalchemy?
DBUtils from webware can catch errors on calls to dbmodule.cursor() and
try to reconnect, but it lacks the custom pool construction of sqlalchemy.
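For reference, the fallback logic in a creator function like the one above can be exercised without a real database. This is a minimal sketch with stand-in connect functions; primary_connect / fallback_connect are hypothetical placeholders for the two kinterbasdb.connect() calls, and such a creator could be handed to a connection pool to build new connections:

```python
# Minimal sketch of the failover-creator pattern.  primary_connect and
# fallback_connect stand in for kinterbasdb.connect(dsn=dsn1, ...) and
# kinterbasdb.connect(dsn=dsn2, ...).

class OperationalError(Exception):
    """Stand-in for kinterbasdb.OperationalError."""

def primary_connect():
    # simulate the main database being unreachable
    raise OperationalError("primary unreachable")

def fallback_connect():
    # simulate a working secondary database
    return "connection-to-secondary"

def getconn():
    """Creator function: try the main dsn, fall back to the second one."""
    try:
        return primary_connect()
    except OperationalError:
        return fallback_connect()

conn = getconn()
print(conn)  # falls back to the secondary
```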

Ezio





[sqlalchemy] Re: sess.query and correlated subqueries

2006-12-27 Thread Michael Bayer


need an example.





[sqlalchemy] Re: pooled connections with failover database

2006-12-27 Thread Michael Bayer


in theory the connections to the original database should become
invalidated when the cursor() function on the connection throws an
error.  an invalidated connection will then try to connect again on the
next reuse which will call your function.

however in practice ive noticed that cursor() is not where most DBAPIs
throw the error; they throw it upon the execute() call.  if this is the
case with firebird, then for firebird to automatically detect dropped
connections the do_execute() method in the FB dialect would need to be
implemented, which catches the specific exception raised corresponding
to a non-connecting database and invalidates the connection.   see the
mysql.py module for an example.
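The invalidate-on-execute idea can be sketched generically; this is not SQLAlchemy's actual dialect API, and every name below is illustrative:

```python
# Sketch: a connection wrapper whose execute() catches a "database gone"
# error and marks the connection invalid, so the next checkout rebuilds
# the connection via the creator function.  All names are illustrative.

class DisconnectError(Exception):
    """Stand-in for the DBAPI's dropped-connection exception."""

class PooledConnection:
    def __init__(self, creator):
        self._creator = creator
        self.raw = creator()
        self.invalidated = False

    def execute(self, operation):
        try:
            return operation(self.raw)
        except DisconnectError:
            self.invalidated = True   # analogous to invalidating in do_execute()
            raise

    def checkout(self):
        # an invalidated connection reconnects on the next reuse
        if self.invalidated:
            self.raw = self._creator()
            self.invalidated = False
        return self.raw

# demo: first execute() hits a dropped connection, next checkout reconnects
made = []
def creator():
    made.append(1)
    return "conn%d" % len(made)

def failing_op(raw):
    raise DisconnectError("server went away")

c = PooledConnection(creator)
try:
    c.execute(failing_op)
except DisconnectError:
    pass
print(c.checkout())  # a fresh connection, built by creator()
```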





[sqlalchemy] Performance of data mapping and plain access

2006-12-27 Thread david mugnai


Hi all,
while trying the ORM side of sqlalchemy, I noticed a remarkable speed
difference between using the ORM machinery and plain db access.

The attached test consists of 1000 inserts and 1000 selects; its
purpose is just to test the overhead of the data mapping system.

The ORM version takes about 160% more time than the plain version in
user-land time, and 60% more in wall clock time:

ORM
Starting test
Total time: 77.8519508839

real    1m18.526s
user    0m22.277s
sys     0m0.392s


PLAIN
Starting test
Total time: 47.6633858681

real    0m48.319s
user    0m8.605s
sys     0m0.216s


I'm using the trunk version of SA and PostgreSQL 8.1.4 (Ubuntu Edgy
package).

Is this difference to be expected? What can be done to reduce it, if
anything?

I attach three files:
- userhandling1.py (the plain version)
- userhandling2.py (the ORM version)
- test-userhandling.py (the test program)

Thanks in advance.

david





[sqlalchemy] Re: Performance of data mapping and plain access

2006-12-27 Thread Michael Bayer


the ORM is going to be slower in all cases since there is the overhead
of creating new object instances and populating them, as well as
initializing their attribute instrumentation and also a copy of their
attributes for the purposes of tracking changes when you issue a
flush() statement.  this strategy was copied from that of hibernate's,
and provides the greatest stability and predictability with regards to
tracking history changes.

we do have some performance tests in the test/ directory which ive used
in the past for profiling and improving these operations, and they are
actually a lot better than they've been in the past.  if your tests are
useful, I might add them as well (but note that your attachments didnt
come through, so try again).

one thing that could make ORM loads much faster would be if you knew
the objects would not need to be flushed() at a later point, and you
disabled history tracking on those instances.  this would prevent the
need to create a copy of the object's attributes at load time.

while theres no option available for that right now its something that
could be added.  that would probably give you a 20-40% speed boost at
least if not more.





[sqlalchemy] Cannot Drop my Table!

2006-12-27 Thread vbgunz


Here are the usual suspects and versions:

1. Ubuntu Edgy 6.10 (Linux Distribution)
2. Python 2.5 Final
3. PostgreSQL 8.1.4
4. SQLAlchemy 0.2.8-1 or 0.3.3
5. python-psycopg2 version 2.0.5.1-1


I am having a serious lockup problem. The summary is: I am following the
tutorial to the letter. When it comes to Selecting, I keep
experiencing a lock-up upon trying to drop the table at the end of my
script. The lock-up on my machine is 100% reproducible in
examples 01 and 02.


# Following the SQLAlchemy tutorial to the letter (-fetchall())!
# -- 01.
s = users_table.select()   # create SELECT ClauseElement object
r = s.execute()# necessary evil to make executable?
print r.fetchone() # successfully fetch the first row
users_table.drop(checkfirst=True)  # table never gets dropped!!!


# I've narrowed it down to s.execute()!
# -- 02.
s = users_table.select()   # create SELECT ClauseElement object
r = s.execute()# necessary evil to make executable?
users_table.drop(checkfirst=True)  # table never gets dropped!!!


# Modifying the above into the following doesn't cause a freeze...
# -- 03.
s = users_table.select()   # create SELECT ClauseElement object
print s.execute().fetchone()   # successfully fetch the first row
users_table.drop(checkfirst=True)  # table drops just fine!


I am not sure if this can shed some light on the subject *but* if I
take example 01 and add a fetchall() method after fetchone() (e.g.
04.), the table will drop in the end. The table dropping also works if
I replace the fetchone() method with fetchall() (e.g. 05.). It also
works if I emulate fetchone() with fetchall()[0] (e.g. 06.).


# Following the SQLAlchemy tutorial to the letter (+fetchall())!
# -- 04.
s = users_table.select()   # create SELECT ClauseElement object
r = s.execute()# necessary evil to make executable?
print r.fetchone() # successfully fetch the first row
print r.fetchall() # successfully fetch all rows
users_table.drop(checkfirst=True)  # table drops just fine!


# Following the SQLAlchemy tutorial to the letter (-fetchone())!
# -- 05.
s = users_table.select()   # create SELECT ClauseElement object
r = s.execute()# necessary evil to make executable?
print r.fetchall() # successfully fetch all rows
users_table.drop(checkfirst=True)  # table drops just fine!


# imitating fetchone() with fetchall()[0] works too!
# -- 06.
s = users_table.select()   # create SELECT ClauseElement object
r = s.execute()# necessary evil to make executable?
print r.fetchall()[0]  # successfully fetch all rows
users_table.drop(checkfirst=True)  # table drops just fine!


In the end, the problem is with examples 01 and 02. At the bare
minimum, 02, the smallest example, is where I feel the problem lies.
Maybe I am just bugging out or doing something wrong, but examples 01
and 02 will never let me drop the table in the end. Could someone
please advise on what the problem is? I would really appreciate the
help here. Thank you!





[sqlalchemy] many-to-many relationship select question

2006-12-27 Thread 卓强 Zhuo, Qiang


Hi All,

Happy X'mas :)

I'm implementing a tag schema using sqlalchemy, following the official
document's way:

articles_table = Table('articles', metadata,
    Column('article_id', Integer, primary_key=True),
    Column('headline', String(150), key='headline'),
    Column('body', TEXT, key='body'),
)

keywords_table = Table('keywords', metadata,
    Column('keyword_id', Integer, primary_key=True),
    Column('keyword_name', String(50))
)

itemkeywords_table = Table('article_keywords', metadata,
    Column('article_id', Integer, ForeignKey('articles.article_id')),
    Column('keyword_id', Integer, ForeignKey('keywords.keyword_id'))
)

# class definitions
class Keyword(object):
    def __init__(self, name):
        self.keyword_name = name

class Article(object):
    pass

mapper(Keyword, keywords_table)

# define a mapper that does many-to-many on the 'itemkeywords'
# association table
mapper(Article, articles_table, properties=dict(
    keywords=relation(Keyword, secondary=itemkeywords_table, lazy=False)
))

It's just that I wanted to minimize duplication, so I removed the
table-name prefix from the column names; that is, instead of keyword_name
I chose name (since it's in the keyword table, it must be the keyword's
name). Here's the problem: if I try to get all the articles which have a
specific keyword, the doc says:

# select articles based on a keyword.  select_by will handle the extra joins.
articles = session.query(Article).select_by(keyword_name='politics')


but since I renamed keyword_name to name, how can I do this query
now? Or is there no easy way to do it, so I should duplicate the table
name as a prefix in the column name (which is ugly IMHO ;))?

Thanks
Qiang





[sqlalchemy] Re: Cannot Drop my Table!

2006-12-27 Thread Michael Bayer


as mentioned on IRC, you have to lose your reference to the ResultProxy
first (or close() it) so that the underlying connection is returned to
the connection pool, where it has a ROLLBACK issued on it.  this will
release the locks so that other connections can drop the table.  in the
case of fetchall(), when all results are consumed the ResultProxy
automatically performs this operation, in the case of fetchone() it
does not (more rows may be pending).

alternatively you can also use an explicit connection, so that there
are no locking issues (provided no concurrent connections getting
involved elsewhere):

conn = engine.connect()
r = conn.execute(users_table.select())
print r.fetchone()
users_table.drop(connectable=conn)
conn.close()





[sqlalchemy] Re: many to many on another many to many

2006-12-27 Thread dischdennis


the tables all have foreign key relationships, but the buggy software I
use (DB Designer) didn't include them in the printout.

I will set up a complete testcase now that tries to do what I want to
do. But do you see any problems in general with relating one
association object to another?

Dennis





[sqlalchemy] Re: Performance of data mapping and plain access

2006-12-27 Thread Lele Gaifax


Michael Bayer wrote:


the ORM is going to be slower in all cases since there is the overhead
of creating new object instances and populating them, as well as
initializing their attribute instrumentation and also a copy of their
attributes for the purposes of tracking changes when you issue a
flush() statement.


Couldn't an approach like dbrow (http://opensource.theopalgroup.com/)
bring some benefit in this area? It surely overlaps here and there with
SA, but especially thanks to the C impl (there's also a pure Python one)
it's pretty fast at delivering lots of instances of the same class.


ciao, lele.




[sqlalchemy] Re: sess.query and correlated subqueries

2006-12-27 Thread Manlio Perillo


Michael Bayer wrote:


need an example.



from sqlalchemy import *


db = create_engine('postgres://manlio:[EMAIL PROTECTED]/test')


metadata = BoundMetaData(db)
programmers = Table(
    'programmers', metadata,
    Column('name', String, primary_key=True)
)

languages = Table(
    'languages', metadata,
    Column('name', String, primary_key=True)
)

programmer_languages = Table(
    'programmer_languages', metadata,
    Column('programmer_name', String, ForeignKey('programmers.name')),
    Column('language_name', String, ForeignKey('languages.name')),
)

metadata.create_all()


class Programmer(object):
    def __init__(self, name):
        self.name = name

class Language(object):
    def __init__(self, name):
        self.name = name


programmersMapper = mapper(Programmer, programmers)
languagesMapper = mapper(Language, languages)


try:
    conn = db.connect()
    sess = create_session(bind_to=conn)

    guido = Programmer(name='Guido')
    bjarne = Programmer(name='Bjarne')

    python = Language('Python')
    cplusplus = Language('C++')

    sess.save(guido)
    sess.save(bjarne)
    sess.save(python)
    sess.save(cplusplus)
    sess.flush()

    i = programmer_languages.insert()
    conn.execute(
        i, programmer_name='Guido', language_name='Python'
    )

    # Find the programmers without a programming language
    query = sess.query(Programmer)
    db.echo = True
    r = query.select(
        not_(exists(
            [programmer_languages.c.programmer_name],
            programmer_languages.c.programmer_name == programmers.c.name
        ))
    )
    db.echo = False

    print 'result:'
    for p in r:
        print p.name
finally:
    metadata.drop_all()



The generated query is:

SELECT programmers.name AS programmers_name
FROM programmers
WHERE NOT (EXISTS (SELECT programmer_languages.programmer_name AS programmer_name
    FROM programmer_languages, programmers
    WHERE programmer_languages.programmer_name = programmers.name))
ORDER BY programmers.name

Note that the subquery repeats programmers in its own FROM list instead
of correlating with the outer query.






Regards  Manlio Perillo




[sqlalchemy] Re: eager loading and aliasization

2006-12-27 Thread Michael Bayer


the eager load queries are not meant to be modified by the criterion
you send to query.select()... you're not really supposed to have any
awareness of the eager loads at all.  eager loading and lazy loading
both load the full list of child items in all cases.  here's a relevant
FAQ entry:

http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOIN/OUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN

here's a new section in the docs that illustrates how to load relations
only partially (though lazily):

http://www.sqlalchemy.org/docs/adv_datamapping.myt#advdatamapping_properties_working

if you totally want to eagerly load the child items using a specialized
criterion, this would be a job for result set mapping, where you make
whatever query you want.  the basic idea is described here:

http://www.sqlalchemy.org/docs/adv_datamapping.myt#advdatamapping_properties_working

however, i need to update those docs.  to get the eager loads in your
query to work, you need to send along some special information about
which relations you are eager loading in your query, like this:

q = session.query(SomeClass).options(contains_eager('somerelation'),
contains_eager('someotherrelation'))
result = q.instances(myselectstatement.execute())





[sqlalchemy] Re: sess.query and correlated subqueries

2006-12-27 Thread Michael Bayer


it's a bug; it's fixed in revision 2173.





[sqlalchemy] Re: Cannot Drop my Table!

2006-12-27 Thread Michael Bayer


you can close() on the result returned by the execute() call.  however,
if you just lose the reference to the result, that is as good as a close,
since python's garbage collection can clean it up.  also, you arent
usually going to have code as in your example that locks that harshly
(i.e. youll find updates and inserts and stuff interact pretty easily
with concurrent selects in postgres).





[sqlalchemy] Re: Getting Relation Information on the fly

2006-12-27 Thread Michael Bayer


you should write this code against the 0.3 series since the
organization of MapperProperty objects is a little better.  but also,
im not sure if traversing mapper.props() is the best way to go here
(also what is mapped_class.c.self ? easier to say
class_mapper(someclass)...) ...why not just traverse the underlying
Table objects ?  (which from the mapper is
class_mapper(someclass).mapped_table)





[sqlalchemy] Re: many to many on another many to many

2006-12-27 Thread Michael Bayer


without the foreignkeys i cant tell much.  i dont see anything
obviously wrong with it.  would also need to see how you are creating
the objects/assigning/etc.  thats why the simple one-file testcase
says it all... your tables, your mappers, what youre doing exactly to
create the issue, heres the exact problem happening, etc.





[sqlalchemy] Re: Performance of data mapping and plain access

2006-12-27 Thread Michael Bayer


ive committed in r2174 some speed enhancements, not including the
abovementioned change to deferring the on-load copy operation (which
is a more involved change), that affords a 20% speed improvement in
straight instance loads and a 25% speed improvement in instances loaded
via eager loaders...mostly due to the removal of an expensive
__getattribute__ call that was involved as well as an AttributeError
being thrown/caught for each instance created.

deferring the on-load copy operation which i mentioned earlier it
turns out would probably knock another 12% off the loading time, so its
not as dramatic as I thought originally.





[sqlalchemy] Re: startswith/endswith and bindparam error

2006-12-27 Thread jose


Michael Bayer wrote:



the startswith/endswith functions are just sticking a % on either
side of a string argument and using LIKE.  im not exactly sure how they
could accept a bindparam argument since there is a string concatenation
that must take place within the python space.  


Since the startswith parameter should be a string, you don't need to use 
the + operator to concatenate, but...


>>> '%(oper)s%(literal)s' % {'oper': 'my', 'literal': '%'}
'my%'

instead of...

>>> '%(oper)s + %(literal)s' % {'oper': 'my', 'literal': '%'}
'my + %'


you should instead use
LIKE directly with your own bindparam that includes a % on the left
or right side of the string expression.


Yes, I know, but if I have to use LIKE instead of startswith/endswith, I
wonder why we have those operators in sqlalchemy with partial
functionality? ;-)


jo






