[sqlalchemy] Any way to determine whether the url exists or not?

2008-04-30 Thread Olli Wang

Hi, I'm trying to determine whether the database behind a given URL exists,
but didn't find any way to do it. I created a SQLite engine:

engine = create_engine('sqlite:///test.sqlite')

And tried to connect to it:

engine.connect()

But connecting just creates the database file if it doesn't exist. So is
there any way to achieve what I expect? Thanks.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Any way to determine whether the url exists or not?

2008-04-30 Thread Kyle Schaffrick

On Wed, 30 Apr 2008 03:34:16 -0700 (PDT)
Olli Wang [EMAIL PROTECTED] wrote:

 
> Hi, I'm trying to determine whether the url's database exists or not,
> but didn't find any approach. I created a sqlite engine:
>
> engine = create_engine('sqlite:///test.sqlite')
>
> And tried to connect it:
>
> engine.connect()
>
> But it just created the database file if not existed. So is there any
> way to achieve what I expect? Thanks.
 

Since the part after 'sqlite:///' is just a regular path on the
filesystem, one solution is to use an OS call like stat() to test
for the file:

import os
from sqlalchemy import create_engine

engine = create_engine('sqlite:///test.sqlite')
try:
    os.stat(engine.url.database)
    # The database file already exists, so it is safe to connect:
    engine.connect()
except OSError:
    # The database file does not exist yet; handle that case here.
    pass

I don't think there's any way to get that information directly from the
engine, because I don't think SQLite's API provides for it. Maybe
someone can correct me if I'm wrong.

Good luck,
Kyle




[sqlalchemy] Re: Any way to determine whether the url exists or not?

2008-04-30 Thread az

There's no generic way AFAIK.
You could switch on the dbtype part of the db URL and do different things
accordingly; see make_url() in sqlalchemy/engine/url.py,
or see dbcook.usage.sa_engine_defs.py for a similar approach:
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/usage/
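A rough sketch of that dispatch idea (split_db_url below is a hypothetical
stand-in for SQLAlchemy's make_url(), used only to keep the example
self-contained; only the sqlite branch is filled in):

```python
import os

def split_db_url(db_url):
    """Hypothetical stand-in for sqlalchemy.engine.url.make_url():
    split 'sqlite:///test.sqlite' into ('sqlite', 'test.sqlite')."""
    drivername, rest = db_url.split('://', 1)
    # 'sqlite:///rel.db' -> 'rel.db'; 'sqlite:////abs.db' -> '/abs.db'
    database = rest[1:] if rest.startswith('/') else (rest or None)
    return drivername, database

def database_exists(db_url):
    """Best-effort existence check, dispatched on the driver type."""
    drivername, database = split_db_url(db_url)
    if drivername.startswith('sqlite'):
        # A SQLite database is just a file; no path means ':memory:'.
        return database is not None and os.path.exists(database)
    # Server backends would need a backend-specific probe here.
    raise NotImplementedError(drivername)
```

In real code the drivername/database split would come from engine.url, as
Kyle's example already does for the sqlite case.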


On Wednesday 30 April 2008 15:23:36 Kyle Schaffrick wrote:
> On Wed, 30 Apr 2008 03:34:16 -0700 (PDT)
> Olli Wang [EMAIL PROTECTED] wrote:
>
> > Hi, I'm trying to determine whether the url's database exists or
> > not, but didn't find any approach. I created a sqlite engine:
> >
> > engine = create_engine('sqlite:///test.sqlite')
> >
> > And tried to connect it:
> >
> > engine.connect()
> >
> > But it just created the database file if not existed. So is there
> > any way to achieve what I expect? Thanks.
>
> Since the part after 'sqlite:///' is just a regular path on the
> filesystem, so one solution could be to use OS calls like stat() to
> test for the file:
>
> import os
>
> engine = create_engine('sqlite:///test.sqlite')
> try:
>     os.stat(engine.url.database)
>     # Code for when the database exists, such as:
>     engine.connect()
> except OSError:
>     # Code for when the database does not exist
>
> I don't think there's any way to get that information directly from
> the engine, because I don't think SQLite's API provides for it.
> Maybe someone can correct me if I'm wrong.
>
> Good luck,
> Kyle





[sqlalchemy] branch:user_defined_state questions

2008-04-30 Thread az

-
replacing __init__(...) - I see that some effort is taken to keep the
original signature, but the result won't be debuggable IMO.

Can't it be some_init(*a, **kw) doing whatever(*a, **kw) and/or calling
original_init(*a, **kw)? What's inside doesn't change in
sequence/logic anyway...

OR, maybe fix/hack the co_filename and co_firstlineno
code attributes, or whatever, so that inspect.getsource(
damnedclass.__init__) works...

-
Anyway.
I have an obj.pre_save() hook method that is called just before
session.save_or_update(obj), to do last-point validations and/or
set up timestamp-like fields.
There's an idea/need to replace that with
mapper_extension.before_insert(), but I'm not sure these are going
to be equivalent flow-wise. To me, save() is a much higher-level thing
than before_insert() - nothing has happened yet, and an exception
thrown in a would-be save() isn't going to cancel anything big, while
one in before_insert() is going to force a rollback, no? i.e. undo stuff
that IS already done to the db.
Also, pulling things into the flush isn't possible in before_insert() -
not that I'm doing such things, but who knows who's going to use that
hook for what...

Maybe I have to split into two levels? One for save() and one for
before_insert()? Right now the latter is needed only so that
relations and similar associated things, pulled in by some save(), are
also set up correctly. Is there any specific hook for those I can use?
Or maybe I should have both...

Now, why I got here at all... in order to call pre_save() I am
duplicating the save() behaviour to set up a _state if an instance
doesn't have one. Do I really need that?
Looks like I may want to hook in before session.save() calls
session._save_impl(), when the state is already there... which is
only possible in the branch now. OK, I'll do my own _state_setup() for
backward compatibility.


sql.visitors.ClauseVisitor:
- seems the place/order of actual visitation has changed... before, the
order was children then parents (post-order, depth first); now it
jumps straight to the parent (e.g. visit_binary) before any child
(pre-order?). Isn't it about time to fix this visiting pattern,
allowing more flexibility? As I see it, iterate_depthfirst is what I
need, and I hacked the module-level iterate to point to that, but that
isn't a good way - I need it only temporarily. IMO visitors.traverse()
should take one more argument, iterator_func, and be passed
whatever the Visitor's .iterate is, instead of using the hardcoded
module-level iterate.
- those name[6:] things... mmh.

-
Hey, I start to notice a pattern... wherever I use something
SA-internal, the more quick/hackish/inflexible it was done, the
more (frequently) it changes - so I have many, many versions of ways
of using it. Do you want some statistics on which items got the biggest
number of versions so far? (:-)

--
BTW the concrete-polymorphism errors appear at v4371, and the branch has
them too. That's the next thing I'll be looking at.
Is the branch going to be merged any time soon, or not really?

ciao
svil




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Michael Bayer


On Apr 29, 2008, at 5:55 PM, David Bonner wrote:


> Hi, I'm trying to write a mapper extension that notifies a daemon
> about changes made to the DB that it needs to care about.  But it
> looks like after_update() is actually getting called before the UPDATE
> is sent to the db.
>
> Not knowing a better way to debug it, I just threw a pdb.set_trace()
> into my after_update method to pause the process making the change,
> then queried the db directly using sql.  It looks like the change
> hasn't made it to the DB yet, even though pdb.set_trace() is being
> triggered.


after_update() is called after all UPDATE statements have been issued
for that particular mapper.  This includes *only* the table that is
mapped by that mapper, not any other mappers.

Is it possible that you are seeing an UPDATE being issued for an item
that is related to your parent via a many-to-one?  Or that you have
multiple MapperExtensions with after_update() at play, and perhaps
you're seeing a different one fire off?






[sqlalchemy] Re: subquery and inheritance

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 1:48 AM, kris wrote:


> > If I add a simple "correlate" feature to Query in 0.5, you can use
> > the raw Table object to bypass the ORM meaning of Dataset and Base.
> > The query above is not quite complete but I can get an approximation
> > like this:
>
> Is this functionality available currently or am I waiting for
> sqlalchemy 0.5?
>
> > So I think going forward, the concept of "use the Table objects
> > directly when you want to bypass the higher level meaning of
> > Dataset.id", i.e. that it's selecting from a join of Dataset and
> > Base, might be a good way to go with this.
>
> Is there a branch (or trunk) that I should be using?

0.5 is soon to be moved to trunk, but is currently in the
user_defined_state branch at
http://svn.sqlalchemy.org/sqlalchemy/branches/user_defined_state .
But yes, being able to send column expressions into session.query()
is a totally new thing.

In the UDS branch, I made some major changes to adjust for what you're
trying to do.  Over there, if you create a Query using only the
columns bound to a table, i.e. like mytable.c.somecolumn, no clause
adaptation occurs and no mapper definitions affect those expressions,
meaning you can in fact hand-create joins across the individual tables
in the inheritance setup regardless of how the inheriting mappers were
configured (i.e. even if you told your inheriting mappers to by
default load from a UNION).  Only class-bound properties, i.e.
MyClass.somecolumn, are subject to the ORM rules.  So in effect the
Query can now act as a pure pass-through to select().







[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:


> -
> replacing __init__(...) - i see that some effort is taken to keep the
> original signature. But the result wont be debuggable IMO.
>
> cant it be some_init(*a,**kw) doing whatever( *a,**kw) and/or calling
> original_init(*a,**kw) ? whats inside is not changing as
> sequence/logic anyway...
>
> OR, maybe fix/hack with the co_filename and co_firstlineno
> code-attributes or whatever so inspect.getsource(
> damnedclass.__init__) works...

Are we talking about the __init__ placed on instances?  How is that
not debuggable?  I know that pdb is forced to illustrate one line in
the trace as being part of a string, but that's not such a big deal.

> i have some obj.pre_save() hook-method that is called just before
> session.save_or_update( obj), to do last-point validations and/or
> setup of timestamp-like fields.
> There's an idea/ need to replace that with
> mapper_extension.before_insert() but i'm not sure if these are going
> to be equivalent flow-wise. To me, save() is much higher level thing
> than before_insert - nothing has happened yet, and an exception
> thrown in wanna-be save() isnt going to cancel anything big... while
> in before_insert() is going to force a rollback, no? i.e. undo stuff
> that IS already done to the db.
> Also pulling things into flush isnt possible in before_insert() - not
> that i'm doing such things but who knows who's gonna use that hook
> for what..

You should use a SessionExtension for this hook.  It'll give you a
pre-flush() hook, which I think is what you're looking for.  Flushes
do still roll back.  Some DBs can't continue after various kinds of
constraint violations occur (Postgres after a primary key violation
comes to mind).  Although, using SAVEPOINTs, flushes can be made to be
atomic within a larger transaction (and we're going to have great
support for that).
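A sketch of that idea (duck-typed here to stay self-contained; in real code
this would subclass sqlalchemy.orm.interfaces.SessionExtension and be passed
as sessionmaker(extension=...); the pre_save() hook name is the poster's
own, not an SA API):

```python
class PreSaveExtension(object):
    """Sketch of a SessionExtension-style hook: run each pending
    object's pre_save() before any SQL is emitted by the flush."""

    def before_flush(self, session, flush_context, instances):
        # session.new holds pending inserts, session.dirty pending
        # updates; both are about to be written by this flush.
        for obj in list(session.new) + list(session.dirty):
            pre_save = getattr(obj, 'pre_save', None)
            if pre_save is not None:
                pre_save()
```

Because before_flush runs before the flush's SQL, an exception raised in
pre_save() aborts the flush without having touched the database, preserving
the "nothing has happened yet" semantics of the original hook.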



> now, why i got here at all... in order to call the pre_save() i am
> duplicating the save() behaviour to setup a _state if an instance
> doesnt have one. do i really need that?
> looks like i may want to hook before session.save() calls the
> session._save_impl(), when the state is already there... which is
> only possible in the branch now. ok i'll do my own _state_setup() for
> backward compatibility

The ORM ensures that instances always have a _state.  I think the
primary entry point for that is via the instrumented __init__() method,
as well as the on-load hooks.  I don't know why you'd need to set up a
_state even in 0.4; that's an internal detail which the ORM handles
(unless you're blowing away SA's __init__ decorator, which would be
bad).


> sql.visitors.ClauseVisitor:
> - seems the place/order of actual visitation has changed... before
> order was children then parents (post-walk, depth first), now it
> jumps straight onto parent (e.g. visit_binary) before any child
> (pre-walk ?). Isn't quite a time to fix this visiting pattern,
> allowing more flexibility?
>
> As i see, the iterate_depthfirst is what i
> need, and i hacked module-level iterate to point to that, but that
> isn't a good way - i need it temporarily. IMO the visitors.traverse()
> should have one more argument iterator_func, and that to be passed
> whatever the Visitor's .iterate is, instead of using hardcoded
> module-level iterate.

I just added traverse_depthfirst() for you.  In reality, I don't need
iterate/traverse_depthfirst() at all and I was almost going to just
remove them the other day, so maybe you should just maintain these on
your end if you need a very specific traversal.

As far as the callable goes, that's nice for a general-use API, but
that's not really what visitors is right now - I don't need to have all
of my ORM functions piecing together iterate + traversal functions,
since for my needs the order of iteration is never important.  When I
need to copy and modify the structure, I have cloned_traverse() and
replacement_traverse(), which have very specific traversal orders based
on what they need to do.


> - those name[6:] things... mmh.

There was a metaclass trick I was working on there, but it ran into
problems.  So that approach is temporary until something class-level
can be worked out.


> hey, i start to notice some pattern ... where i use something
> sa-internal, the more quick/hackish/inflexible it has been done, the
> more (frequently) it changes - so i have many many versions of ways
> of using it. do u want some statistix which items got biggest number
> of versions so far ? (:-)

Here's an idea - don't write your application to SQLAlchemy
internals!  Of course the more hacky ones are going to change more
frequently, since we're still figuring out the best way to do them
(that's why they're... hacky!).  I can't think of any code even in
0.5's progress that isn't subject to being massively changed yet again
(for example, sync.py and dependency.py are very solid now, and aren't
changing at all... but what if someday we reduce the

[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Rick Morrison
> then queried the db *directly using sql*.  It looks like the change
> hasn't made it to the DB yet

Also possible is that you're using an MVCC DB such as Postgres or
Oracle, and you're looking at an old, pre-update version of the data,
as your direct SQL would be in a separate transaction.




[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread jason kirtland

Michael Bayer wrote:
 
> On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:
>
> > -
> > replacing __init__(...) - i see that some effort is taken to keep the
> > original signature. But the result wont be debuggable IMO.
> >
> > cant it be some_init(*a,**kw) doing whatever( *a,**kw) and/or calling
> > original_init(*a,**kw) ? whats inside is not changing as
> > sequence/logic anyway...
> >
> > OR, maybe fix/hack with the co_filename and co_firstlineno
> > code-attributes or whatever so inspect.getsource(
> > damnedclass.__init__) works...
>
> are we talking about the __init__ placed on instances ?  how is that
> not debuggable ?   I know that pdb is forced to illustrate one line in
> the trace as being part of a string but thats not such a big deal.

Also, the __init__ decorator is optional in UDS/0.5.  The class
instrumentor will receive an install_member('__init__', <SA's default
generated function>) call and can do whatever it likes with that.  The
toolkit is in place for building and substituting your own non-exec'd
__init__ that does the setup work SA wants done on init.





[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread David Bonner

On Wed, Apr 30, 2008 at 10:56 AM, Rick Morrison [EMAIL PROTECTED] wrote:
> > then queried the db directly using sql.  It looks like the change
> > hasn't made it to the DB yet
>
> Also possible is that you're using an MVCC DB such as Postgres or
> Oracle, and you're looking at an old, pre-update version of the data,
> as your direct SQL would be in a separate transaction

We are using Postgres (8.1.6), but in the case of the SQLAlchemy code
I've got transactional set to off for the session.  I'm not too
familiar with the specifics of MVCC (just had to google it,
actually)... is it possible the new data isn't visible to the other
process if the updating process still has a live cursor?

-- 
david bonner
[EMAIL PROTECTED]




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread David Bonner

On Wed, Apr 30, 2008 at 9:29 AM, Michael Bayer [EMAIL PROTECTED] wrote:
> after_update() is called after all UPDATE statements have been issued
> for that particular mapper.  This includes *only* the table that is
> mapped by that mapper, not any other mappers.
>
> Is it possible that you are seeing an UPDATE being issued for an item
> that is related to your parent via a many-to-one ?   Or that you have
> multiple ME's with after_update() at play and perhaps you're seeing a
> different one fire off.

It was, but I've managed to reproduce the problem with a fairly
minimal test case:

import pdb
import datetime
import pprint

from pkg_resources import require
require('SQLAlchemy==0.4.3')
from sqlalchemy import *
from sqlalchemy.orm import *

class MyExtension(MapperExtension):
    def after_update(self, mapper, connection, instance):
        #pdb.set_trace()
        pprint.pprint(instance)
        return EXT_CONTINUE

class Schedule(object):
    pass

dburl = 'postgres://names have been changed to protect the innocent'
engine = create_engine(dburl, strategy='threadlocal')
meta = MetaData(engine)
Session = scoped_session(sessionmaker(bind=engine,
                                      autoflush=False,
                                      transactional=False))
mapper = Session.mapper
meta.reflect()
mapper(Schedule, meta.tables['schedules'], extension=MyExtension())

s = Schedule.query.first()
s.notes = str(datetime.datetime.now())
Session.flush([s])



When this code drops me into pdb, the data in instance.notes looks
like the new value, but querying the db in a separate process gets me
the old value.

-- 
david bonner
[EMAIL PROTECTED]




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Rick Morrison
I suppose that depends on the behavior of the DB-API interface; in this
case I guess that's psycopg.

Anyway, I'm certainly not sure it's an MVCC snapshot that's causing your
problem, but it does seem like at least a possibility. The certain way
is to check the update status inside the same transaction that did the
update -- in general you can't count on transaction B seeing transaction
A's changes unless B starts after A commits. Whether an open psycopg
cursor implies an open transaction -- beats me.
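Rick's visibility rule (B can't see A's changes until A commits) can be
demonstrated with any transactional backend; here is a minimal sketch using
the stdlib sqlite3 module standing in for Postgres, with two connections
playing the two processes:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file stand in for the
# updating process and the separate process querying the DB.
path = os.path.join(tempfile.mkdtemp(), 'demo.db')
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (notes TEXT)")
writer.execute("INSERT INTO t VALUES ('old')")
writer.commit()

# The writer updates inside a still-open transaction
# (analogous to being paused in after_update() mid-flush)...
writer.execute("UPDATE t SET notes = 'new'")

# ...so the reader still sees the pre-update value.
before = reader.execute("SELECT notes FROM t").fetchall()[0][0]

writer.commit()

# Only after the writer commits does the change become visible.
after = reader.execute("SELECT notes FROM t").fetchall()[0][0]
print(before, after)
```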

Rick




[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:

> should have one more argument iterator_func, and that to be passed
> whatever the Visitor's .iterate is, instead of using hardcoded

fine.  r4607:

def traverse_using(iterator, obj, visitors):
    """visit the given expression structure using the given iterator
    of objects."""







[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread David Bonner

Oops - with the pdb.set_trace() uncommented, obviously.  Sorry.

class MyExtension(MapperExtension):
    def after_update(self, mapper, connection, instance):
        pdb.set_trace()
        pprint.pprint(instance)
        return EXT_CONTINUE

-- 
david bonner
[EMAIL PROTECTED]




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 11:22 AM, David Bonner wrote:


> When this code drops me into pdb, the data in instance.notes looks
> like the new value, but querying the db in a separate process gets me
> the old value.

flush() always uses a transaction, so when your pdb breakpoint hits, the
transaction has not been committed yet and its results are not visible
outside of the transaction.  The "transactional" keyword on Session
does not mean "don't use transactions at all"; it means "don't
automatically enter a transaction outside of a flush".  It's been
renamed to "autocommit" in 0.5.





[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread David Bonner

On Wed, Apr 30, 2008 at 11:30 AM, Michael Bayer
[EMAIL PROTECTED] wrote:
> flush() always uses a transaction, so when your pdb process hits, the
> transaction has not been committed yet and results are not visible
> outside of the transaction.  the transactional keyword on Session
> does not mean don't use transactions at all, it means don't
> automatically enter a transaction outside of a flush.  Its been
> renamed to autocommit in 0.5.

Ah, yeah, that would do it.  Any suggestions for other hooks that
might do what I'm looking for, or should I just handle this myself
before and after the flush?

-- 
david bonner
[EMAIL PROTECTED]




[sqlalchemy] sequence-related question

2008-04-30 Thread Eric Lemoine

Hello

I insert a new line in a table using this:

campfacility = Campfacility(prop1, prop2)
model.Session.save(campfacility)
model.Session.commit()

The campfacility id is handled by a postgres sequence.

What I'd like to do is:

campfacility = Campfacility(prop1, prop2)
seq = Sequence('some_sequence')
nextid = model.Session.execute(seq)
model.Session.save(campfacility)
model.Session.commit()

to know before inserting the line what id it will get. The above code
doesn't seem thread-safe to me:

thread 1                            thread 2
execute(seq) -> nextid = n
                                    execute(seq) -> nextid = n
model.Session.save(campfacility)
                                    model.Session.save(campfacility)
                                    <- BUG, nextid isn't correct

I'm sure there's a way to make this thread-safe but right now I just
don't know how.

Can someone help?

Thanks a lot,

PS: I just love SA ;-)
--
Eric




[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread az

On Wednesday 30 April 2008 18:25:25 Michael Bayer wrote:
> On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:
> > should have one more argument iterator_func, and that to be
> > passed whatever the Visitor's .iterate is, instead of using
> > hardcoded
>
> fine.  r4607:
>
> def traverse_using(iterator, obj, visitors):
>     """visit the given expression structure using the given
>     iterator of objects."""

Yeah, but how do I get the ClauseVisitor to use that?
The idea was for ClauseVisitor's traverse to use ClauseVisitor's
iterate (or _iterate), pointing by default to the module's plain
iterate(), so inheriting and replacing _iterate with e.g.
iterate_depthfirst (or whatever fancy) would work, without a need to
reinvent the traverse mechanism at all.
Unless you're fading out those classes for one reason or another?




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 12:15 PM, David Bonner wrote:


 On Wed, Apr 30, 2008 at 11:30 AM, Michael Bayer
 [EMAIL PROTECTED] wrote:
 flush() always uses a transaction, so when your pdb process hits, the
 transaction has not been committed yet and results are not visible
 outside of the transaction.  the transactional keyword on Session
 does not mean don't use transactions at all, it means don't
 automatically enter a transaction outside of a flush.  Its been
 renamed to autocommit in 0.5.

 Ah, yeah, that would do it.  Any suggestions for other hooks that
 might do what I'm looking for, or should I just handle this myself
 before and after the flush?

If you want pre/post-flush activities, there's a SessionExtension which
hooks into Session for that.  You can set it up with the
sessionmaker() function so that it's always plugged in.
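For the original goal (notifying a daemon only once changes are actually
visible), an after_commit hook is the safer place.  A duck-typed sketch,
assuming SA 0.5's SessionExtension method signatures (notify_daemon is a
hypothetical callback, not an SA API):

```python
class DaemonNotifyExtension(object):
    """Sketch of a SessionExtension-style hook that defers daemon
    notification until the transaction has committed, i.e. until
    the changes are visible to other connections."""

    def __init__(self, notify_daemon):
        self.notify_daemon = notify_daemon  # hypothetical callback
        self.pending = []

    def after_flush(self, session, flush_context):
        # SQL has been issued, but we are still inside the
        # transaction: record what changed, don't notify yet.
        self.pending.extend(session.new)
        self.pending.extend(session.dirty)

    def after_commit(self, session):
        # Now the data is visible outside the transaction.
        for obj in self.pending:
            self.notify_daemon(obj)
        self.pending = []
```

In real code this would subclass sqlalchemy.orm.interfaces.SessionExtension
and be passed as sessionmaker(extension=DaemonNotifyExtension(...)).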





[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 12:26 PM, [EMAIL PROTECTED] wrote:


> On Wednesday 30 April 2008 18:25:25 Michael Bayer wrote:
> > On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:
> > > should have one more argument iterator_func, and that to be
> > > passed whatever the Visitor's .iterate is, instead of using
> > > hardcoded
> >
> > fine.  r4607:
> >
> > def traverse_using(iterator, obj, visitors):
> >     """visit the given expression structure using the given
> >     iterator of objects."""
>
> yeah, i how i get the ClauseVisitor to use that...
> The idea was for the ClauseVisitor's traverse to use ClauseVisitor's
> iterate (or _iterate), pointing by default to module's plain
> iterate(), so inheriting and replaceing _iterate with e.g.
> iterate_depth_first (or whatever fancy) would work, without a need to
> reinvent the traverse mechanism at all.
> unless u're fading out those classes for a reason or another?


No, they're hanging around.  Just subclass ClauseVisitor for now.





[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread David Bonner

On Wed, Apr 30, 2008 at 2:50 PM, Michael Bayer [EMAIL PROTECTED] wrote:
> if you want pre/post flush activities theres a SessionExtension which
> hooks into Session for that.  You can set it up with the
> sessionmaker() function so that its always plugged in.

Thanks, I'll look into that.

And thanks again for the amazingly quick and helpful replies.

-- 
david bonner
[EMAIL PROTECTED]




[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 12:56 PM, Eric Lemoine wrote:


 thread 1 thread 2
 execute(seq) - nextid = n

 execute(seq) - nextid = n
 model.Session.save(campfacility)

 model.Session.save(campfacility) - BUG, nextid isn't correct

what's correct here, you'd like the integer identifier to be in exact
row-insert order?   if the column is a non-primary-key column, the
sequence will be executed inline within the executed SQL so that it
will in fact be in row-insert order (i.e. update table set
foo_id=nextval('myseq')).  you can also do this at flush time by
assigning func.nextval(literal_column("my_sequence_name")) to the
mapped attribute (assuming it's not a PK).

for primary keys we need to know the ID beforehand in most cases since
PG historically has not had a way to get that ID back nicely after
insert (it has INSERT RETURNING now but we haven't standardized on
that yet).

So if it's a PK, I would question why you actually need an incrementing
id in row-insert order on the table in the first place.  Usually, if I
want to load records in the order in which they were inserted in a
foolproof way, I'll use a UTC timestamp column with an index for
that... since the information you are looking for here is "what was
inserted when?"
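[Editor's sketch: the UTC-timestamp approach described above, using only the standard-library sqlite3 module. The table and column names are invented for illustration; any database with an indexable timestamp column works the same way.]

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, created_at TEXT)")
# an index on the timestamp column keeps the ordered load cheap
conn.execute("CREATE INDEX ix_records_created ON records (created_at)")

def insert_record(name):
    # store an ISO-8601 UTC timestamp alongside each row
    ts = datetime.now(timezone.utc).isoformat()
    conn.execute("INSERT INTO records (name, created_at) VALUES (?, ?)", (name, ts))

for name in ("first", "second", "third"):
    insert_record(name)

# rows come back in insertion order; id breaks ties when two rows
# happen to share the same timestamp
rows = [r[0] for r in conn.execute("SELECT name FROM records ORDER BY created_at, id")]
print(rows)  # ['first', 'second', 'third']
```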





[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread az
  i have some obj.pre_save() hook-method that is called just before
  session.save_or_update( obj), to do last-point validations and/or
  setup of timestamp-like fields.
  There's an idea/ need to replace that with
  mapper_extension.before_insert() but i'm not sure if these are
  going to be equivalent flow-wise. To me, save() is much higher
  level thing than before_insert - nothing has happened yet, and an
  exception thrown in wanna-be save() isnt going to cancel anything
  big... while in before_insert() is going to force a rollback, no?
  i.e. undo stuff that IS already done to the db.
  Also pulling things into flush isnt possible in before_insert() -
  not that i'm doing such things but who knows who's gonna use that
  hook for what..

 you should use a SessionExtension for this hook.  It'll give you a
 pre- flush() which I think it what you're looking for.  flushes do
 roll back still.   Some DB's can't continue after various kinds of
 constraint violations occur (Postgres after a primary key violation
 comes to mind).  Although using SAVEPOINTs, flushes can be made to
 be atomic within a larger transaction (and we're going to have
 great support for that).
hmm i did look at SessionExtension but that seems a lower/later level 
than i need. but i'll think about it.
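[Editor's sketch: the before-flush hook idea being discussed, modeled in plain Python. This is a toy stand-in, not SQLAlchemy's actual SessionExtension API; it only illustrates the flow Michael describes: the hook fires before any database work, so a validation error raised there cancels the flush before anything touches the DB.]

```python
class ToyExtension(object):
    """Stand-in for a session extension: hooks default to no-ops."""
    def before_flush(self, session, instances):
        pass

class StampExtension(ToyExtension):
    def before_flush(self, session, instances):
        # last-point validation / timestamp-like setup, as in the
        # pre_save() hook described above
        for obj in instances:
            if not obj.get("name"):
                raise ValueError("name is required")
            obj["stamped"] = True

class ToySession(object):
    def __init__(self, extension):
        self.extension = extension
        self.new = []
        self.db = []          # pretend database

    def save(self, obj):
        self.new.append(obj)

    def flush(self):
        # hook fires first; an exception here aborts before any insert
        self.extension.before_flush(self, list(self.new))
        self.db.extend(self.new)
        self.new = []

session = ToySession(StampExtension())
session.save({"name": "campfacility"})
session.flush()
print(session.db)  # [{'name': 'campfacility', 'stamped': True}]

bad = ToySession(StampExtension())
bad.save({"name": ""})
try:
    bad.flush()
except ValueError:
    pass
print(bad.db)  # [] -- nothing reached the "database"
```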

  now, why i got here at all... in order to call the pre_save() i
  am duplicating the save() behaviour to setup a _state if an
  instance doesnt have one. do i really need that?
  looks like i may want to hook before session.save() calls the
  session._save_impl(), when the state is already there... which is
  only possible in the branch now. ok i'll do my own _state_setup()
  for backward compatibility

 the ORM ensures that instances always have a _state.  I think the
 primary entry point on that is via the instrumented __init__()
 method as well as the on-load hooks.   I dont know why you'd need
 to set up a _state even in 0.4, that's an internal detail which the
 ORM handles (unless you're blowing away SA's __init__ decorator,
 which would be bad).
ok then, u bet its not needed... for whatever reason it is there - i 
dont remember exactly - around v3463 /InstanceState first 
appearance... i'll take it out and see what breaks.

  hey, i start to notice some pattern ... where i use something
  sa-internal, the more quick/hackish/inflexible it has been done,
  the more (frequently) it changes - so i have many many versions
  of ways of using it. do u want some statistix which items got
  biggest number of versions so far ? (:-)

 heres an idea - dont write your application to sqlalchemy
 internals !   of course the more hacky ones are going to change
 more frequently since we're still figuring out the best way to do
 them (thats why they're...hacky !)I cant think of any code even
 in 0.5's progress that isnt subject to being massively changed yet
 again (for example, sync.py and dependency.py are very solid now,
 and aren't changing at all...but what if someday we reduce the
 overall complexity of the flush process such that they aren't
 needed ?  its possible)
heh, dont take me wrong. it's not for the first time u know - i'm not 
complaining. My idea is that this statistical way maybe we can 
figure out possibly shaky pieces not yet under proper interface... 
e.g. few examples of my pokings so far are: polymorphism, whole 
attribute-access story, mapper's selecttable, get_equivalent_columns, 
sql- expressions, operators and visitors, property_get_join, 
properties access and iteration, collection.append, ...
As of SA internals... they represent quite a powerful language, much 
more juicy than the official API, that's why i'm using it... i just 
have to. i guess (or i hope?) sooner or later most of what i have 
found usable will become an interface in a way or another. 
ah nevermind. 

  btw the concrete-polymorphism errors come at v4371, and branch
  has them too. Next thing i'll be looking at.
  is the branch going to be merged any soon or not really?

 its going to merge extremely soon.
ok.. the error is about B inheriting A, concrete, inserting an instance 
of B, then doing session.query(A), and that yields nothing. Note the 
polymunion contents... if i add A there, then the error goes away (???). 
i do remember some similar case maybe half a year ago... test 
attached.

ciao
svilen




_test_ABC_all.py
Description: application/python


[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Eric Lemoine

On Wed, Apr 30, 2008 at 9:02 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 30, 2008, at 12:56 PM, Eric Lemoine wrote:

  
   thread 1 thread 2
   execute(seq) - nextid = n
  
   execute(seq) - nextid = n
   model.Session.save(campfacility)
  
   model.Session.save(campfacility) - BUG, nextid isn't correct

  whats correct here, you'd like the integer identifier to be in exact
  row-insert order ?   if the column is a non-primary key column, the
  sequence will be executed inline within the executed SQL so that it
  will in fact be in row insert order (i.e. update table set
  foo_id=nextval(myseq)).  you can also do this at flush time by
  assigning func.nextval(literal_column(my_sequence_name)) to the
  mapped attribute (assuming its not a PK).

  for primary keys we need to know the ID beforehand in most cases since
  PG historically has not had a way to get that ID back nicely after
  insert (it has INSERT RETURNING now but we haven't standardized on
  that yet).

Yes, the sequence is my table's PK. What I want to know is the PK
value of the row I'm going to insert (or have just inserted). So I
guess this is indeed INSERT RETURNING.

  So if its a PK, I would question why you actually need an incrementing
  id in row-insert order on the table in the first place.  Usually, if I
  want to load records in the order in which they were inserted in a
  foolproof way, I'll use a UTC timestamp column with an index for
  thatsince the information you are looking for here is what was
  inserted when?

That's not what I want. See above. I hope it's clear enough this time.

Thanks a lot Michael,

--
Eric




[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Eric Lemoine

On Wed, Apr 30, 2008 at 9:02 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 30, 2008, at 12:56 PM, Eric Lemoine wrote:

  
   thread 1 thread 2
   execute(seq) - nextid = n
  
   execute(seq) - nextid = n
   model.Session.save(campfacility)
  
   model.Session.save(campfacility) - BUG, nextid isn't correct

Just realized that my schema with the two threads didn't come up as
expected, which probably didn't help in understanding my issue.

--
Eric




[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 3:23 PM, Eric Lemoine wrote:


 Yes, the sequence is my table's PK. What I want to know is the PK
 value of the line I'm going to insert (or I've just inserted). So I
 guess this is indeed INSERT RETURNING.

if you pre-execute the sequence, the number you get back from it is
yours to keep and will never come up again (unless the sequence is
manually manipulated).   It's safe to use for a primary key value at
any time regardless of concurrent threads which also use that sequence.
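[Editor's sketch: the "yours to keep" property, demonstrated with a toy counter standing in for a real database sequence. The assumption is that a Postgres sequence behaves like an atomic counter where each nextval() hands out a value exactly once; under that model, pre-fetched ids are unique across threads by construction.]

```python
import threading

class ToySequence(object):
    """Toy stand-in for a DB sequence: nextval() hands out each value once."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def nextval(self):
        with self._lock:
            self._value += 1
            return self._value

seq = ToySequence()
claimed = []
claimed_lock = threading.Lock()

def worker():
    # each thread pre-executes the sequence and keeps its ids
    ids = [seq.nextval() for _ in range(100)]
    with claimed_lock:
        claimed.extend(ids)

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# no two threads ever received the same value, so every pre-fetched
# id is safe to use as a primary key
assert len(claimed) == len(set(claimed)) == 500
print("all 500 ids unique")
```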

SQLAlchemy does this process for you automatically, so if you just set  
the Sequence() on your table's primary key Column, you can safely save  
and flush your instances without assigning any identifier, and the  
newly generated id is present on the corresponding class attributes,  
i.e.:

mytable = Table('mytable', metadata, Column('id', Sequence('my_seq'),  
primary_key=True), ...)

mapper(MyClass, mytable, ...)

x = MyClass()
session.save(x)
session.flush()

newly_inserted_id = x.id

no threading issues to worry about.   Same thing happens with raw  
inserts, if you leave the id column out of the values list:

result = engine.execute(mytable.insert())
newly_inserted_id = result.last_inserted_ids()[0]











[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Eric Lemoine

On Wed, Apr 30, 2008 at 9:23 PM, Eric Lemoine [EMAIL PROTECTED] wrote:
 On Wed, Apr 30, 2008 at 9:02 PM, Michael Bayer [EMAIL PROTECTED] wrote:
  
  
On Apr 30, 2008, at 12:56 PM, Eric Lemoine wrote:
  

 thread 1 thread 2
 execute(seq) - nextid = n

 execute(seq) - nextid = n
 model.Session.save(campfacility)

 model.Session.save(campfacility) - BUG, nextid isn't correct
  
whats correct here, you'd like the integer identifier to be in exact
row-insert order ?   if the column is a non-primary key column, the
sequence will be executed inline within the executed SQL so that it
will in fact be in row insert order (i.e. update table set
foo_id=nextval(myseq)).  you can also do this at flush time by
assigning func.nextval(literal_column(my_sequence_name)) to the
mapped attribute (assuming its not a PK).
  
for primary keys we need to know the ID beforehand in most cases since
PG historically has not had a way to get that ID back nicely after
insert (it has INSERT RETURNING now but we haven't standardized on
that yet).

  Yes, the sequence is my table's PK. What I want to know is the PK
  value of the line I'm going to insert (or I've just inserted). So I
  guess this is indeed INSERT RETURNING.

With psycopg2, I know people using this:

sql = "INSERT INTO \"%s\" (%s) VALUES (%s)" % (self.table, columns, values)
cursor = db.cursor()
cursor.execute(str(sql), values)
cursor.execute("SELECT currval('%s');" % sequence_name)
id = cursor.fetchone()[0]
self.db.commit()

I'm wondering if this is safe. And if so, if there's a way to do the
same with SA.

Thanks,

--
Eric




[sqlalchemy] Aliasing automatic joins with relation.any()

2008-04-30 Thread Yannick Gingras


Hi, I have two classes, Item and ItemId where one Item can have
multiple ItemIds accessible from its ref_ids relation.

I can do:

  Item.query().filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))

if I want all the items except the ones with an ItemId with ref_id
set to OP-10-47000 and I can do

  Item.query().filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))\
  .join("ref_ids", aliased=True).filter_by(ref_id="OP-10")

and I will get all the Items with an ItemId of OP-10 except the ones
with OP-10-47000.  This is great.  

However, if I flip the order and I do:

  Item.query().join("ref_ids", aliased=True).filter_by(ref_id="OP-10")\
  .filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))
  
I get the following error:

  <class 'sqlalchemy.exceptions.InvalidRequestError'>: Select
  statement 'SELECT 1 FROM items, item_ids AS item_ids_1 WHERE
  items.id = item_ids_1.item_id AND item_ids_1.ref_id =
  :item_ids_ref_id_1' is overcorrelated; returned no 'from' clauses

I had the same error with the first query before I aliased it so I
assume that it's an aliasing problem.  How can I alias the
ref_ids.any() clause?

-- 
Yannick Gingras




[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Eric Lemoine

On Wed, Apr 30, 2008 at 9:44 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 30, 2008, at 3:23 PM, Eric Lemoine wrote:

  
   Yes, the sequence is my table's PK. What I want to know is the PK
   value of the line I'm going to insert (or I've just inserted). So I
   guess this is indeed INSERT RETURNING.

  if you pre-execute the sequence, the number you get back from it is
  yours to keep and will never come up again (unless the sequence is
  manually manipulated).   Its safe to use for a primary key value at
  any time regardless of concurrent threads which also use that sequence.

  SQLAlchemy does this process for you automatically, so if you just set
  the Sequence() on your table's primary key Column, you can safely save
  and flush your instances without assigning any identifier, and the
  newly generated id is present on the corresponding class attributes,
  i.e.:

  mytable = Table('mytable', metadata, Column('id', Sequence('my_seq'),
  primary_key=True), ...)

  mapper(MyClass, mytable, ...)

  x = MyClass()
  session.save(x)
  session.flush()

  newly_inserted_id = x.id

  no threading issues to worry about.   Same thing happens with raw
  inserts, if you leave the id column out of the values list:

  result = engine.execute(mytable.insert())
  newly_inserted_id = result.last_inserted_ids()[0]

Great! Thanks,

--
Eric




[sqlalchemy] Re: sequence-related question

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 3:45 PM, Eric Lemoine wrote:


 With psycopg2, I know people using this:

 sql = "INSERT INTO \"%s\" (%s) VALUES (%s)" % (self.table, columns, values)
 cursor = db.cursor()
 cursor.execute(str(sql), values)
 cursor.execute("SELECT currval('%s');" % sequence_name)
 id = cursor.fetchone()[0]
 self.db.commit()

 I'm wondering if this is safe. And if so, if there's a way to do the
 same with SA.


this is a slight bit less safe than SQLA's default practice, in the  
sense that if the application used the same connection in two  
concurrent threads (which is a bad practice in itself), the results  
may be incorrect.
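[Editor's sketch: why sharing one connection between threads can break currval. This is a toy model, not psycopg2; the assumption, matching Postgres behavior, is that currval() returns the last value nextval() produced in the current session/connection.]

```python
class ToySequence(object):
    def __init__(self):
        self.value = 0

class ToyConnection(object):
    """currval() is per-connection, like a Postgres session."""
    def __init__(self, seq):
        self.seq = seq
        self.last = None

    def nextval(self):
        self.seq.value += 1
        self.last = self.seq.value
        return self.last

    def currval(self):
        return self.last

seq = ToySequence()
conn_a = ToyConnection(seq)
conn_b = ToyConnection(seq)

# interleaved inserts on two SEPARATE connections stay correct:
conn_a.nextval()            # a's insert draws id 1
conn_b.nextval()            # b's insert draws id 2
print(conn_a.currval())     # 1 -- unaffected by b's activity
print(conn_b.currval())     # 2

# but two threads sharing ONE connection could each call nextval()
# before reading currval(), and both would then see the later value:
shared = ToyConnection(seq)
shared.nextval()            # "thread 1" draws id 3
shared.nextval()            # "thread 2" draws id 4
print(shared.currval())     # 4 -- thread 1 reads the wrong id
```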

What I don't see above is how the sequence is getting executed.  Is the
column a SERIAL column, and the sequence is executed automatically?
or is the nextval('seqname') embedded into the VALUES clause above
literally?

also, still curious why usage of currval is even needed.




[sqlalchemy] Re: branch:user_defined_state questions

2008-04-30 Thread az

  fine.  r4607:
  def traverse_using(iterator, obj, visitors):
      """visit the given expression structure using the given
      iterator of objects."""
 
  yeah, but how do i get the ClauseVisitor to use that...
  The idea was for the ClauseVisitor's traverse to use
  ClauseVisitor's iterate (or _iterate), pointing by default to the
  module's plain iterate(), so inheriting and replacing _iterate
  with e.g. iterate_depth_first (or whatever fancy) would work,
  without a need to reinvent the traverse mechanism at all.
  unless u're fading out those classes for a reason or another?

 no theyre hanging around.  Just subclass ClauseVisitor for now.

ok, i need the traverse_using(), the iterate_depth_first (which i can 
do myself as u said so its not really needed), and the pre-traverse 
name[6:] filtering:
   visitors = dict( (name[6:], getattr(self, name))
 for name in dir(self)
 if name.startswith('visit_') )
u have this filtering copied twice, i guess it has to be in one place. 
so the actual traverse-call has to be one more method.. to be easily 
replaceable.
that should make a good clean interface.




[sqlalchemy] Re: Aliasing automatic joins with relation.any()

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 3:51 PM, Yannick Gingras wrote:



 Hi, I have two classes, Item and ItemId where one Item can have
 multiple ItemIds accessible from its ref_ids relation.

 I can do:

  Item.query().filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))

 if I want all the items except the ones with an ItemId with ref_id
 set to OP-10-47000 and I can do

  Item.query().filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))\
  .join("ref_ids", aliased=True).filter_by(ref_id="OP-10")

 and I will get all the Items with an ItemId of OP-10 except the ones
 with OP-10-47000.  This is great.

 However, if I flip the order and I do:

  Item.query().join("ref_ids", aliased=True).filter_by(ref_id="OP-10")\
  .filter(not_(Item.ref_ids.any(ref_id="OP-10-47000")))

 I get the following error:

  <class 'sqlalchemy.exceptions.InvalidRequestError'>: Select
  statement 'SELECT 1 FROM items, item_ids AS item_ids_1 WHERE
  items.id = item_ids_1.item_id AND item_ids_1.ref_id =
  :item_ids_ref_id_1' is overcorrelated; returned no 'from' clauses

 I had the same error with the first query before I aliased it so I
 assume that it's an aliasing problem.  How can I alias the
 ref_ids.any() clause?

OK: the item_ids inside the any() is getting aliased per the
join("ref_ids", aliased=True), so that both items and item_ids are
correlating to the parent.   The fix for this would be to call
reset_joinpoint() after the filter_by() to cancel out future filter
criterion from sticking to the ref_ids join.

Although, I think it may be wise here if SQLA set the "correlate"
value on the expression returned by any() to prevent these errors from
occurring at all.  Below is a patch that does it.  It needs a little
bit of tweaking to work with inheritance though so i might add a
ticket for this.


Index: lib/sqlalchemy/orm/properties.py
===================================================================
--- lib/sqlalchemy/orm/properties.py    (revision 4592)
+++ lib/sqlalchemy/orm/properties.py    (working copy)
@@ -332,21 +332,21 @@
         if criterion and target_adapter:
             criterion = target_adapter.traverse(criterion)
 
-        return j, criterion, dest
+        return j, criterion, source, dest
 
     def any(self, criterion=None, **kwargs):
         if not self.prop.uselist:
             raise exceptions.InvalidRequestError("'any()' not implemented for scalar attributes. Use has().")
-        j, criterion, from_obj = self._join_and_criterion(criterion, **kwargs)
+        j, criterion, source, from_obj = self._join_and_criterion(criterion, **kwargs)
 
-        return sql.exists([1], j & criterion, from_obj=from_obj)
+        return sql.exists([1], j & criterion, from_obj=from_obj).correlate(source)
 
     def has(self, criterion=None, **kwargs):
         if self.prop.uselist:
             raise exceptions.InvalidRequestError("'has()' not implemented for collections.  Use any().")
-        j, criterion, from_obj = self._join_and_criterion(criterion, **kwargs)
+        j, criterion, source, from_obj = self._join_and_criterion(criterion, **kwargs)
 
-        return sql.exists([1], j & criterion, from_obj=from_obj)
+        return sql.exists([1], j & criterion, from_obj=from_obj).correlate(source)
 
     def contains(self, other):
         if not self.prop.uselist:
@@ -360,8 +360,8 @@
 
     def _negated_contains_or_equals(self, other):
         criterion = sql.and_(*[x==y for (x, y) in zip(self.prop.mapper.primary_key, self.prop.mapper.primary_key_from_instance(other))])
-        j, criterion, from_obj = self._join_and_criterion(criterion)
-        return ~sql.exists([1], j & criterion, from_obj=from_obj)
+        j, criterion, source, from_obj = self._join_and_criterion(criterion)
+        return ~sql.exists([1], j & criterion, from_obj=from_obj).correlate(source)
 
     def __ne__(self, other):
         if other is None:





[sqlalchemy] Re: Aliasing automatic joins with relation.any()

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 4:20 PM, Michael Bayer wrote:


 Although, I think it may be wise here if SQLA set the correlate
 value on the expression returned by any() to prevent these errors from
 occuring at all.  Below is a patch that does it.  It needs a little
 bit of tweaking to work with inheritance though so i might add a
 ticket for this.


OK, you'll have to wait for 0.5 for this feature :).  I have it  
working over there along with some other refinements.




[sqlalchemy] Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
I just started using session.merge, and I noticed that on session.flush(),
the before_update mapper extension for the objects that have been merged
into the session are not called.

These are new instances, not previously persisted.

Is there something I need to do to trigger this, or eesss a bug?

Thx,
Rick




[sqlalchemy] Re: Session.merge vs. mapper extensions

2008-04-30 Thread Michael Bayer


On Apr 30, 2008, at 10:47 PM, Rick Morrison wrote:

 I just started using session.merge, and I noticed that on  
 session.flush(), the before_update mapper extension for the objects  
 that have been merged into the session are not called.

 These are new instances, not previously persisted.

so..before_insert() would be called in that case no ?





[sqlalchemy] Re: Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
right, sorry, before_insert


On Wed, Apr 30, 2008 at 11:18 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Apr 30, 2008, at 10:47 PM, Rick Morrison wrote:

  I just started using session.merge, and I noticed that on
  session.flush(), the before_update mapper extension for the objects
  that have been merged into the session are not called.
 
  These are new instances, not previously persisted.

 so..before_insert() would be called in that case no ?


 





[sqlalchemy] Re: Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
is not being called. as you call tell, I'm having trouble with the
keyboard tonight. first message was a typo,
