[sqlalchemy] Re: Object state change tracking

2008-02-08 Thread klaus

Well, we are using SQLAlchemy in a multi-threaded environment. It is
integrated into Zope by means of z3c.sqlalchemy. All sessions should
be cleared before they are discarded.

There are no exotic types involved.

We have no useful profile output; the information comes from the request
monitor. Calls to __cleanup show up in all sorts of unusual places (whenever
the garbage collector is activated) and take up considerable time.

Best regards
  Klaus


On 7 Feb., 16:00, Michael Bayer [EMAIL PROTECTED] wrote:
 On Feb 7, 2008, at 8:14 AM, klaus wrote:



  State tracking, the method __cleanup in sqlalchemy.orm.attributes in
  particular, is among the most time consuming tasks on our platform.
  The mutex lock seems to be a real bottleneck.

  Is there any way to work around this problem? Apparently, the weakrefs
  cannot be switched off (as in the session). Can you recommend
  something special to avoid in an application, something that triggers
  this resurrection?

 this is the first I'm hearing about that mutex being an issue (it's a
 straight Lock; mutexes like that are extremely fast if used primarily
 in just a single thread, which is the case here).  I would imagine that
 the checks inside the __resurrect method are what's actually taking up
 the time...but even that check is extremely fast *unless* you are
 using a mutable scalar type, such as a mapped Pickle column.  And if
 you're using a mapped Pickle column, you can set the PickleType to
 mutable=False to disable the deep checking in that case - this
 condition is the one thing that could specifically make the cleanup
 operation a significant operation versus an almost instantaneous one.
 So I'd want to see more specifically what's causing a slowdown here.
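
 For illustration, a minimal sketch of the mutable=False setting described
 above; the table and column names here are made up:

 from sqlalchemy import Table, Column, Integer, MetaData, PickleType

 metadata = MetaData()

 # with mutable=False, the pickled value is treated as immutable, so the
 # ORM skips the deep-copy/deep-compare mutation check on each
 # flush/cleanup pass
 documents = Table("documents", metadata,
     Column("id", Integer, primary_key=True),
     Column("payload", PickleType(mutable=False)))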

 To avoid the critical section altogether, calling session.close() or
 session.clear() on any session that is about to be discarded should
 prevent any cleanup handlers from hitting that section (and of course,
 not dropping references to dirty objects on the outside until they
 are flushed).  If it truly is just overall latency of __cleanup, in
 theory it's not needed when using a strongly-referenced identity map,
 so we could perhaps disable it in that case.
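
 As a sketch of that discard pattern (the engine URL and the work done
 with the session are placeholders):

 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker

 engine = create_engine("sqlite://")   # placeholder engine
 Session = sessionmaker(bind=engine)

 session = Session()
 try:
     pass  # ... query / flush work goes here ...
 finally:
     # an emptied session leaves nothing for the weakref cleanup
     # handlers to do, so discarding it never contends for the
     # attribute-tracking mutex
     session.clear()
     session.close()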

 I'd definitely need to see some profile output to determine more
 accurately what the cause of the slowdown is.



[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread svilen

On Friday 08 February 2008 14:26:04 maxi wrote:
a) let SQL do it:
  p1 = session.query(Person).filter_by(id=123).first()
  # see the .filter_by syntax - it takes keyword arguments
b) get all people, then plain Python:
  for p in people:
      if p.id == 123: break
  else:
      p = None




[sqlalchemy] Re: Polymorphic and inheritance with missing children

2008-02-08 Thread Michael Bayer


On Feb 7, 2008, at 9:28 PM, Richard Levasseur wrote:


 Ok, so I've been looking into getting it to work when the child record
 doesn't exist.  I haven't had much luck.

 I wrote a stub mapper extension for translate_row and
 populate_instance, but it doesn't seem to be called when it goes to
 fetch the information for the child row.
 mapper(Parent, parents, polymorphic.)
 mapper(Child, children, extension=MyExtension(def translate_row, def
 populate_instance))

 query(Parent)
 my translate_row called
 my populate_instance called
 mapper.py:_get_poly_select_loader, the first inline post_execute is
 called to fetch the data:
  row = selectcontext.session.connection(self).execute(statement,
 params).fetchone()
  self.populate_instance(selectcontext, instance, row, isnew=False,
 instancekey=identitykey, ispostselect=True)

 Shouldn't translate_row be called before then?  In the working case
 (child exists), neither of my extension methods is called either.  It
 makes sense that they would only be called once per result, but in
 this case a single result is actually 2 records fetched separately.

can you supply a more specific example of how this schema works?
also, if a single result is a 2-record fetch, that's going to be
extremely difficult to do - I'd want to look into perhaps supplying an
alternate mapping for the schema.




[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread maxi

Thanks Simon,
I was doing it that way.

Now, is it advisable to implement my own custom collection? (One that
implements a locate method, for example.)
I was reading "Custom Collection Implementations" in the SQLAlchemy docs,
but I'm not very clear on how to implement this.

Any help with this?

Regards.






[sqlalchemy] Re: building mappers for an existing database

2008-02-08 Thread svilen

On Friday 08 February 2008 16:09:28 Chris Withers wrote:
 Hi All,

 Somewhat similar to my last question: how do you go about building
 mappers for an existing database schema?

 What happens if you don't get it quite right? :-S

 cheers,

 Chris

search the group for things related to migrate (I call it migrene :);
there are 2 approaches:
 - make the db match the py-model
 - make the model match the db (see the reflection sketch below)
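
For the second approach, a minimal reflection sketch (the database URL
and table name are placeholders):

from sqlalchemy import MetaData, Table, create_engine
from sqlalchemy.orm import mapper

engine = create_engine("sqlite:///existing.db")   # placeholder URL
metadata = MetaData()

# autoload=True reflects the columns from the live database, so the
# Python side follows whatever the schema actually contains
people = Table("people", metadata, autoload=True, autoload_with=engine)

class Person(object):
    pass

mapper(Person, people)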




[sqlalchemy] Re: schema changes

2008-02-08 Thread Paul Johnston
Hi Chris,

What happens when the schema expected by the mappers doesn't match up to
 the schema in the database?


If the SQLAlchemy table definitions don't match the database, you will
usually get SQL errors when you try to use them.

The TurboGears admin tool can tell you the differences between the database
and the SA table definitions. I use this quite a lot, to check the database
is ok. To make changes, I first change the SA definitions, run tg-admin sql
status, make the changes to the DB by hand, and run it again to check I did
it right. This works for me; it could be improved, but that isn't greatly
urgent.

For getting started with an existing database, try this:
http://code.google.com/p/sqlautocode/

Paul




[sqlalchemy] schema changes

2008-02-08 Thread Chris Withers

Hi All,

What happens when the schema expected by the mappers doesn't match up to 
the schema in the database?

I'm particularly thinking about what happens when you want to upgrade a 
piece of software that uses SA for its data layer, and you want to 
change the schema...

How have people tackled these kinds of upgrades? And what happens, more 
generally, if the mappers don't match the database's schema?

cheers,

Chris

-- 
Simplistix - Content Management, Zope & Python Consulting
- http://www.simplistix.co.uk




[sqlalchemy] Re: InvalidRequestError

2008-02-08 Thread Paul Johnston

Hi,

 I do a session.clear() every time, after session.save_or_update([obj]).
 Can this be the problem?

That is almost certainly your problem. Try removing the session.clear()
and see if it then works.

 When is it advisable to do a session.clear()?

When you're done with a batch of processing. In web applications this 
fits nicely at the end of a request.
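
In code, the batch pattern looks roughly like this (a sketch; Session is
a sessionmaker factory, and load_batches/process are hypothetical
application functions):

session = Session()
for batch in load_batches():
    for obj in batch:
        process(obj)
        session.save_or_update(obj)
    session.flush()
    # done with this batch of processing - start the next one clean
    session.clear()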

Paul




[sqlalchemy] Newbie question: sAsync maintained? In use?

2008-02-08 Thread Dwig

I'm just getting started with SA, and I want to use it in a
back-end server with an existing MSSQL database.  I'm considering
using Twisted as the basis for the server, and I've looked a bit into
using sAsync.  I've had trouble getting the "Twisted XML-RPC Server
Example" to run, and that led me to wonder about the status of the
package.

Is anyone here using sAsync, or does anyone know enough to comment on
its viability for the kind of thing I've described?

Thanks for any good words,

--
Don Dwiggins
Advanced Publishing Technology




[sqlalchemy] Re: Polymorphic and inheritance with missing children

2008-02-08 Thread Richard Levasseur
On Feb 8, 2008 7:04 AM, Michael Bayer [EMAIL PROTECTED] wrote:



 On Feb 7, 2008, at 9:28 PM, Richard Levasseur wrote:

 
  Ok, so I've been looking into getting it to work when the child record
  doesn't exist.  I haven't had much luck.
 
  I wrote a stub mapper extension for translate_row and
  populate_instance, but it doesn't seem to be called when it goes to
  fetch the information for the child row.
  mapper(Parent, parents, polymorphic.)
  mapper(Child, children, extension=MyExtension(def translate_row, def
  populate_instance))
 
  query(Parent)
  my translate_row called
  my populate_instance called
  mapper.py:_get_poly_select_loader, the first inline post_execute is
  called to fetch the data:
   row = selectcontext.session.connection(self).execute(statement,
  params).fetchone()
   self.populate_instance(selectcontext, instance, row, isnew=False,
  instancekey=identitykey, ispostselect=True)
 
  Shouldn't translate_row be called before then?  In the working case
  (child exists), neither of my extension methods is called either.  It
  makes sense that they would only be called once per result, but in
  this case a single result is actually 2 records fetched separately.

 can you supply a more specific example of how this schema works?
 also, if a single result is a 2-record fetch, that's going to be
 extremely difficult to do - I'd want to look into perhaps supplying an
 alternate mapping for the schema.


Ok, so I'm talking about 2 slightly different things at once:  One is that
sqlalchemy doesn't call the mapper extensions when loading the data from the
inherited tables.  The second is that it can't load instances unless a
record in the inherited table exists.

One:
By "2 record fetch" I meant that a single instance of Child requires one record
to be fetched from `parents` and another record to be fetched from `kids`,
and sqlalchemy is correctly doing this with the inherited tables.  What I'm
pointing out is that, when it fetches the record from the child table, it
doesn't run the mapper extensions (unless I'm doing it wrong).
1) session.query(Parent).get(id)
2) select ... from parents ...
3) translate_row and translate row extensions
4) populate instance and populate instance extensions
5) I'm polymorphic and am a Child, so query kids: select ... from kids ...
6) populate instance (with the data from kids), but extensions aren't run.
(I don't see any calls to translate_row?)
7) return instance

This is my mapper extension:
class CustomMapper(sqlalchemy.orm.MapperExtension):
  def translate_row(self, context, row):
    print "translate row called"
    return row
  def populate_instance(self, mapper, context, row, instance, **flags):
    print "populate instance called"
    return sqlalchemy.orm.EXT_CONTINUE

mapper(Child, kids_table, inherits=Parent, extension=CustomMapper())

***

Two:
I've attached an example, essentially this:

Table("parents", Column("id"), Column("type"))
Table("kids", Column("child_id"), Column("parent_id", FK("parents.id")))
# not inserting a child record:
engine.execute("insert into parents (id, type) values (1, 'child')")
session.query(Parent).get(1)  # raises "NoneType is not iterable"; the row is None

Note that no record exists in `kids`; this is intentional. It isn't always
guaranteed to be there because of the way the system behaves (the workflow
engine may change the type outside the application).

It works in our current app (custom PHP) because when we save the record, we
check if the `child_id` is null; if it is, we insert, otherwise we update.
During loading, if the `child_id` is null, we just set everything else to
null.


from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, sessionmaker

__metaclass__ = type

metadata = MetaData()
engine = create_engine("sqlite:///:memory:", echo=True)
Session = sessionmaker(bind=engine, autoflush=False, transactional=True)

parents = Table("parents", metadata,
    Column("id", Integer, primary_key=True),
    Column("type", String(100)))

kids = Table("kids", metadata,
    Column("pid", Integer, ForeignKey("parents.id")),
    Column("cid", Integer, primary_key=True),
    )

metadata.create_all(engine)

class Parent:
  def __repr__(self):
    return "%s(%s, %s)" % (self.__class__.__name__, self.id, self.type)

class Child(Parent):
  pass

mapper(Parent, parents, polymorphic_on=parents.c.type,
       polymorphic_identity="parent")
mapper(Child, kids, inherits=Parent, polymorphic_identity="child")

# Insert a Child, 

[sqlalchemy] how to roll back a delete from the session.

2008-02-08 Thread Mike Bernson

I have a case where a delete was requested, and on commit of the transaction
the delete fails with an integrity error: a foreign key constraint will not
let the record be deleted. Now I have a session which tries to delete this
record any time I flush. I have rolled back the transaction and now want to
stop the session from deleting the record.

How do I stop the record from being marked as needing to be deleted?




[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of maxi
 Sent: 08 February 2008 14:30
 To: sqlalchemy
 Subject: [sqlalchemy] Re: Search in object list by field value
 
 
 Thanks Simon,
 I was doing it that way.
 
 Now, is it advisable to implement my own custom collection? (One that
 implements a locate method, for example.)
 I was reading "Custom Collection Implementations" in the SQLAlchemy docs,
 but I'm not very clear on how to implement this.
 
 Any help with this?
 
 Regards.
 

It really depends where you want your 'locate' method to appear. The
custom collection classes mentioned in the docs are used for
relationships between mapped classes.

For example, if Person has a one-to-many relationship with Address, via
a property 'addresses', then by default that 'addresses' property is a
list. By creating a custom collection class with your 'locate' method,
you could write code like this:

person = session.query(Person).first()

address = person.addresses.locate('id', 12)

If that is really what you are trying to do, then this collection class
would probably work (untested):



class searchable_list(list):
def locate(self, attrname, value):
for obj in self:
if getattr(obj, attrname) == value:
return obj



Then when you create your 'addresses' relation, you pass
collection_class=searchable_list. However, this doesn't seem massively
useful to me, so perhaps I haven't understood what you're trying to do.
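
For illustration, the relation setup would look roughly like this (the
class and table names are assumed, not from your message):

from sqlalchemy.orm import mapper, relation

mapper(Address, addresses_table)
mapper(Person, people_table, properties={
    # 'addresses' still behaves like a list, but also exposes locate()
    'addresses': relation(Address, collection_class=searchable_list),
})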

Simon




[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of maxi
 Sent: 08 February 2008 13:47
 To: sqlalchemy
 Subject: [sqlalchemy] Re: Search in object list by field value
 
 
 On 8 feb, 09:58, svilen [EMAIL PROTECTED] wrote:
  On Friday 08 February 2008 14:26:04 maxi wrote:
  a) let SQL do it:
    p1 = session.query(Person).filter_by(id=123).first()
  # see the .filter_by syntax
 
 Yes, of course, I know that. But my question was about how to search in
 an object list.
 
  b) get all people, then plain Python:
   for p in people:
       if p.id == 123: break
   else: p = None
 
 Something like this, but through some built-in function.
 
 Thanks for your reply.
 Regards.

I don't think there's anything built in, but it is a very easy
stand-alone function to write:



def get_first_match(objects, attrname, value):
    """
    Return the first object where the named attribute
    has the given value.
    Returns None if no match is found.
    """
    for obj in objects:
        if getattr(obj, attrname) == value:
            return obj


# Assuming 'people' is a list of Person objects
person_one = get_first_match(people, 'id', 1)



Hope that helps,

Simon




[sqlalchemy] Re: Polymorphic and inheritance with missing children

2008-02-08 Thread Richard Levasseur
On Feb 8, 2008 11:27 AM, Michael Bayer [EMAIL PROTECTED] wrote:


 On Feb 8, 2008, at 2:01 PM, Richard Levasseur wrote:

 Ok, so I'm talking about 2 slightly different things at once:  One is that
 sqlalchemy doesn't call the mapper extensions when loading the data from the
 inherited tables.  The second is that it can't load instances unless a
 record in the inherited table exists.

 One:
 By "2 record fetch" I meant that a single instance of Child requires one
 record to be fetched from `parents` and another record to be fetched from
 `kids`, and sqlalchemy is correctly doing this with the inherited tables.
 What I'm pointing out is that, when it fetches the record from the child
 table, it doesn't run the mapper extensions (unless I'm doing it wrong).
 1) session.query(Parent).get(id)
 2) select ... from parents ...
 3) translate_row and translate row extensions
 4) populate instance and populate instance extensions
 5) I'm polymorphic and am a Child, so query kids: select ... from kids ...
 6) populate instance (with the data from kids), but extensions aren't
 run.  (I don't see any calls to translate_row?)
 7) return instance


 that's actually correct - the second query to the child table is not a
 top-level ORM query, it's internal to the get() call, so we currently don't
 have any extension hooks there (all of that "call a second query"
 functionality was just introduced in 0.4.)


 However, the whole _post_instance step where that happens is optional.
  In your case, I really think you want to be using a polymorphic join or
 union so that the mapper can load parents and children from one query.  You
 do this by specifying the select_table argument to the mapper(), and
 usually it's an outerjoin among all the involved tables or in some cases can
 be a big UNION of all the separate subtables.  But if your select_table only
 includes the base table, that will work too, it just won't populate the
 secondary attributes unless you did something else to make it happen (like
 in your extension).

 If you do that, the whole _post_instance() thing won't happen at all; your
 translate_row() will get the only row dealt with, as will your
 populate_instance().  So you *can* in theory issue a second SQL query within
 your own populate_instance() call that simulates what _post_instance() does,
 if you wanted to.  Or I would think you could just load everything in one
 shot here using a series of outerjoins among all the tables; there's an
 example in the docs and in the examples/polymorphic folder illustrating how
 to load in that way.


 Two:
 I've attached an example, essentially this:

 Table("parents", Column("id"), Column("type"))
 Table("kids", Column("child_id"), Column("parent_id", FK("parents.id")))
 # not inserting a child record:
 engine.execute("insert into parents (id, type) values (1, 'child')")
 session.query(Parent).get(1)  # raises "NoneType is not iterable"; the row is None

 Note that no record exists in `kids`; this is intentional. It isn't always
 guaranteed to be there because of the way the system behaves (the workflow
 engine may change the type outside the application).

 It works in our current app (custom PHP) because when we save the record,
 we check if the `child_id` is null; if it is, we insert, otherwise we
 update.  During loading, if the `child_id` is null, we just set everything
 else to null.



 yup, works if you do it like this:

 mapper(Parent, parents, polymorphic_on=parents.c.type,
   polymorphic_identity="parent", select_table=parents.outerjoin(kids),
   primary_key=[parents.c.id])
 mapper(Child, kids, inherits=Parent, polymorphic_identity="child")

 note the primary_key=[parents.c.id] there, which is to override the fact
 that parents.outerjoin(kids) in fact has a primary key of [parents.c.id,
 kids.c.cid].



Hm, we could do that, but that means we have to outerjoin to ~15 other
tables.  The primary table has 200+ columns, each child table has
10-20 columns, there are over a million records, and our base filter
criteria can be a bit complex.  It's indexed and such, but the query needs to
run in < 0.7 seconds (about how fast it goes on dev).  I have concerns it
won't perform as well (we tried that and it really killed performance, but
that was prior to a lot of performance improvements we've made, so it might
be feasible again).

I'll see how it goes. Thanks :)

What we do now is figure out which tables to join with based upon the user
and other conditions.  We end up joining to 2 or 3 tables instead of all
of them.  I'm not sure how to do this transparently (so we can still do
session.query(Parent) and add arbitrary pieces) in sqlalchemy, though.  The
table it selects from is defined at mapper time, and if I understand
correctly, we shouldn't be re-mapping classes on the fly.

What I'm currently thinking (if joining across them all is too slow) is
writing a mapper extension that does the necessary logic so that the base
session.query(Parent) is actually 

[sqlalchemy] Re: how to roll back a delete from the session.

2008-02-08 Thread Michael Bayer


On Feb 8, 2008, at 1:06 PM, Mike Bernson wrote:


 I have a case where a delete was requested, and on commit of the
 transaction the delete fails with an integrity error: a foreign key
 constraint will not let the record be deleted. Now I have a session
 which tries to delete this record any time I flush. I have rolled back
 the transaction and now want to stop the session from deleting the
 record.

 How do I stop the record from being marked as needing to be deleted?

ooh, we don't have a specific method for that atm.  I think you'd have
to expunge() it, then put it back in using save_or_update().  You can
also probably do the (non-public) equivalent of what an undelete()
method would do, which is del session.uow.deleted[obj._state].  We
can add an undelete() which does that too.
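
In code, the suggestion is roughly this (a sketch, assuming obj is the
instance whose DELETE failed and the session is transactional):

session.rollback()            # back out the failed transaction
session.expunge(obj)          # removes obj, and its pending delete, from the session
session.save_or_update(obj)   # re-attach it as an ordinary persistent object
session.flush()               # no DELETE is emitted for obj this time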





[sqlalchemy] Re: how to roll back a delete from the session.

2008-02-08 Thread Mike Bernson

Michael Bayer wrote:
 On Feb 8, 2008, at 1:06 PM, Mike Bernson wrote:

  I have a case where a delete was requested, and on commit of the
  transaction the delete fails with an integrity error: a foreign key
  constraint will not let the record be deleted. Now I have a session
  which tries to delete this record any time I flush. I have rolled back
  the transaction and now want to stop the session from deleting the
  record.

  How do I stop the record from being marked as needing to be deleted?

 ooh, we don't have a specific method for that atm.  I think you'd have
 to expunge() it, then put it back in using save_or_update().  You can
 also probably do the (non-public) equivalent of what an undelete()
 method would do, which is del session.uow.deleted[obj._state].  We
 can add an undelete() which does that too.

The del session.uow.deleted[obj._state] did not work; I get an exception.
But the expunge(), save_or_update() combination did the trick for me.

Having undelete() as a public method would be great. I needed to do
this to clean up the session so others can continue to work with it
and commit when all the work is done. This lets me back out the
bad delete and keep working.

