[sqlalchemy] SQLAlchemy training sessions

2012-09-12 Thread erikj
Hello,

For the second time this year, we are organizing a SQLAlchemy
training day.  So if you or your colleagues need to get up and 
running with SQLAlchemy, this is your chance.

- October 27: Leipzig, Germany (this is the weekend right before PyConDE)
- November 15: Antwerp, Belgium

Practical details can be found here:

http://www.conceptive.be/training.html
http://www.python-academy.com

Or you can always contact me if you want more information.

Best regards,

Erik




[sqlalchemy] safe usage within before_update/before_insert events

2012-09-12 Thread Kent
You've mentioned multiple times (to me and others) that some operations, such
as reaching across relationships or loading relationships from within a
before_update Mapper event, are not safe. (Both hooks are sketched after the
questions below, for reference.)


   - I understand this is safe from within the Session event before_flush(),
     correct?
   - We mentioned at some point adding a Mapper-level before_flush() event.
     To avoid duplicate work, has that been done?
   - I presume that other queries, particularly those with populate_existing(),
     are also unsafe from within before_update? Are such queries safe from
     before_flush()?
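
For reference, a minimal sketch of how the two hooks in question are
registered (SomeClass is a hypothetical mapped class; the signatures are the
standard ones from the event API):

from sqlalchemy import event
from sqlalchemy.orm import Session

# Mapper-level hook: fires once per object, in the middle of the flush.
def on_before_update(mapper, connection, target):
    pass  # only local attribute changes on `target` are reliable here

# Session-level hook: fires once, before the flush plan is built.
def on_before_flush(session, flush_context, instances):
    pass  # the session is fully usable here

event.listen(SomeClass, 'before_update', on_before_update)
event.listen(Session, 'before_flush', on_before_flush)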





Re: [sqlalchemy] Re: safe usage within before_update/before_insert events

2012-09-12 Thread Michael Bayer

On Sep 12, 2012, at 2:33 PM, Kent wrote:

> Posted before I was done, sorry...
>
> Is it safe to do all these things from the Mapper event after_update?

Relationships that are being persisted in that flush are not necessarily done
being persisted, so loading a relationship is not a great idea in there; I'm
just not sure if the results are predictable.


> Is it harmful to invoke session.flush() from within after_update?

It's not possible, because you are already inside the flush.

> On Wednesday, September 12, 2012 2:30:45 PM UTC-4, Kent wrote:
> You've mentioned multiple times (to me and others) that some operations,
> such as reaching across relationships or loading relationships from within
> a before_update Mapper event, are not safe.
>
> I understand this is safe from within the Session event before_flush(),
> correct?

The session is fully usable, except for flush() of course, inside of
before_flush().
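
For instance, a minimal sketch of the kind of thing that should be fine there
(Account, User, and AuditEntry are hypothetical mapped classes):

from sqlalchemy import event
from sqlalchemy.orm import Session

def audit_changes(session, flush_context, instances):
    # Queries are fine here: no flush plan has been built yet.
    for obj in session.dirty:
        if isinstance(obj, Account):
            owner = session.query(User).get(obj.owner_id)
            # objects added here become part of the same flush
            session.add(AuditEntry(user=owner, account=obj))

event.listen(Session, 'before_flush', audit_changes)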


> We mentioned at some point adding a Mapper-level before_flush() event.
> To avoid duplicate work, has that been done?

This feature would just be a filter function, and the hard part here is coming
up with a clean API that works for all cases and isn't terribly non-performant.

> I presume that other queries, particularly those with populate_existing(),
> are also unsafe from within before_update? Are such queries safe from
> before_flush()?

I don't know that populate_existing() is unsafe in itself; just consider that
when you're in before_update, the session has worked out the full list of every
single object it's going to save or delete, as well as all the inter-object
relationships that are going to be synchronized. It is now working through
that list, and halfway through, a query with populate_existing() blows through
and resets any number of those changes from having taken place, except that
some portion of them have already been flushed to the DB, some portion of them
have not, and I really can't say what the effect would be. It certainly is
extremely difficult to be deterministic, as any change in the structure of your
mappings or data can change what the flush plan will be, meaning the effects
of your flush-plan-invalidating populate_existing() can't really be determined.

Another aspect of flush is that it updates the attributes of in-memory objects
that it knows about. If your query(), even without populate_existing(), loads
some new collections in, and those collections are then impacted by something
like ON DELETE CASCADE, the flush plan might not know to mark those objects as
deleted, depending on when they got loaded. In this case as well, I don't
offhand have some surprise failure conditions to show you, but the fact that
I've been compelled to put all those warnings in the docs is almost certainly
because people came to me with bugs where they were expecting the session to
act completely normally inside of before_update() or after_update(), which it
plainly cannot.

As I'm sure I've said before, the flush is where the session is just taking
everything that's pending and flushing it out. You should not have to do any
ORM queries in the middle of the flush plan. If you have logic that needs to
run through all the pending objects to determine something else that might have
to happen, you should probably iterate through those objects yourself (see the
sketch below), rather than relying upon the convenience of before_update() to
provide this iteration for you.
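
A sketch of that suggestion, with hypothetical names (Order and its invoice
attribute): walk the pending objects in before_flush() and stage the dependent
changes there, instead of reacting per row inside the flush:

from itertools import chain

from sqlalchemy import event
from sqlalchemy.orm import Session

def derive_dependent_changes(session, flush_context, instances):
    # iterate the pending objects yourself, before any flush plan exists
    for obj in chain(session.new, session.dirty):
        if isinstance(obj, Order):
            obj.invoice.stale = True  # becomes part of this same flush

event.listen(Session, 'before_flush', derive_dependent_changes)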





Re: [sqlalchemy] Re: safe usage within before_update/before_insert events

2012-09-12 Thread Michael Bayer

On Sep 12, 2012, at 5:12 PM, Kent wrote:

> I should step back and present my current issue:
>
> When updates to instances of a certain class (table) are to be issued, there
> are sometimes other related updates that need to be made also. But I need to
> be able to flush() these changes in between processing the updating
> instances. I can't do this inside before_ (or after_) _update (or _flush),
> because I need to issue a flush(), but will already be within a flush().

There is a flag I don't like to encourage, batch=False on mapper(), which
causes the mapper's "save everything of class X" step to run just one object
at a time, emitting the before/after_insert/update events fully for that one
object before going on to the next. It is a much more inefficient system, but
we need it for the "nested sets" example, since nested sets requires that a
giant UPDATE across the entire table occur after every row inserted, changing
the steps to be taken for the next row. Not sure if this is relevant to your
situation.
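
For the record, the flag is passed to mapper(); with declarative it goes
through __mapper_args__ (Node here is a hypothetical class):

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import Column
from sqlalchemy.types import Integer

Base = declarative_base()

class Node(Base):
    __tablename__ = 'nodes'

    id = Column(Integer, primary_key=True)

    # persist one object at a time, firing each object's
    # before/after_insert/update events before moving on to the next
    __mapper_args__ = {'batch': False}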

Usually, the job of the flush is: take this whole graph of pending changes and
turn it into SQL. If you need object Y to change when object X does, the idea
is that all those changes can be done pre-flush, or within before_flush(). You
try not to consider the changes to objects in terms of SQL; the SQL is
something that will happen later, when you have the whole thing set up. The
*only* time you truly need the flushes to go through in order to determine
next steps is when you need to fire off defaults, triggers, or something like
that, which then produce new state you need to get at.

 
> It's almost like I need an event that happens truly after an update/flush;
> after I'm out of the flush.

Yeah, there's an exceedingly slim series of scenarios where multiple flushes
are needed. The scenario of deleting a row and replacing it with another
inserted row, where both rows share a UNIQUE value, comes to mind. I gave
someone (was it you?) an example of how to work around that.

 
> I considered recording the fact that I need to make updates once flush() is
> finished, and then monkey-patching the session.flush() function so I can
> iterate over the recently updated items after I'm outside the flush(). I
> don't like that solution one bit. Do you have any ideas how you might
> approach this?

I would:

a. establish that there is absolutely no other way to do it other than two
flushes; then

b. if it is only *two* flushes, I'd just do some stuff inside of
after_flush_postexec() and ignore the second flush; it will happen on its own
later (see the sketch after this list);

c. if I really do need two flushes, and really need all the flush things to
happen right now without waiting: that's never happened to me. If I really
knew of that situation, a new event could be proposed, though I could just as
well use a Session subclass (no monkeypatching needed; also sketched below).
But I cannot add a new event without a rock-solid, definitely
no-other-way-to-do-it use case, which would then become the example placed in
the documentation for that event as the rationale for when to use it.
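
To illustrate (b) and (c), two outlines; Order and its flags are hypothetical:

from sqlalchemy import event
from sqlalchemy.orm import Session

# (b) stage the follow-up changes in after_flush_postexec(); they get
# picked up by whatever flush or commit happens next.
def stage_followups(session, flush_context):
    for obj in session.identity_map.values():
        if isinstance(obj, Order) and obj.needs_followup:
            obj.followup_done = True  # flushed later, on its own

event.listen(Session, 'after_flush_postexec', stage_followups)

# (c) a Session subclass, instead of monkeypatching flush():
class PostFlushSession(Session):
    def flush(self, objects=None):
        Session.flush(self, objects)
        # ... post-flush work goes here, now outside the flush proper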





[sqlalchemy] autocommit=False, autoflush on begin, after_flush event, unintended side effect?

2012-09-12 Thread Yap Sok Ann
This is semi-related to the latest post from Kent. I just noticed that I have
been abusing the "autoflush on begin" behavior (implemented by the
_take_snapshot() method in orm/session.py) to create additional instances
within the after_flush Session event. Here's some sample code to illustrate
that:


from sqlalchemy import event
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, scoped_session, sessionmaker
from sqlalchemy.schema import Column, ForeignKey
from sqlalchemy.types import Integer, Text

Base = declarative_base()

engine = create_engine('postgresql://postgres@localhost/test')

Session = scoped_session(sessionmaker(
    autoflush=False,
    autocommit=False,
))

Session.configure(bind=engine)

class Ticket(Base):
    __tablename__ = 'tickets'

    id = Column(Integer, primary_key=True)
    description = Column(Text, nullable=False)

class Notification(Base):
    __tablename__ = 'notifications'

    id = Column(Integer, primary_key=True)
    ticket_id = Column(Integer, ForeignKey('tickets.id'), nullable=False)
    ticket = relationship('Ticket', backref='notifications')
    content = Column(Text, nullable=False)

def send_notification(session, flush_context):
    for instance in session.new:
        if isinstance(instance, Ticket):
            Notification(
                ticket=instance,
                content='Ticket %d created' % instance.id,
            )
            # No flush or commit!

event.listen(Session, 'after_flush', send_notification)

Base.metadata.create_all(engine)

ticket = Ticket(description='test')
Session.add(ticket)
Session.commit()

query = Session.query(Notification).filter_by(
    content='Ticket %d created' % ticket.id
)
assert query.count()


Although the code only does Session.commit() once, it actually executes 2 
INSERT statements in 2 separate transactions. I am pretty sure this is not 
an intended use case, right?
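
For comparison, here is a sketch of what I guess the supported pattern would
be: create the Notification inside before_flush() and add it explicitly, so
both INSERTs land in the same flush and the same transaction. Note that
instance.id is not assigned yet at that point, so the message text has to come
from some other attribute:

def send_notification(session, flush_context, instances):
    for instance in list(session.new):
        if isinstance(instance, Ticket):
            # instance.id is still None here (no INSERT yet), so use
            # another attribute for the message text.
            session.add(Notification(
                ticket=instance,
                content='Ticket created: %s' % instance.description,
            ))

event.listen(Session, 'before_flush', send_notification)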
