Re: [sqlalchemy] Belonging to a session vs being in the session

2014-07-25 Thread George Sakkis


On Thursday, 24 July 2014 21:56:11 UTC+3, Michael Bayer wrote:


 expunge() is a bug:


 https://bitbucket.org/zzzeek/sqlalchemy/issue/3139/expunge-after-delete-fails

 it’s supposed to raise (hence can’t fix this til 1.0).

 so ignore expunge.  if you were to emit session.rollback(), you’d find the 
 DELETE is rolled back and you’d again get True, True.   The Session still 
 knows about this object within the transaction’s “ready to roll back” 
 state.   The object isn’t permanently not part of the Session until the 
 commit():

 session.commit()
 log("commit")


 output:

 commit: (False, False)


Thanks, good to know that expunge() is a bug. As for my other questions:
- is session.commit() the only way to dissociate the object from the 
session?
- I guess a (False, True) combination is never possible, correct?
 

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Belonging to a session vs being in the session

2014-07-25 Thread Michael Bayer

On Jul 25, 2014, at 4:43 AM, George Sakkis george.sak...@gmail.com wrote:

 
 
 On Thursday, 24 July 2014 21:56:11 UTC+3, Michael Bayer wrote:
 
 expunge() is a bug:
 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3139/expunge-after-delete-fails
 
 it's supposed to raise (hence can't fix this til 1.0).
 
 so ignore expunge.  if you were to emit session.rollback(), you'd find the 
 DELETE is rolled back and you'd again get True, True.   The Session still 
 knows about this object within the transaction's "ready to roll back" state. 
 The object isn't permanently not part of the Session until the commit():
 
 session.commit()
 log("commit")
 
 
 output:
 
 commit: (False, False)
 
 
 Thanks, good to know that expunge() is a bug. As for my other questions:
 - is session.commit() the only way to dissociate the object from the session?
 - I guess a (False, True) combination is not ever possible, correct?

you use expunge(), just not after a delete :)   (after a delete, you should be 
forgetting about that object totally).  And no, the False/True combination is 
not possible.




[sqlalchemy] Re: Multiple tables and foreignkey constraints

2014-07-25 Thread Ken Roberts
This is probably violating some normal form, but after playing with it 
(Thanks Michael and Jonathan), this seems to work for what I'm using. 
Suggestions welcome on improving it.

#!/usr/bin/env python

from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, \
    create_engine
from sqlalchemy.orm import joinedload, relationship, \
    scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.pool import NullPool

engine = create_engine(u'sqlite:///./test.sql', poolclass=NullPool)
metadata = MetaData(bind=engine)
session = scoped_session(sessionmaker(autoflush=True,
                                      autocommit=False,
                                      bind=engine))
Base = declarative_base(metadata=metadata)


# Table base class to assign table name based on class name and
# add id column
class CommonBase(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()
    id = Column(Integer, primary_key=True)


# Table class to keep track of manufacturers
class Manufacturer(CommonBase, Base):
    def __repr__(self):
        return "Manufacturer(name='%s')" % self.name

    name = Column(String(30))
    models = relationship('Model',
                          order_by='Model.id',
                          backref='manufacturer',
                          cascade='all, delete-orphan',
                          primaryjoin='Manufacturer.id==Model.manufacturer_id',
                          lazy='joined')


# Table class to keep track of models related to Manufacturer.
class Model(CommonBase, Base):
    def __repr__(self):
        return "Model(name='%s')" % self.name

    manufacturer_id = Column(Integer, ForeignKey('manufacturer.id'))
    name = Column(String(20))
    sources = relationship('Source',
                           order_by='Source.id',
                           backref='model',
                           cascade='all, delete-orphan',
                           primaryjoin='Model.id==Source.model_id',
                           lazy='joined')


# Table class to keep track of sources related to Model.
class Source(CommonBase, Base):
    def __repr__(self):
        return "Source(pjlink_name='%s', pjlink_code='%s', text='%s')" % \
            (self.pjlink_name, self.pjlink_code, self.text)

    model_id = Column(Integer, ForeignKey('model.id'))
    pjlink_name = Column(String(15))
    pjlink_code = Column(String(2))
    text = Column(String(30))


# Table class to keep track of installed projectors.
# Use a separate query to get manufacturer/model/sources rather than
# using a relationship (one-time at program start so minimal impact).
class Installed(CommonBase, Base):
    name = Column(String(20))
    location = Column(String(30))
    ip = Column(String(50))


# Class for projector instances.
class Projector(object):
    def __init__(self, name=None, location=None, ip=None):
        self.name = name
        self.location = location
        self.ip = ip
        # Following variables will be filled in after projector connected.
        self.make = None
        self.model = None
        # Following variable will be filled in after projector connected
        # and db queried.
        # List of pjlink dictionary items: [{name='', code='', text=''}, ...]
        self.sources = None


metadata.create_all()

p = session.query(Manufacturer, Model).\
    options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
    filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
    all()

if len(p) == 0:
    m = Manufacturer(name='eiki')
    m.models = [Model(name='lc/xl200')]
    m.models[0].sources = [
        Source(pjlink_name='RGB1', pjlink_code='11', text='HDMI RGB'),
        Source(pjlink_name='RGB2', pjlink_code='12', text='VGA RGB')
    ]
    session.add(m)
    session.commit()

    p = session.query(Manufacturer, Model).\
        options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
        filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
        all()

m = p[0][0]

print m.name
print m.models[0].name
for i in m.models[0].sources:
    print "PJLink name: %s  code: %s  Text: %s" % \
        (i.pjlink_name, i.pjlink_code, i.text)





[sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Paul Molodowitch
Hi all - so I'm thinking of disabling the expire_on_commit property for my
default sessionmaker object, and I was wondering what the potential issues
with this were.  Is it simply that the next access of the data on it could
be using out-of-date information?  Don't objects potentially have this
problem anyway, in the sense that if they are accessed TWICE after a
commit, the second access will use the data cached from the first, and
could again be out of date?

To give some background - we're in the middle of converting an existing
codebase to use sqlalchemy, and there are a number of classes that act both
as database wrappers, AND data structures.  That is, when first
constructed, they are populated with data from the database; but from then
on out, they just keep the cached data.  So they would behave similarly to
ORM-mapped objects, if expire_on_commit is False.  The thinking here is
that for most of these classes, the data changes fairly infrequently, and
it's not catastrophic if it's somewhat out of date. Also we don't want to
keep hitting the database more than necessary...  and, finally, we might
need to have access to the cached data for a long time (ie, as long as the
user has a ui window open).
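For reference, the difference the flag makes can be sketched with a toy model (the `User` model is illustrative; modern SQLAlchemy 1.4+ API, `expire_on_commit` is a documented `sessionmaker` parameter):

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

SessionDefault = sessionmaker(bind=engine)  # expire_on_commit=True is the default
SessionNoExpire = sessionmaker(bind=engine, expire_on_commit=False)

s1 = SessionDefault()
u1 = User(name='a')
s1.add(u1)
s1.commit()
# Default: attributes are marked stale; the next access emits a new SELECT.
assert inspect(u1).expired

s2 = SessionNoExpire()
u2 = User(name='b')
s2.add(u2)
s2.commit()
# expire_on_commit=False: loaded values stay cached (and may go out of date).
assert not inspect(u2).expired
```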



Re: [sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Michael Bayer

On Jul 25, 2014, at 6:22 PM, Paul Molodowitch elron...@gmail.com wrote:

 Hi all - so I'm thinking of disabling the expire_on_commit property for my 
 default sessionmaker object, and I was wondering what the potential issues 
 with this were.  Is it simply that the next access of the data on it could be 
 using out-of-date information?  

pretty much, yup


 Don't objects potentially have this problem anyway, in the sense that if they 
 are accessed TWICE after a commit, the second access will use the data cached 
 from the first, and could again be out of date?

only if you have a low transaction isolation level set up.   The Session tries 
to make a choice as to the most reasonable place that concurrent changes should 
be anticipated.  Transaction demarcations are the best place.  If you are 
expecting to code your app to specifically expect READ COMMITTED or READ 
UNCOMMITTED behavior where your transaction relies upon seeing a row change 
value from a concurrent transaction, that's a special use case, in which case 
you can use expire() for those objects that have this requirement.   The ORM 
Session can obviously not guess when such an expiration is to be detected 
otherwise.
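A sketch of that per-object expire() usage (the `Account` model and values are illustrative; modern SQLAlchemy 1.4+ API):

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = 'account'
    id = Column(Integer, primary_key=True)
    balance = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
acct = Account(balance=10)
session.add(acct)
session.commit()

acct.balance                    # load attributes within the new transaction
session.expire(acct)            # mark just this object's attributes stale
assert inspect(acct).expired
acct.balance                    # attribute access re-SELECTs the current row
assert not inspect(acct).expired
```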


 
 To give some background - we're in the middle of converting an existing 
 codebase to use sqlalchemy, and there are number of classes that act both as 
 database wrappers, AND data structures.  That is, when first constructed, 
 they are populated with data from the database; but from then on out, they 
 just keep the cached data.  So they would behave similarly to ORM-mapped 
 objects, if expire_on_commit is False.  The thinking here is that for most of 
 these classes, the data changes fairly infrequently, and it's not 
 catastrophic if it's somewhat out of date. Also we don't want to keep hitting 
 the database more than necessary...  and, finally, we might need to have 
 access to the cached data for a long time (ie, as long as the user has a ui 
 window open).
 



Re: [sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Claudio Freire
On Fri, Jul 25, 2014 at 7:55 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 Don't objects potentially have this problem anyway, in the sense that if
 they are accessed TWICE after a commit, the second access will use the data
 cached from the first, and could again be out of date?


 only if you have a low transaction isolation level set up.   The Session
 tries to make a choice as to the most reasonable place that concurrent
 changes should be anticipated.  Transaction demarcations are the best place.
 If you are expecting to code your app to specifically expect READ COMMITTED
 or READ UNCOMMITTED behavior where your transaction relies upon seeing a row
 change value from a concurrent transaction, that’s a special use case, in
 which case you can use expire() for those object that have this requirement.
 The ORM Session can obviously not guess when such an expiration is to be
 detected otherwise.


I don't see how transaction isolation levels relate to this.

The effect of disabling expire_on_commit is that of not seeing
subsequent commits. It would be a fictitious DO NOT READ COMMITTED
level. Having it on, somewhat caters to possible SERIALIZED settings,
where strict ordering is to be expected, since without serialized
transactions there's no way expiring helps correctness in any way.
None of those seem overly common to me, so I don't see how one can
ignore the serialization level in effect or possible concurrent
updates that are happening at the same time, with or without
expire_on_commit.

IMHO, expire_on_commit is something that really has no sensible
default. You pick your own, the library authors pick one default
because, well, why not?

For the record, I'm using expire_on_commit=off, because I also use
model instances outside the scope of their originating transaction.
I've had no problems with it, but I did have to be very careful with
the semantics and lifetime of those objects, and of expiring manually
anything I put on concurrently-accessed structures, lest someone
modify it before the session's scope is over and it gets (or tries to
be) committed to the DB.



[sqlalchemy] Strange error after SQLAlchemy update: 'list' object has no attribute '_all_columns'

2014-07-25 Thread ThiefMaster
Nicely formatted version: 
http://stackoverflow.com/questions/24930155/strange-error-after-sqlalchemy-update-list-object-has-no-attribute-all-colu

After updating to the most recent SQLAlchemy, one of my slightly more 
complex queries started to fail with an error inside SQLAlchemy.

Here's a simplified version of my query:

subquery = (select(['latitude'])
.select_from(func.unnest(func.array_agg(Room.latitude))
 .alias('latitude')).limit(1).as_scalar())
Room.query.with_entities(Room.building, 
subquery).group_by(Room.building).all()

When executing it I get an error deep inside SQLAlchemy:


  File .../sqlalchemy/sql/selectable.py, line 429, in columns
self._populate_column_collection()
  File .../sqlalchemy/sql/selectable.py, line 992, in 
_populate_column_collection
for col in self.element.columns._all_columns:
AttributeError: 'list' object has no attribute '_all_columns'

Inspecting it in a debugger shows me this:

>>> self.element
<sqlalchemy.sql.functions.Function at 0x7f72d4fcae50; unnest>
>>> str(self.element)
'unnest(array_agg(rooms.id))'
>>> self.element.columns
[<sqlalchemy.sql.functions.Function at 0x7f72d4fcae50; unnest>]

The problem started with SQLAlchemy 0.9.4; in 0.9.3 everything worked fine.

When running it in SQLAlchemy 0.9.3 the following query is executed (as 
expected):


SELECT rooms.building AS rooms_building,
   (SELECT latitude
FROM unnest(array_agg(rooms.latitude)) AS latitude
LIMIT 1) AS anon_1
FROM rooms
GROUP BY rooms.building

Am I doing something wrong here or is it a bug in SQLAlchemy?
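As an aside for later readers of the archive: modern SQLAlchemy (1.4+) added an explicit spelling for table-valued functions, `FunctionElement.table_valued()`, which avoids aliasing a bare function element. A sketch, with loose `column()` objects standing in for the `Room` attributes:

```python
from sqlalchemy import column, func, select
from sqlalchemy.dialects import postgresql

# Loose columns standing in for Room.building / Room.latitude.
building = column('building')
latitude = column('latitude')

# unnest(array_agg(...)) as a table-valued expression with one named column.
tv = func.unnest(func.array_agg(latitude)).table_valued('latitude')
subq = select(tv.c.latitude).limit(1).scalar_subquery()

stmt = select(building, subq).group_by(building)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```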






[sqlalchemy] table.update() and CTE

2014-07-25 Thread Jon Nelson
I seem to be having some trouble generating an update statement like this:

with baz AS (
  select
name,
coalesce(
exists (
  select 1 from bar where bar.name=foo.name
),
False) AS found_name
  from
foo
)
update foo
SET
found_name = baz.found_name
FROM
  baz
WHERE
  foo.name = baz.name
;

I first built the select that would go into the CTE, and turned it
into a CTE with .cte()
Then I built an update, referencing the cte.
The query renders in such a way that it *references* the CTE, but
never defines it.
I can build a *select* query which references (and defines) the CTE,
but not an update.
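For what it's worth, the construction described can be spelled out with Core roughly as follows (toy `table()` objects stand in for the real definitions; UPDATE..FROM is dialect-specific, hence compiling against the PostgreSQL dialect; on modern SQLAlchemy, 1.1 and later, the WITH clause does render for an UPDATE, while the behavior described above is that it was referenced but never defined):

```python
from sqlalchemy import table, column, select, update, exists, func
from sqlalchemy.dialects import postgresql

# Toy table objects standing in for the real foo/bar definitions.
foo = table('foo', column('name'), column('found_name'))
bar = table('bar', column('name'))

# The SELECT that becomes the CTE.
baz = (
    select(
        foo.c.name,
        func.coalesce(exists().where(bar.c.name == foo.c.name), False)
            .label('found_name'),
    )
    .select_from(foo)
    .cte('baz')
)

# UPDATE ... FROM referencing the CTE.
stmt = (
    update(foo)
    .values(found_name=baz.c.found_name)
    .where(foo.c.name == baz.c.name)
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```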

-- 
Jon
Software Blacksmith



Re: [sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Michael Bayer

On Jul 25, 2014, at 7:16 PM, Claudio Freire klaussfre...@gmail.com wrote:

 On Fri, Jul 25, 2014 at 7:55 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Don't objects potentially have this problem anyway, in the sense that if
 they are accessed TWICE after a commit, the second access will use the data
 cached from the first, and could again be out of date?
 
 
 only if you have a low transaction isolation level set up.   The Session
 tries to make a choice as to the most reasonable place that concurrent
 changes should be anticipated.  Transaction demarcations are the best place.
 If you are expecting to code your app to specifically expect READ COMMITTED
 or READ UNCOMMITTED behavior where your transaction relies upon seeing a row
 change value from a concurrent transaction, that's a special use case, in
 which case you can use expire() for those object that have this requirement.
 The ORM Session can obviously not guess when such an expiration is to be
 detected otherwise.
 
 
 I don't see how transaction isolation levels relate to this.
 
 The effect of disabling expire_on_commit is that of not seeing
 subsequent commits. It would be a fictious DO NOT READ COMMITTED
 level. Having it on, somewhat caters to possible SERIALIZED settings,
 where strict ordering is to be expected, since without serialized
 transactions there's no way expiring helps correctness in any way.
 None of those seem overly common to me, so I don't see how one can
 ignore the serialization level in effect or possible concurrent
 updates that are happening at the same time, with or without
 expire_on_commit.

yes, it caters the most to SERIALIZED settings, a little bit less so to 
REPEATABLE READ (works for individual objects but not collections), then still 
less to READ COMMITTED (works only to the extent that you're worried about 
other transactions in progress), etc.


 
 IMHO, expire_on_commit is something that really has no sensible
 default. You pick your own, the library authors pick one default
 because, well, why not?

So, for a long time, all through 0.4, the default was never to expire, and it 
was not even possible.  There was no expire-on-commit; at that time I thought it 
was insane to throw away all that great data that you've loaded unless you 
absolutely wanted to.

As it turns out the current defaults are not by accident!   We had a pretty 
steady parade of users who complained that their data was stale, and for years 
I scratched my head: how were we to do this?  Just blow away all objects all the 
time on every query?   That seemed so wrong, so wasteful, and of course so 
complicated, since we would want pending changes to remain around.

As I've written many times, it was the Storm ORM that introduced me to the 
concept of expire-on-commit.  The linkage to the transaction is also kind of 
where Hibernate and JSR 220 are coming from, though not necessarily wiping out 
the object on commit... the spec doesn't make that aspect very clear.
Overall the expire-on-commit idea is very strict and assumes entities are row 
proxies only.


 For the record, I'm using expire_on_commit=off, because I also use
 model instances outside the scope of their originating transaction.
 I've had no problems with it, but I did have to be very careful with
 the semantics and lifetime of those objects, and of expiring manually
 anything I put on concurrently-accessed structures, lest someone
 modify it before the session's scope is over and it gets (or tries to
 be) committed to the DB.

Maybe expire on begin might be useful.   So your data is still there after 
commit, but if you start a new transaction, then things refresh.   I'm up for 
it in a 1.0 release, if you think it's useful.   Though explaining to everyone 
another option...what a PITA



 



Re: [sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Claudio Freire
On Fri, Jul 25, 2014 at 10:09 PM, Michael Bayer
mike...@zzzcomputing.com wrote:
 For the record, I'm using expire_on_commit=off, because I also use
 model instances outside the scope of their originating transaction.
 I've had no problems with it, but I did have to be very careful with
 the semantics and lifetime of those objects, and of expiring manually
 anything I put on concurrently-accessed structures, lest someone
 modify it before the session's scope is over and it gets (or tries to
 be) committed to the DB.

 Maybe expire on begin might be useful.   So your data is still there after 
 commit, but if you start a new transaction, then things refresh.   I'm up for 
 it in a 1.0 release, if you think it's useful.   Though explaining to 
 everyone another option...what a PITA

I don't see how that'd work.

On session.add? When set as dependent of an object of another (active) session?

Seems very error prone to make it too automatic.

All in all, the current state isn't bad. It's understandable, and controllable.

I'm just saying there's no one-size-fits-all.
