[sqlalchemy] SQLite synchronous on engine

2013-06-19 Thread pr64
Hi,

In order to improve the underlying SQLite performance, I've changed some 
low level settings with PRAGMA commands:

PRAGMA synchronous=NORMAL; /* instead of default FULL value, see: 
http://www.sqlite.org/pragma.html#pragma_synchronous */
PRAGMA journal_mode=WAL;   /* 
http://www.sqlite.org/pragma.html#pragma_journal_mode and 
http://www.sqlite.org/wal.html */

From an implementation point of view, I did as explained in this thread: 
https://groups.google.com/forum/?fromgroups#!topic/sqlalchemy/IY5PlUf4VwE. I've 
got an OrmManager class (which is a singleton) which is used to get new 
sessions. The lines marked with the comment "# added to improve performance" below are the ones I added.


import sqlite3

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool, SingletonThreadPool


class OrmManager:
    """OrmManager class

    Handles the database and provides an abstraction layer for it.
    """

    def __init__(self, database, metadata, db_type, echo=False):
        self.database = database
        self.session_maker = sessionmaker()

        if db_type == 'file':
            engine = create_engine('sqlite:///' + database, echo=echo,
                                   connect_args={'detect_types':
                                       sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES},
                                   native_datetime=True,
                                   poolclass=NullPool)
        elif db_type == 'memory':
            engine = create_engine('sqlite:///' + database, echo=echo,
                                   connect_args={'detect_types':
                                       sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES},
                                   native_datetime=True,
                                   poolclass=SingletonThreadPool,
                                   pool_size=5)
        else:
            raise Exception("Unknown db_type: %s" % str(db_type))

        metadata.create_all(engine)
        self.session_maker.configure(bind=engine, expire_on_commit=False)
        session = self.session_maker()                               # added to improve performance
        session.connection().execute("PRAGMA journal_mode=WAL")      # added to improve performance
        session.commit()                                             # added to improve performance
        session.close()                                              # added to improve performance

    def get_session(self):
        """Gets ORM session"""
        session = self.session_maker()
        session.connection().execute("PRAGMA synchronous=NORMAL")    # added to improve performance
        return session
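
For reference, here is roughly how I use it (a simplified sketch; 'app.db' and Base are placeholders for my actual database file and declarative base):

orm = OrmManager('app.db', Base.metadata, db_type='file')
session = orm.get_session()
try:
    # ... query / persist objects here ...
    session.commit()
finally:
    session.close()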

I have two questions:

1- The journal_mode pragma is persistent (according to the SQLite docs) and only 
needs to be issued once, but is there a way to pass the synchronous configuration 
to the engine and make it global, instead of setting it every time my 
application gets a new session?
2- Are there any performance settings I can tune at the SQLAlchemy and/or 
SQLite level to improve my db access speed?

Thanks a lot for your feedback,

Pierre





[sqlalchemy] SQLAlchemy-ImageAttach: an extension for attaching images to entity objects

2013-06-19 Thread Hong Minhee
Hi,

I am glad to introduce SQLAlchemy-ImageAttach, a new extension for attaching 
images to entity objects.  It provides the following features:

- Storage backend interface: You can use the file system backend on your local 
development box, and switch it to AWS S3 when it’s deployed to the production 
box.  Or you can add a new backend implementation by yourself.

- Maintaining multiple image sizes: Thumbnails of any size can be generated 
from the original without assuming a fixed set of sizes. You can generate a 
thumbnail of a particular size on demand if it doesn't exist yet. Use RRS 
(Reduced Redundancy Storage) on S3 for reproducible thumbnails.

- Every image has its URL: Attached images can be exposed as a URL.

- SQLAlchemy transaction aware: Saved files are removed when the ongoing 
transaction is rolled back.

- Tested on various environments: Python 2.6, 2.7, 3.2, 3.3, PyPy / PostgreSQL, 
MySQL, SQLite / SQLAlchemy 0.8 or higher

On the other hand, it does not provide:

- General file attachment: It only supports images.
- BLOB storage: It doesn’t use BLOB data type at all.
- Image preprocessing: It simply does resizing; other image processing such as 
cropping isn't provided.  (Although you can do such things using Wand, which 
SQLAlchemy-ImageAttach depends on.)

Today I released SQLAlchemy-ImageAttach 0.8.0, its first stable version.  You 
can install it using pip or easy_install:

  $ pip install SQLAlchemy-ImageAttach

Docs and repository can be found at the following URLs:

  https://sqlalchemy-imageattach.readthedocs.org/
  https://github.com/crosspop/sqlalchemy-imageattach
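
To give a quick taste, a minimal mapping looks roughly like this (simplified from the docs; the store path and URLs below are just placeholders):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy_imageattach.context import store_context
from sqlalchemy_imageattach.entity import Image, image_attachment
from sqlalchemy_imageattach.stores.fs import FileSystemStore

Base = declarative_base()

class User(Base):
    """A user with one attached picture."""
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    picture = image_attachment('UserPicture')

class UserPicture(Base, Image):
    """Image metadata; the actual bytes go to the configured store."""
    __tablename__ = 'user_picture'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    user = relationship('User')

# store images on the local file system, served under the given base URL
store = FileSystemStore('/var/www/images', 'http://localhost/images/')

with store_context(store):
    with open('portrait.jpg', 'rb') as f:
        user.picture.from_file(f)    # user is a persistent User instance
    # session.commit()               # saved files are discarded on rollback
    print(user.picture.locate())     # the image's URL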

Please try it out and give me feedback!


Thanks,
Hong Minhee



Re: [sqlalchemy] SQLite synchronous on engine

2013-06-19 Thread Michael Bayer

On Jun 19, 2013, at 10:14 AM, pr64 pierrerot...@gmail.com wrote:

 Hi,
 
 In order to improve the underlying SQLite performance, I've changed some low 
 level settings with PRAGMA commands:
 
 PRAGMA synchronous=NORMAL; /* instead of default FULL value, see: 
 http://www.sqlite.org/pragma.html#pragma_synchronous */
 PRAGMA journal_mode=WAL;   /* 
 http://www.sqlite.org/pragma.html#pragma_journal_mode and 
 http://www.sqlite.org/wal.html */
 
 From an implementation point of view, I did as explained in this thread: 
 https://groups.google.com/forum/?fromgroups#!topic/sqlalchemy/IY5PlUf4VwE. 
 I've got an OrmManager class (which is a singleton) which is used to get new 
 sessions. The bold lines are the ones I added to improve performance.


 session = self.session_maker()
 session.connection().execute("PRAGMA journal_mode=WAL")
 session.commit()
 session.close()
 
 def get_session(self):
     """Gets ORM session"""
     session = self.session_maker()
     session.connection().execute("PRAGMA synchronous=NORMAL")
     return session
 
 I have two questions:
 
 1- journal_mode pragma is persistent (according to sqlite doc) and should be 
 done once but is there a way to pass the synchronous configuration to the  
 engine and make it global instead of setting it every time my application 
 gets a new session ?


you want to use a connect event for that:

from sqlalchemy import event

@event.listens_for(my_engine, "connect")
def on_connect(dbapi_conn, conn_rec):
    cursor = dbapi_conn.cursor()
    cursor.execute("your pragma here")
    cursor.close()
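
Applied to your two pragmas, that looks roughly like this (a sketch; the listener runs once for each new DBAPI connection the pool opens, so both settings are in place before the connection is used):

from sqlalchemy import create_engine, event

engine = create_engine('sqlite:///' + database)   # the engine built in OrmManager.__init__

@event.listens_for(engine, "connect")
def set_sqlite_pragmas(dbapi_conn, conn_rec):
    cursor = dbapi_conn.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.close()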


 2- Are there any performance settings I can tune at sqlalchemy and/or sqlite 
 level to improve my db access speed ?

I'm not familiar with the nature of these performance settings, but the sqlite3 
DBAPI and SQLite itself are extremely fast, way faster by themselves than if you 
have any kind of Python code wrapping them, and also faster than any other DBAPI 
I've worked with.  So if you do profiling you will see that the vast majority 
of time in a SQLite app is taken up by SQLAlchemy Core and ORM.  If you look 
at this profile diagram from 0.7: 
http://techspot.zzzeek.org/files/2010/sqla_070b1_large.png , the proportion of 
time actually spent within SQLite is the dark blue box in the center-left, where 
you can see "method 'execute' of sqlite3"; below it there is a little maroon box 
that says "method", which is likely the sqlite3 cursor.fetchall() method.  So all 
of the performance gains these pragmas get you will at most make that one blue 
box a little smaller.  The rest of the screen represents time spent outside of 
sqlite3.

Some techniques on profiling can be seen in my stackoverflow answer here: 
http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677
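
For example, a quick way to get a cumulative-time breakdown (a sketch; run_queries() stands in for whatever workload you want to measure):

import cProfile
import pstats

def run_queries():
    # placeholder: run the SQLAlchemy queries you want to measure here
    pass

cProfile.run('run_queries()', 'query_profile')
stats = pstats.Stats('query_profile')
stats.sort_stats('cumulative').print_stats(30)   # top 30 entries by cumulative time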





Re: [sqlalchemy] flask-sqlalchemy pysybase connections

2013-06-19 Thread Kevin S

Unfortunately, we cannot switch off of Sybase. It is a future project, but 
we cannot go there right now (I would love to). I also have not been able 
to find any other sybase dbapis that work with sqlalchemy and are free.

I did set up some tests against pysybase directly, omitting the sqlalchemy 
and cherrypy pieces. I have two threads each create a connection and 
execute a query. After littering debug statements in the test code and 
throughout the pysybase library, I can see that as soon as the first thread 
executes its query, which is done through a C extension, everything else 
halts: the main function as well as the second thread.

Here is the test code (minus debugging lines):

from threading import Thread

import Sybase

def sy_query(query):
    # dbServer, dbUser, dbPass and dbName are defined elsewhere
    db = Sybase.connect(dbServer, dbUser, dbPass, dbName)
    cur = db.cursor()
    cur.execute(query)

t1 = Thread(target=sy_query, args=("select * from blah",))
t2 = Thread(target=sy_query, args=("select something from blah2",))
t1.start()
t2.start()
t1.join()
t2.join()

It gets as far as one print statement after t1.start(). I have a print at 
the beginning of sy_query() that does not execute for the second thread 
until the first has finished its query. Now, I'm not very familiar with 
threads in python, or the GIL in general. Inside pysybase's library, there 
is essentially this call to the c extension:

status, result = self._cmd.ct_results()

where ct_results() is defined in the c file.
Is there some easy or brute force way to force that call to NOT grab the 
GIL? From my brief reading it sounded like c extensions are supposed to 
get around GIL issues, but again I am naive on the subject.

On Sunday, June 16, 2013 1:10:03 PM UTC-4, Michael Bayer wrote:


 On Jun 16, 2013, at 12:55 PM, Kevin S kevin...@gmail.com wrote:

 I can try to get another dbapi installed later this week and see if that 
 works. However, I had to jump through some hoops just to get pysybase 
 working in the first place, so I'm not terribly looking forward to trying 
 to tackle another one.

 I don't know much about how sessions are managed (I believe flask creates 
 scoped-session objects). Could it be something that is just not implemented 
 in the pysybase sqlalchemy dialect, but available in the dbapi? I'm not 
 sure exactly what to look for.



 not really.  The DBAPI is a very simple API, it's pretty much mostly 
 execute(), rollback(), and commit().   We have a test suite that runs 
 against pysybase as well, it certainly has a lot of glitches, not the least 
 of which is that pysybase last time I checked could not handle non-ASCII 
 data in any way. 

 If pysybase is halting the entire intepreter on a query, there's nothing 
 about the DBAPI in the abstract which refers to that.   It sounds like 
 pysybase probably grabs the GIL on execute() while waiting for results, 
 which would be pretty bad.   Perhaps it has settings, either run time or 
 compile time, which can modify its behavior in this regard.   

 If it were me, I'd probably seek some way to not produce a web application 
 directly against a Sybase database, as the severe lack of driver support 
 will probably lead to many unsolvable scaling issues.  I'd look to mirror 
 the Sybase data in some other more modern system, either another RDBMS or a 
 more cache-like system like Redis.







 On Saturday, June 15, 2013 3:33:36 PM UTC-4, Michael Bayer wrote:


 On Jun 14, 2013, at 3:18 PM, Kevin S kevin...@gmail.com wrote: 

  I am running into a problem while developing a flask application using 
 flask-sqlalchemy. Now, I'm not even 100% sure my problem is sqlalchemy 
 related, but I don't know how to debug this particular issue. 
  
  To start, I have a sybase database that I want to see if I can build a 
 report generating application for. The reports will all be custom SQL 
 queries that are requested by our users, and they will be able to refresh 
 throughout the day as they edit and clean up their data (we focus on a lot 
 of data curation). We plan to do other things that merit the use of an ORM, 
 and we have a lot of complex relationships. Anyway, that's why I'm first 
 trying to get this to work in our flask + sqlalchemy stack. And it does 
 work in fact. 
  
  Now the problem is, my current application is not scalable, because any 
 time I do a long query (say several seconds or more), flask will not accept 
 any additional requests until that query finishes. (Note: I am running the 
 application through cherrypy). I have tested various things to ensure that 
 the application can handle multiple incoming requests. If I have it just 
 loop through a big file, or even just sleep instead of doing a query, then 
 I can bang away at it all I want from other browser windows, and it's fine. 
  
  We also have a copy of our database that is in postgres (this is only 
 for testing, and can't be a final solution, because it gets updated only 
 once a week). So, I've found that if I hook the application up to the 
 postgres 

[sqlalchemy] order_by hybrid property fails when specified as string in a relationship

2013-06-19 Thread George Sakkis
It seems that hybrid properties are not allowed to be specified as strings 
for the order_by parameter of a relationship; attempting it fails with 
"InvalidRequestError: Class ... does not have a mapped column named '...'". 
Is this a known limitation or a bug? Sample test case below.

Thanks,
George

# 

from sqlalchemy import Column, Integer, String, ForeignKey, case
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import relationship

Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    game_id = Column(Integer, ForeignKey('game.id'))

    @hybrid_property
    def fullname(self):
        if self.firstname is not None:
            return self.firstname + " " + self.lastname
        else:
            return self.lastname

    @fullname.expression
    def fullname(cls):
        return case([
            (cls.firstname != None, cls.firstname + " " + cls.lastname),
        ], else_=cls.lastname)


class Game(Base):
    __tablename__ = 'game'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    if 0:  # this works
        users = relationship(User, order_by=User.fullname)
    else:  # this fails
        users = relationship(User, order_by="User.fullname")


if __name__ == '__main__':
    game = Game(name="tetris")





Re: [sqlalchemy] flask-sqlalchemy pysybase connections

2013-06-19 Thread Michael Bayer

On Jun 19, 2013, at 3:07 PM, Kevin S kevinrst...@gmail.com wrote:

 
 Unfortunately, we cannot switch off of Sybase. It is a future project, but we 
 cannot go there right now (I would love to). I also have not been able to 
 find any other sybase dbapis that work with sqlalchemy and are free.

you can use FreeTDS with Pyodbc, and you might have much better results with 
that.   

 
 
 where ct_results() is defined in the c file.
 Is there some easy or brute force way to force that call to NOT grab the 
 GIL? From my brief reading it sounded like c extensions are supposed to get 
 around GIL issues, but again I am naive on the subject.

There's a pair of C macros, Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS, that 
must surround the areas where the C extension should release the GIL.  The main 
documentation on this is at 
http://docs.python.org/2/c-api/init.html#releasing-the-gil-from-extension-code.





 
 On Sunday, June 16, 2013 1:10:03 PM UTC-4, Michael Bayer wrote:
 
 On Jun 16, 2013, at 12:55 PM, Kevin S kevin...@gmail.com wrote:
 
 I can try to get another dbapi installed later this week and see if that 
 works. However, I had to jump through some hoops just to get pysybase 
 working in the first place, so I'm not terribly looking forward to trying to 
 tackle another one.
 
 I don't know much about how sessions are managed (I believe flask creates 
 scoped-session objects). Could it be something that is just not implemented 
 in the pysybase sqlalchemy dialect, but available in the dbapi? I'm not sure 
 exactly what to look for.
 
 
 not really.  The DBAPI is a very simple API, it's pretty much mostly 
 execute(), rollback(), and commit().   We have a test suite that runs against 
 pysybase as well, it certainly has a lot of glitches, not the least of which 
 is that pysybase last time I checked could not handle non-ASCII data in any 
 way. 
 
 If pysybase is halting the entire intepreter on a query, there's nothing 
 about the DBAPI in the abstract which refers to that.   It sounds like 
 pysybase probably grabs the GIL on execute() while waiting for results, which 
 would be pretty bad.   Perhaps it has settings, either run time or compile 
 time, which can modify its behavior in this regard.   
 
 If it were me, I'd probably seek some way to not produce a web application 
 directly against a Sybase database, as the severe lack of driver support will 
 probably lead to many unsolvable scaling issues.  I'd look to mirror the 
 Sybase data in some other more modern system, either another RDBMS or a more 
 cache-like system like Redis.
 
 
 
 
 
 
 
 On Saturday, June 15, 2013 3:33:36 PM UTC-4, Michael Bayer wrote:
 
 On Jun 14, 2013, at 3:18 PM, Kevin S kevin...@gmail.com wrote: 
 
  I am running into a problem while developing a flask application using 
  flask-sqlalchemy. Now, I'm not even 100% sure my problem is sqlalchemy 
  related, but I don't know how to debug this particular issue. 
  
  To start, I have a sybase database that I want to see if I can build a 
  report generating application for. The reports will all be custom SQL 
  queries that are requested by our users, and they will be able to refresh 
  throughout the day as they edit and clean up their data (we focus on a lot 
  of data curation). We plan to do other things that merit the use of an 
  ORM, and we have a lot of complex relationships. Anyway, that's why I'm 
  first trying to get this to work in our flask + sqlalchemy stack. And it 
  does work in fact. 
  
  Now the problem is, my current application is not scalable, because any 
  time I do a long query (say several seconds or more), flask will not 
  accept any additional requests until that query finishes. (Note: I am 
  running the application through cherrypy). I have tested various things to 
  ensure that the application can handle multiple incoming requests. If I 
  have it just loop through a big file, or even just sleep instead of doing 
  a query, then I can bang away at it all I want from other browser windows, 
  and it's fine. 
  
  We also have a copy of our database that is in postgres (this is only for 
  testing, and can't be a final solution, because it gets updated only once 
  a week). So, I've found that if I hook the application up to the postgres 
  version, I don't have this problem. I can initiate a long query in one 
  browser tab, and any other page requests in subsequent windows come back 
  fine. The problem is only when using Sybase. We have other applications 
  that are not flask or sqlalchemy, and they don't seem to have this 
  limitation. As far as I can tell, I've narrowed it down to as soon as it 
  executes a query. The entire app will wait until that query finishes, not 
  allowing any new connections. I have log statements in my request 
  handlers, and even in my before_request method, and those will not print a 
  thing until the moment that first query returns. 
  
  Additional info: I am using Sybase 15 with the pysybase driver. 
  I 

Re: [sqlalchemy] order_by hybrid property fails when specified as string in a relationship

2013-06-19 Thread Michael Bayer

On Jun 19, 2013, at 4:19 PM, George Sakkis george.sak...@gmail.com wrote:

 It seems that hybrid properties are not allowed to be specified as strings 
 for the order_by parameter of a relationship; attempting it fails with 
 InvalidRequestError: Class ... does not have a mapped column named '...'. 
 Is this a known limitation or a bug? Sample test case below.

It's kind of a missing feature; here's a patch to make that work which will be 
for 0.8:  http://www.sqlalchemy.org/trac/ticket/2761
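
Until then, referencing the hybrid attribute directly (the working branch of your test case) avoids the string lookup:

class Game(Base):
    __tablename__ = 'game'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # pass the hybrid attribute itself rather than a string
    users = relationship(User, order_by=User.fullname)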






[sqlalchemy] Versioned mixin and backref

2013-06-19 Thread Seth P
The Versioned mixin described in 
http://docs.sqlalchemy.org/en/rel_0_8/orm/examples.html#versioned-objects 
(which I renamed VersionedMixin, but is otherwise the same) has what I 
would consider an unintuitive and undesirable interaction with backref: if 
C references A with a backref, adding a new C object referencing a 
particular A object will cause the version number of the target A object to 
increment, even though there are no changes to the A table. If the relation 
has no backref (as in the relationship from C to B below), then the target 
object version number is not incremented, as I would expect. It seems that 
the code is effectively using session.is_modified(a) to determine whether 
to increment the version number, whereas I would have thought 
session.is_modified(a, 
include_collections=False) would be more appropriate. Is there some use 
case I'm not considering that favors the current behavior?

Thanks,

Seth

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative.api import declarative_base
from sqlalchemy.orm import sessionmaker, scoped_session, relationship, backref
from history_meta import VersionedMixin, versioned_session

Base = declarative_base(object)
metadata = Base.metadata


class A(VersionedMixin, Base):
    __tablename__ = 'a'
    __table_args__ = {}
    id = Column(Integer, primary_key=True)
    name = Column(String(3))

    def __repr__(self):
        return "A(id=%d,name='%s',version=%d,cs=%s)" % (self.id, self.name,
            self.version, [c.name for c in self.cs])


class B(VersionedMixin, Base):
    __tablename__ = 'b'
    __table_args__ = {}
    id = Column(Integer, primary_key=True)
    name = Column(String(3))

    def __repr__(self):
        return "B(id=%d,name='%s',version=%d)" % (self.id, self.name,
            self.version)


class C(VersionedMixin, Base):
    __tablename__ = 'c'
    __table_args__ = {}
    id = Column(Integer, primary_key=True)
    name = Column(String(3))
    a_id = Column(Integer, ForeignKey('a.id'))
    a_re = relationship(A, backref='cs')
    b_id = Column(Integer, ForeignKey('b.id'))
    b_re = relationship(B)


if __name__ == '__main__':
    engine = create_engine('sqlite:///:memory:', echo=False)
    metadata.create_all(bind=engine)
    Session = scoped_session(sessionmaker(bind=engine))
    versioned_session(Session)
    session = Session()

    # populate tables with a single entry in each table
    a = A(name='a')
    b = B(name='b')
    c1 = C(name='c1', a_re=a, b_re=b)
    session.add_all([a, b, c1])
    session.commit()
    print '\nAfter initial commit'
    print 'a=%s; is_modified(a)=%s; is_modified(a, include_collections=False)=%s' % (
        a, session.is_modified(a), session.is_modified(a, include_collections=False))
    print 'b=%s; is_modified(b)=%s; is_modified(b, include_collections=False)=%s' % (
        b, session.is_modified(b), session.is_modified(b, include_collections=False))

    # add another entry in c that points to a
    c2 = C(name='c2', a_re=a, b_re=b)
    session.add(c2)
    print "\nAfter adding C(name='c2', a_re=a, b_re=b), but before committing:"
    print 'a=%s; is_modified(a)=%s; is_modified(a, include_collections=False)=%s' % (
        a, session.is_modified(a), session.is_modified(a, include_collections=False))
    print 'b=%s; is_modified(b)=%s; is_modified(b, include_collections=False)=%s' % (
        b, session.is_modified(b), session.is_modified(b, include_collections=False))

    session.commit()
    print '\nAfter final commit:'
    print 'a=%s; is_modified(a)=%s; is_modified(a, include_collections=False)=%s' % (
        a, session.is_modified(a), session.is_modified(a, include_collections=False))
    print 'b=%s; is_modified(b)=%s; is_modified(b, include_collections=False)=%s' % (
        b, session.is_modified(b), session.is_modified(b, include_collections=False))





Re: [sqlalchemy] Versioned mixin and backref

2013-06-19 Thread Michael Bayer
Very strange how these reports always come in pairs. This recipe has been there 
for several years now, yet literally two days ago someone submitted a pull 
request pointing out and repairing this issue for the first time. The pull 
request has been merged and you can apply its changes here: 

https://bitbucket.org/zzzeek/sqlalchemy/commits/d6c60cb2f3b1bf27f10aecf542fc0e3f3f903183
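
The gist of the change is to skip objects whose only modifications are collection changes (for example a backref appending to A.cs). Here is a rough sketch of the idea (not the exact diff); versioned_objects() and create_version() are the existing helpers in history_meta.py:

from sqlalchemy import event

def versioned_session(session):
    @event.listens_for(session, 'before_flush')
    def before_flush(session, flush_context, instances):
        for obj in versioned_objects(session.dirty):
            # only bump the version when the object's own columns changed,
            # not when a related collection was touched
            if session.is_modified(obj, include_collections=False):
                create_version(obj, session)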








