[sqlalchemy] Re: transaction spanning multiple threads

2015-02-17 Thread Laurence Rowe
I'd recommend keeping the writes to a single thread if you can as that will 
be easiest. If you can't do that then you could tie them together using the 
transaction module's two phase commit and zope.sqlalchemy. As you want the 
same transaction shared across threads you'll want to create your own 
transaction manager rather than rely on the default threadlocal one.
Session = scoped_session(sessionmaker(twophase=True))
tm = transaction.TransactionManager()
zope.sqlalchemy.register(Session, transaction_manager=tm)

http://zodb.readthedocs.org/en/latest/transactions.html
https://pypi.python.org/pypi/zope.sqlalchemy
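The protocol the transaction manager coordinates can be illustrated with a minimal pure-Python sketch: every participant votes in phase one by preparing, and only if all votes are positive does phase two commit. The class and function names here are illustrative stand-ins, not the transaction package's API; each thread's database connection would play the part of one resource.

```python
class Resource:
    """A toy participant in two-phase commit (illustrative only)."""
    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.state = 'active'

    def prepare(self):
        # Phase 1: vote by preparing; raising is a negative vote.
        if self.fail_on_prepare:
            raise RuntimeError('%s cannot prepare' % self.name)
        self.state = 'prepared'

    def commit(self):
        # Phase 2: finish; only reached if everyone voted yes.
        self.state = 'committed'

    def rollback(self):
        self.state = 'rolled back'


def two_phase_commit(resources):
    """Commit all resources together, or roll all of them back."""
    try:
        for r in resources:          # phase 1: collect votes
            r.prepare()
    except Exception:
        for r in resources:          # any negative vote aborts everyone
            r.rollback()
        return False
    for r in resources:              # phase 2: all voted yes
        r.commit()
    return True
```

With a shared TransactionManager as coordinator, work done on connections in different threads either all commits or all rolls back.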

Depending on the isolation level your separate threads may start out with 
slightly different views of the database. On Postgres you can synchronize 
snapshots to work around this, but each connection still has its own 
database transaction which is isolated from the others and won't see 
subsequent updates from the other threads.

http://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION
 


Laurence

On Tuesday, 17 February 2015 19:58:33 UTC-8, Christian Lang wrote:

 Hi,

 I have a question regarding multi-threading and transactions in the 
 non-ORM case.

 Assume, we have a global Connection c (with transaction), on which a 
 number of insert/update queries are executed and some insert/update queries 
 are executed in threads:

 Main thread:      Thread 1:          Thread 2:
 ------------      ---------          ---------
 c.execute(q1)
 c.execute(q2)
                   c'.execute(q3)
                                      c''.execute(q5)
                   c'.execute(q4)
                                      c''.execute(q6)
 c.execute(q7)
 ...

 If any of the queries fails, all changes should be rolled back, otherwise 
 committed.

 Since each thread will need to open its own connection (c' and c'') to the 
 DB, how can these thread-local transactions be tied to the global 
 transaction?
 On failure of q3/q4/q5/q6, the local transaction could be rolled back and 
 failure signaled to the main thread. However, on success of q3 and q4 (or 
 q5 and q6), the local transaction cannot be committed yet until the global 
 transaction commits.

 Is there some easy mechanism that SQLAlchemy provides for such 
 multi-thread transactions?

 Thanks in advance,
 Christian



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Precompiled queries and .get(ident)

2014-12-02 Thread Laurence Rowe
I've been experimenting with precompiled queries in my app using the second 
recipe 
on https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/BakedQuery. 
I'm seeing a big speed up, with pages 30-40% faster (my application has 
graph structured content so some pages require ~100 items.) But I'm also 
seeing an increase in the number of db requests as I've had to switch from:

model = session.query(Resource).get(uuid)

to:

@precompiled_query_builder(DBSession)
def _get_by_uuid_query():
    session = DBSession()
    return session.query(Resource).filter(Resource.rid == bindparam('rid'))
...
model = _get_by_uuid_query().params(rid=uuid).one()


I think this is due to the identity map not being used. Is there a way to 
precompile queries for .get() and have the best of both?
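The extra queries come from .get() consulting the Session's identity map before emitting any SQL, while a filter().one() always hits the database. A toy sketch of that behaviour (illustrative, not SQLAlchemy's implementation):

```python
class ToySession:
    """Minimal identity-map behaviour: get() only queries the
    database when the object is not already in the map."""
    def __init__(self, database):
        self.database = database        # dict standing in for the DB
        self.identity_map = {}
        self.queries_issued = 0

    def get(self, ident):
        if ident in self.identity_map:  # hit: no round trip at all
            return self.identity_map[ident]
        self.queries_issued += 1        # miss: one SELECT
        obj = self.database[ident]
        self.identity_map[ident] = obj
        return obj

    def query_one(self, ident):
        """Like filter(...).one(): always goes to the database."""
        self.queries_issued += 1
        return self.database[ident]
```

Loading the same identity twice via get() issues one query; via query_one() it issues two, which matches the increase in db requests described above.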


Laurence



Re: [sqlalchemy] tying multiple sessions together with one transaction

2014-03-07 Thread Laurence Rowe


 IIRC, the implementation of the two phase commit basically has everyone 
 flush on Phase1, then report back as a vote.  If there are any negative 
 votes, the transaction manager instructs everyone to fail and rollback as 
 Phase2.   If you had a flush that caused an integrity error before trying 
 to commit, that would still trigger an issue. 


``session.flush()`` is called during tpc_begin, a hook which is called 
before the two phase commit proper. If an integrity error is raised then it 
should be treated the same as it would earlier in the transaction, 
resulting in the transaction being aborted. I don't think it's strictly 
necessary, though it does serve to reduce the time spent performing the two 
phase commit.

So long as you configure your session to use two phase commit, 
zope.sqlalchemy will call ``tx.prepare()`` during tpc_vote and 
``tx.commit()`` during tpc_finish.
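The hook ordering described above can be sketched as a minimal data manager. The hook names follow the transaction package's protocol; the session and transaction objects here are stand-ins, not zope.sqlalchemy's real code.

```python
class TwoPhaseSessionDataManager:
    """Illustrative mapping of transaction-package hooks onto a
    two-phase session (sketch, not zope.sqlalchemy)."""
    def __init__(self, session):
        self.session = session
        self.tx = None

    def tpc_begin(self, trans):
        self.session.flush()      # push pending SQL before 2PC proper
        self.tx = self.session.transaction

    def tpc_vote(self, trans):
        self.tx.prepare()         # e.g. XA PREPARE / PREPARE TRANSACTION

    def tpc_finish(self, trans):
        self.tx.commit()          # e.g. XA COMMIT / COMMIT PREPARED

    def tpc_abort(self, trans):
        self.tx.rollback()
```

The transaction package drives these hooks on every joined data manager in turn, so all participating sessions vote before any of them commits.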

If you're only using sqlalchemy then doing it all with the Session is 
perfectly possible, though the transaction / request integration that comes 
with using zope.transaction and pyramid_tm is very handy.

Laurence



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2014-01-11 Thread Laurence Rowe
On Friday, 10 January 2014 08:06:16 UTC-8, Michael Bayer wrote:

 maybe.   I was thinking, is the Session really doing the right thing here 
 by not getting involved, but I think yeah that still might be appropriate. 
  so we’d need to carry along some extra information about the transaction 
 with the connection so that do_rollback() and do_commit() can pick up on 
 that.

 or maybe session.close() needs to do some bookkeeping with the 
 SessionTransaction.   not sure.


The call stack here is:

- session.close()
  sqlalchemy/orm/session.py(999)close()
- transaction.close()
  sqlalchemy/orm/session.py(437)close()
- connection.close()
  sqlalchemy/engine/base.py(583)close()
- conn.close()
  sqlalchemy/pool.py(579)close()
- self.checkin()
  sqlalchemy/pool.py(506)checkin()
- self._pool, None, self._echo, fairy=self)
  sqlalchemy/pool.py(432)_finalize_fairy()
- pool._dialect.do_rollback(fairy)

The tpc state (xid, is_prepared) is held on the TwoPhaseTransaction object, 
referenced (self.__transaction) by the engine Connection object which calls 
conn.close(). I don't think SessionTransaction need be involved here, but 
perhaps the engine Connection / Transaction should. Some options are:

1. In engine Connection.close(), when self.__transaction.is_active, pass 
down xid and is_prepared to the _ConnectionFairy. That would imply muddying 
the DBAPI Connection contract of the engine Connection.__connection.

2. Make the engine Transaction aware of the configured reset_on_return 
behaviour so that Transaction.close() can either rollback or commit. 
Connection.close() could then call Transaction.close().

3. Keep track of xid, is_prepared on the pool's connection_record.
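Option 3 might look something like the following sketch. The class and its bookkeeping are hypothetical stand-ins for the pool's connection record, not SQLAlchemy internals; the point is that remembering (xid, is_prepared) lets reset-on-return choose the XA rollback path instead of a plain ROLLBACK.

```python
class ConnectionRecord:
    """Stand-in for a pool connection record, extended to remember
    two-phase state (option 3 above; hypothetical code)."""
    def __init__(self):
        self.xid = None
        self.is_prepared = False
        self.log = []               # records SQL that would be emitted

    def begin_twophase(self, xid):
        self.xid, self.is_prepared = xid, False

    def prepare_twophase(self):
        self.is_prepared = True

    def reset_on_return(self):
        """Pick the right rollback when the connection is checked in."""
        if self.xid is not None:
            if self.is_prepared:
                self.log.append('XA ROLLBACK %s' % self.xid)
            else:
                # MySQL requires XA END before rolling back an
                # active (unprepared) XA transaction.
                self.log.append('XA END %s; XA ROLLBACK %s'
                                % (self.xid, self.xid))
            self.xid = None
        else:
            self.log.append('ROLLBACK')   # ordinary one-phase case
```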

Laurence



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2014-01-10 Thread Laurence Rowe

On Thursday, 9 January 2014 09:41:40 UTC-8, Jeff Dairiki wrote:

 Okay, I've traced things out a bit more. 

 If the session state is not STATUS_INVALIDATED (aka STATUS_CHANGED), 
 SessionDataManager.commit() does a self._finish('no work').  That is 
 where self.tx gets set to None (this --- correctly --- then 
 causes .tpc_vote() and .tpc_finish() to be no-ops.) 

 So here's the crux of the biscuit: in two-phase-commit mode (at least 
 with MySQL) the sqlalchemy session (or session transaction) must be either 
 committed or explicitly rolled back before it is closed. 
 SessionDataManager.commit() does not do a rollback. 

 Example code: 

 import sqlalchemy as sa 

 engine = sa.create_engine('mysql://guest@furry/test', 
   echo='debug', 
   echo_pool='debug') 
 Sess = sa.orm.sessionmaker(bind=engine, twophase=True) 
 sess = Sess() 
 sess.query(sa.null()).scalar() 
 #sess.rollback() 
 sess.close() 

 Edited log output: 

 DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
 'furry' at 29a3370 checked out from pool 
 INFO:sqlalchemy.engine.base.Engine:BEGIN TWOPHASE (implicit) 
 INFO:sqlalchemy.engine.base.Engine:XA BEGIN %s 
 
 INFO:sqlalchemy.engine.base.Engine:('_sa_a38de3b7234d5fe8ad55d2bb13ec195c',) 

 INFO:sqlalchemy.engine.base.Engine:SELECT NULL AS anon_1 
 DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
 'furry' at 29a3370 being returned to pool 
 DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
 'furry' at 29a3370 rollback-on-return 
 INFO:sqlalchemy.pool.QueuePool:Invalidate connection 
 _mysql.connection open to 'furry' at 29a3370 (reason: 
 OperationalError:(1399, 'XAER_RMFAIL: The command cannot be executed when 
 global transaction is in the  ACTIVE state')) 
 DEBUG:sqlalchemy.pool.QueuePool:Closing connection _mysql.connection 
 open to 'furry' at 29a3370 

 Here, no attempt is made to terminate the two-phase (XA) transaction 
 until the connection is returned to the pool, at which point a 
 plain one-phase 'ROLLBACK' is issued.  MySQL does not like this, 
 thus the XAER_RMFAIL error. 

 Uncommenting the 'sess.rollback()' in the above example results in an 
 'XA END and 'XA ROLLBACK' being emitted before the connection is 
 returned to the pool, properly ending the two-phase (XA) transaction. 
 This eliminates the XAER_RMFAIL error, and results in proper recycling 
 of the pooled connection. 

 
Yup, your analysis here looks correct to me:

- SessionTransaction.close() does not call transaction.close() and in 
turn transaction.rollback(), because the connection's autoclose is True as 
``conn is not bind``:
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_1/lib/sqlalchemy/orm/session.py#L431
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_1/lib/sqlalchemy/orm/session.py#L297

When the connection is returned to the pool, it will rollback-on-return and 
that's throwing the exception as it is simply calling 
``pool._dialect.do_rollback(fairy)`` - 
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_1/lib/sqlalchemy/pool.py#L408

When you call session.rollback() the engine's ``do_rollback_twophase()`` is 
called, which executes the appropriate XA ROLLBACK :xid:
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_1/lib/sqlalchemy/dialects/mysql/base.py#L2086

I think the ``pool._dialect.do_rollback`` and ``pool._dialect.do_commit`` 
calls in _finalize_fairy need to take account of the twophase nature of the 
connection.


I don't think zope.sqlalchemy should be involved here, as the pool 
configures how a connection should be closed with reset_on_return: 
http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html#sqlalchemy.pool.QueuePool.__init__.params.reset_on_return
 

Worth reporting as a SQLAlchemy bug, though fixing it cleanly looks rather 
tricky.

You might be able to rig something together with event listeners. For the 
pool's reset event you won't have access to the xid and is_prepared of the 
TwoPhaseTransaction, so you'll need to store these on the connection during:

http://docs.sqlalchemy.org/en/rel_0_9/core/events.html#sqlalchemy.events.PoolEvents.reset
http://docs.sqlalchemy.org/en/rel_0_9/core/events.html#sqlalchemy.events.ConnectionEvents.begin_twophase
http://docs.sqlalchemy.org/en/rel_0_9/core/events.html#sqlalchemy.events.ConnectionEvents.prepare_twophase

I think this should do it (untested):

from sqlalchemy import event

@event.listens_for(SomeEngine, 'begin_twophase')
def receive_begin_twophase(conn, xid):
    conn._note_xid_prepared = (xid, False)


@event.listens_for(SomeEngine, 'prepare_twophase')
def receive_prepare_twophase(conn, xid):
    conn._note_xid_prepared = (xid, True)


@event.listens_for(SomeEngineOrPool, 'reset')
def receive_reset(dbapi_con, con_record):
    conn = con_record.connection
    if not hasattr(conn, '_note_xid_prepared'):

Re: [sqlalchemy] PostgreSQL: interval columns, values measured in months or years

2013-12-23 Thread Laurence Rowe


On Monday, 23 December 2013 06:38:41 UTC-8, Michael Bayer wrote:


 On Dec 23, 2013, at 9:29 AM, Sibylle Koczian nulla.e...@web.dejavascript: 
 wrote: 

  Am 21.12.2013 16:27, schrieb Michael Bayer: 
  In the case of using Postgresql, the type 
  sqlalchemy.dialects.postgresql.INTERVAL takes over wherever you might 
  have used a sqlalchemy.Interval type.In this case, psycopg2 is 
  what’s doing whatever conversions are occurring here - if a result 
  row type has the Postgres OID for an “INTERVAL”, psycopg2 jumps in 
  and does the conversion to timedelta.  This isn’t on the SQLAlchemy 
  side.   If psycopg2 is doing the wrong thing you might want to look 
  over on their side for updates or bug reports. 
  
  I don't really know if psycopg2 is doing the wrong thing. Py-postgresql 
 does the same conversion. This might be viewed as a problem with 
 datetime.timedelta which isn't suitable for intervals of several months. 

 maybe you want to check on that because I’m not familiar with any such 
 limitation in datetime.timedelta, it essentially stores a number of days.   
 Below is an example using timedeltas of twelve years, eight months, and 
 four days: 

  import datetime 
  datetime.datetime(1992, 12, 19) - datetime.datetime(1980, 4, 15) 
 datetime.timedelta(4631) 
  d1 = datetime.datetime(1992, 12, 19) - datetime.datetime(1980, 4, 15) 
  datetime.datetime(1992, 12, 19) + d1 
 datetime.datetime(2005, 8, 24, 0, 0) 


Postgres' INTERVAL supports deltas in units other than days: +1 month 
may mean 28/29/30/31 days depending on the month. As discussed at 
http://stackoverflow.com/questions/546321/how-do-i-calculate-the-date-six-months-from-the-current-date-using-the-datetime,
dateutil provides a richer relativedelta type, and psycopg2-dateutils can 
configure psycopg2 to use it: https://pypi.python.org/pypi/psycopg2-dateutils
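The mismatch is easy to demonstrate with the stdlib alone: timedelta only counts fixed units (days, seconds), so "+1 month" has no single timedelta representation. The helper below is a calendar-aware sketch (not psycopg2-dateutils) that clamps month-end dates:

```python
import calendar
from datetime import date, timedelta

def add_one_month(d):
    """Calendar-aware '+1 month', clamping the day for month ends
    (illustrative sketch, not psycopg2-dateutils)."""
    year = d.year + (d.month // 12)
    month = d.month % 12 + 1
    # Clamp e.g. Jan 31 -> Feb 28, since February has no day 31.
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

A fixed timedelta of 31 days overshoots where the calendar-aware version lands on the month boundary, which is why collapsing a months-valued INTERVAL into a timedelta loses information.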

Laurence



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-23 Thread Laurence Rowe
On Thursday, 12 December 2013 15:26:08 UTC-8, Thierry Florac wrote:

 Hi,
 I'm using two-phase transactions with ZODB, PostgreSQL and Oracle 
 databases connected with SQLAlchemy without problem.
 I'm not using native zope.sqlalchemy package, but another package called 
 ztfy.alchemy that I've built (based on zc.alchemy and zope.sqlalchemy), and 
 I didn't try it with MySQL.
 Maybe you can have a look if that can help... = 
 https://pypi.python.org/pypi/ztfy.alchemy


Hi Thierry,

I'm curious as to what prompted your fork of the zope.sqlalchemy 
datamanager into ztfy.alchemy rather than adding a dependency on the 
zope.sqlalchemy package? I've purposely limited the scope of 
zope.sqlalchemy to only the datamanager so that it could be shared amongst 
all frameworks using the Zope transaction package (at the time Zope2, Zope3 
and Grok, and now also Pyramid). Bug fixes and updates for working with 
newer SQLAlchemy versions are welcome 
at https://github.com/zopefoundation/zope.sqlalchemy

Laurence 



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-23 Thread Laurence Rowe
On Thursday, 12 December 2013 16:30:59 UTC-8, Jeff Dairiki wrote:

 Do you understand why the datamanager is finding the SessionTransaction 
 and using that directly?  (At least I think that's what it's doing --- 
 I haven't sussed this out completely.)  I'm referring to the line from 
 SessionDataManager.__init__: 

self.tx = session.transaction._iterate_parents()[-1] 


This is to handle the case of the session being in a nested transaction at 
the time the ``zope.sqlalchemy`` datamanager joins the ``transaction`` 
package's transaction.
 

 and then later the session manager calls .prepare(), .commit() 
 and/or .rollback() on self.tx, *if* self.tx is not None. 

 The thing is, for me, if the session has only been used for read 
 operation, 
 self.tx seems to be None.  So the datamanager never commits anything. 

 I don't understand (yet) why the data manager doesn't just call 
 .prepare() and .commit() directly on the sqlalchemy session instance. 


The zope.sqlalchemy datamanager will rollback the transaction when it 
detects that no work is done. In that case self.tx is set to None during 
SessionTransaction.commit and during the two phase commit there is nothing 
to do.
 
The sequence in which the datamanager methods are called is found in 
Transaction._commitResources: 
https://github.com/zopefoundation/transaction/blob/1.4.1/transaction/_transaction.py#L382

 Dunno.  It doesn't seem like it should be a MySQL-specific thing, but maybe 
 postgres and others are more forgiving of a two-phase XA BEGIN being 
 terminated by a regular one-phase ROLLBACK? 

 Anyhow, I'll keep poking when I find a moment.  


(And from a later message in this thread)

Okay, so this was not a complete solution.  It does cause the 
 datamanager to commit the sessions when the transaction is terminated 
 by transaction.commit(), but neither setting the 
 initial state to STATUS_CHANGED, nor calling mark_changed() is enough 
 to get the datamanager to rollback the session if the transaction 
 is ended with transaction.abort(). 


Looking at the datamanager logic again, I don't think self.tx can ever be 
None in abort() (at least not normally) but closing the session will close 
the underlying transaction on the connection, which issues the rollback: 
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_0b1/lib/sqlalchemy/engine/base.py#L1151

I suggest trying to reduce the problem to the minimum which would be 
removing the ZopeTransactionExtension and seeing what happens when you call 
the methods yourself. Enable the sqlalchemy debug logging to see the SQL on 
the connection and then compare that with what happens with the 
ZopeTransactionExtension enabled.
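SQLAlchemy's engine logging lives under the 'sqlalchemy.engine' logger, so the SQL issued on the connection can be surfaced with the stdlib logging module:

```python
import logging

# Echo all statements issued on the connection, including
# BEGIN / ROLLBACK / XA statements, so the run without the
# extension can be compared against the run with it.
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
```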

Laurence




Re: [sqlalchemy] Transactional DDL and SQLite?

2013-04-01 Thread Laurence Rowe

On Friday, 18 February 2011 08:14:28 UTC-8, Michael Bayer wrote:

 we've put tickets on their tracker to this effect, that they should be 
 more liberal about considering when the transaction begins.

 http://code.google.com/p/pysqlite/issues/detail?id=21

 pysqlite is tricky since I dont know if the Python.org tracker or the 
 code.google.com tracker is more appropriate. In any case it doesn't 
 seem like a lot is being done.


In case anyone else needs this...

A patch was submitted for the bundled sqlite3 module 
in http://bugs.python.org/issue10740 and for pysqlite 
at https://code.google.com/p/pysqlite/issues/detail?id=24 adding an 
optional `operation_needs_transaction_callback` to the connection.

I've added this to my pysqlite-static-env branch along with instructions to 
set this up for sqlalchemy 
here: https://code.google.com/p/pysqlite-static-env/

Laurence





[sqlalchemy] Help understanding database round trips

2013-03-26 Thread Laurence Rowe
I'd like to measure the number of database round trips that are associated 
with a request to my web application so I can write tests to catch 
potential performance regressions. I've been using 
mock.Mock(wraps=connection.execute) to keep count and while I think this 
works for selects, I don't have a good understanding of when executed 
statements result in serialised network round trips or when they can be 
'pipelined', e.g. when inserting new rows.

So to take a real example, posting to my web app results in the insertion 
of a row in each of four tables. Afterwards, the connection.execute mock's 
call_count is 4 and I see 4 INSERTs in my sqlalchemy log:

INFO:sqlalchemy.engine.base.Engine:INSERT INTO resources (rid) VALUES (?)
INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c',)
INFO:sqlalchemy.engine.base.Engine:INSERT INTO statements (rid, predicate, 
object, tid) VALUES (?, ?, ?, ?)
INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c', 
'organism', '{scientific_name: Homo sapiens, taxon_id: 9606, _uuid: 
7745b647-ff15-4ff3-9ced-b897d4e2983c, organism_name: human}', 
'edea69795dd14eb6bed73321cff54471')
INFO:sqlalchemy.engine.base.Engine:INSERT INTO transactions (tid, data) 
VALUES (?, ?)
INFO:sqlalchemy.engine.base.Engine:('edea69795dd14eb6bed73321cff54471', 
'{tid: edea6979-5dd1-4eb6-bed7-3321cff54471, description: 
/organisms/, user:  remoteuser:TEST}')
INFO:sqlalchemy.engine.base.Engine:INSERT INTO current_statements (rid, 
predicate, sid) VALUES (?, ?, ?)
INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c', 
'organism', 1)

Is there some way to tell which statements end up having data read from the 
cursor? Here I have one dependent insert which required the autoincremented 
primary key from a related row whereas the rest were fully specified.
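The counting setup described above can be shown against the stdlib sqlite3 DBAPI (a plain connection stands in here for the web app's SQLAlchemy connection):

```python
import sqlite3
from unittest import mock

conn = sqlite3.connect(':memory:')
# Wrap execute so every call is counted while still delegating
# to the real method.
execute = mock.Mock(wraps=conn.execute)

execute('CREATE TABLE resources (rid TEXT)')
execute("INSERT INTO resources (rid) VALUES ('7745b647')")
rows = execute('SELECT rid FROM resources').fetchall()

assert execute.call_count == 3
assert rows == [('7745b647',)]
```

As the question notes, this counts executed statements, not network round trips; whether a statement also reads from the cursor is a separate question.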

Laurence





Re: [sqlalchemy] Help understanding database round trips

2013-03-26 Thread Laurence Rowe


On Tuesday, 26 March 2013 14:54:08 UTC-7, Michael Bayer wrote:


 On Mar 26, 2013, at 5:24 PM, Laurence Rowe lauren...@gmail.comjavascript: 
 wrote: 

  I'd like to measure the number of database round trips that are 
 associated with a request to my web application so I can write tests to 
 catch potential performance regressions. I've been using 
 mock.Mock(wraps=connection.execute) to keep count and while I think this 
 works for selects, I don't have a good understanding of when executed 
 statements result in serialised network round trips or when they can be 
 'pipelined', e.g. when inserting new rows. 
  
  So to take a real example, posting to my web app results in the 
 insertion of a row in each of four tables. Afterwards, the 
 connection.execute mock's call_count is 4 and I see 4 INSERTs in my 
 sqlalchemy log: 
  
  INFO:sqlalchemy.engine.base.Engine:INSERT INTO resources (rid) VALUES 
 (?) 
  INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c',) 
  INFO:sqlalchemy.engine.base.Engine:INSERT INTO statements (rid, 
 predicate, object, tid) VALUES (?, ?, ?, ?) 
  INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c', 
 'organism', '{scientific_name: Homo sapiens, taxon_id: 9606, _uuid: 
 7745b647-ff15-4ff3-9ced-b897d4e2983c, organism_name: human}', 
 'edea69795dd14eb6bed73321cff54471') 
  INFO:sqlalchemy.engine.base.Engine:INSERT INTO transactions (tid, data) 
 VALUES (?, ?) 
  INFO:sqlalchemy.engine.base.Engine:('edea69795dd14eb6bed73321cff54471', 
 '{tid: edea6979-5dd1-4eb6-bed7-3321cff54471, description: 
 /organisms/, user:  remoteuser:TEST}') 
  INFO:sqlalchemy.engine.base.Engine:INSERT INTO current_statements (rid, 
 predicate, sid) VALUES (?, ?, ?) 
  INFO:sqlalchemy.engine.base.Engine:('7745b647ff154ff39cedb897d4e2983c', 
 'organism', 1) 
  
  Is there some way to tell which statements end up having data read from 
 the cursor? Here I have one dependent insert which required the 
 autoincremented primary key from a related row whereas the rest were fully 
 specified. 

 you can catch SQL traffic more directly using connection events, there are 
 two varieties, one catches the high level SQL expression construct, and the 
 other catches the activity at the level of cursor.execute() or 
 cursor.executemany() (DBAPI cursor).Intercepting execute/executemany is 
 the most accurate way to see all DBAPI interaction fully and exactly as 
 it's being passed: 


 http://docs.sqlalchemy.org/en/rel_0_8/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
  

 http://docs.sqlalchemy.org/en/rel_0_8/core/events.html#sqlalchemy.events.ConnectionEvents.after_cursor_execute
  

 next aspect, if you consider SELECT statements as those which read data 
 from the cursor, you can tell if a cursor has results pending on it by 
 checking if cursor.description is not None.  You can check for this inside 
 of the after_cursor_execute event.   

 if you're trying to look at INSERT statements and distinguish between 
 those which use an implicit autoincrementing primary key and those which 
 have it specified, depending on specifics this might be more 
 straightforward at the SQL expression level, if you are using 
 table.insert() constructs (or via the ORM which also does).A generic 
 way would be to look at the Table that's the subject of the insert, then 
 looking at the parameters to see if parameters referring to a full primary 
 key are present.  These values are all present on the context passed to the 
 execute events.Another way that might work more expediently, but is a 
 little more of an inside method, is to take a look at 
 context.inserted_primary_key inside of the after_cursor_execute event; if 
 it contains a full primary key without the value None present, that means 
 the dialect knew ahead of time the full primary key value (this will only 
 be filled in for a single statement execute, not an executemany). 

 I can work up an example using any combination of these techniques, if you 
 can let me know which might seem workable.


Thanks, moving to a connection event seems like a cleaner way to measure 
this than the mock.
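The cursor.description check suggested above is easy to see with the stdlib sqlite3 DBAPI: per the DBAPI spec, description is None after statements that leave no result rows pending.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

cur.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)')
assert cur.description is None      # DDL: nothing to fetch

cur.execute("INSERT INTO t (name) VALUES ('a')")
assert cur.description is None      # plain INSERT: nothing to fetch

cur.execute('SELECT id, name FROM t')
assert cur.description is not None  # SELECT: result rows pending
```

Inside an after_cursor_execute listener the same check distinguishes statements that read data back from those that only write.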

When using the ORM, does SA always fetch any db generated primary key? 
(Either through a sequence preexecute or by subsequently reading from the 
cursor metadata depending on the dialect.) If not, then ideally I'd want 
only those that were fetched for use in a subsequent insert. If so, then I 
should probably tweak my structure - I'm only really using a non-UUID 
primary key because it was the easiest way to get an autoincrementing 
column in SQLite... Deployment will be on Postgres.

I had another idea to look for UOWTransaction.cycles evaluating True, but I 
think that might be telling me something different.

Laurence


[sqlalchemy] zope.sqlalchemy 0.5 released

2010-06-07 Thread Laurence Rowe
I've just uploaded the 0.5 release of zope.sqlalchemy to PYPI,
bringing SQLAlchemy 0.6 support.

http://pypi.python.org/pypi/zope.sqlalchemy

Savepoint release support (nested transaction commit) is currently
being discussed on ZODB-dev:

https://mail.zope.org/pipermail/zodb-dev/2010-June/thread.html#13447
https://mail.zope.org/pipermail/zodb-dev/2010-January/thread.html#13108

Laurence


Changes
=======

0.5 (2010-06-07)
----------------

* Remove redundant session.flush() / session.clear() on savepoint
  operations. These were only needed with SQLAlchemy 0.4.x.

* SQLAlchemy 0.6.x support. Require SQLAlchemy >= 0.5.1.

* Add support for running ``python setup.py test``.

* Pull in pysqlite explicitly as a test dependency.

* Set up sqlalchemy mappers in test setup and clear them in tear down.
  This makes the tests more robust and clears up the global state after.
  It caused the tests to fail when other tests in the same run called
  clear_mappers.




[sqlalchemy] Re: session lifecycle and wsgi

2010-04-29 Thread Laurence Rowe
On Apr 28, 4:38 pm, Chris Withers ch...@simplistix.co.uk wrote:
 Laurence Rowe wrote:
  Chris,

  This is what the combination of repoze.tm2/transaction and
  zope.sqlalchemy does for you. You don't have to do anything special
  other than that.

 It doesn't do the .remove().
 BFG currently has a bit of horribleness to make that work.
 I'd like to get rid of it or make it less horrible...

Can you point me at where it does that please.

Laurence




[sqlalchemy] Re: session lifecycle and wsgi

2010-04-28 Thread Laurence Rowe
Chris,

This is what the combination of repoze.tm2/transaction and
zope.sqlalchemy does for you. You don't have to do anything special
other than that.

Laurence

On Apr 28, 2:37 pm, Chris Withers ch...@simplistix.co.uk wrote:
 Hi All,

 I'm still trying to get an answer on this...

 Am I right in understanding that the basic session lifecycle should be:

 try:
      use session
      session.commit()
 except:
     log()
     session.rollback()
 finally:
     session.remove()

 The structure I've traditionally used with transactions has been:

 try:
      use session
 except:
     log()
     session.rollback()
 else:
     session.commit()

 Is this okay? Why would the first setup be preferable?
 (ie: what's wrong with my location of the commit() call?)
  What happens when the remove() call is omitted?

 For me, the above will commonly be used in the context of a wsgi app
 (BFG to be precise). How do people manage the lifecycle in that context
 (ie: multi-thread, scoped sessions)?

 My thoughts were on a bit of wsgi middleware so that I can manage the
 app's sessions without having to fiddle with BFG's wsgi application.
 I'll be using sessions in multiple databases to boot.

 Is there existing middleware that does this? Is it a good/bad idea?

 cheers,

 Chris
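
One answer to the commit() placement question: session.commit() can itself
raise (it flushes pending changes first), so keeping it inside the try block
ensures a failed commit is rolled back as well. Below is a minimal sketch of
the combined lifecycle, assuming a scoped_session factory and a work callable
(both names are illustrative, not part of any framework):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

def run_request(Session, work):
    # Commit inside the try so a failure during flush/commit is also
    # rolled back; always remove() so the next request handled by this
    # thread starts with a fresh session.
    session = Session()
    try:
        work(session)
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))
run_request(Session, lambda s: s.execute(text("SELECT 1")))
```

This is essentially what a transaction-managing middleware does on your
behalf, once per request.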





[sqlalchemy] Savepoint release and nested transactions

2010-01-16 Thread Laurence Rowe
Hi,

Following a request[1] for savepoint release support in
zope.sqlalchemy, I've been looking into how this might be done. Adding
the necessary support to Zope's transaction module was quite simple
[2], but the mapping of Zope transaction savepoints -> SQLAlchemy
nested transactions -> database savepoints gives a problem...

How can I get SQLAlchemy to release a savepoint without also
releasing all subsequent savepoints (the nested transactions)? This is
demonstrated by the doctest below from my branch of Zope's transaction
module:

Savepoint release
-----------------

Some data managers may only support a limited number of savepoints.

>>> dm['bob-balance'] = 100.0
>>> dm['bob-balance']
100.0
>>> savepoint1 = transaction.savepoint()

>>> dm['bob-balance'] = 200.0
>>> dm['bob-balance']
200.0
>>> savepoint2 = transaction.savepoint()

>>> dm['bob-balance'] = 300.0
>>> dm['bob-balance']
300.0
>>> savepoint3 = transaction.savepoint()

To release resources on the data manager, a savepoint may be released:

>>> savepoint2.release()

The savepoint then becomes invalid and may no longer be used:

>>> savepoint2.rollback()
Traceback (most recent call last):
...
InvalidSavepointError

Subsequent savepoints remain valid:

>>> dm['bob-balance'] = 400.0
>>> dm['bob-balance']
400.0
>>> savepoint3.rollback()
>>> dm['bob-balance']
300.0

As do previous savepoints:

>>> savepoint1.rollback()
>>> dm['bob-balance']
100.0

>>> transaction.abort()


Laurence


[1] 
http://groups.google.com/group/sqlalchemy/browse_thread/thread/b2594ff621538f3f

[2] http://svn.zope.org/repos/main/transaction/branches/elro-savepoint-release




[sqlalchemy] Re: Savepoint release and nested transactions

2010-01-16 Thread Laurence Rowe
On Jan 16, 6:39 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 16, 2010, at 12:35 PM, Laurence Rowe wrote:

  Hi,

  Following a request[1] for savepoint release support in
  zope.sqlalchemy, I've been looking into how this might be done. Adding
  the necessary support to Zope's transaction module was quite simple
  [2], but the mapping of Zope transaction savepoints -> SQLAlchemy
  nested transactions -> database savepoints gives a problem...

  How can I get SQLAlchemy to release a savepoint without also
  releasing all subsequent savepoints (the nested transactions)? This is
  demonstrated by the doctest below from my branch of Zope's transaction
  module:

 OK, the first thing I see here, unless I'm misunderstanding, is that you're 
 looking to remove a savepoint from an arbitrary point in the nesting.   I.e. 
 not just the endpoint.

 So the next road I went down, OK, we would need a way to manipulate the list 
 of Transactions at the engine level and SessionTransactions at the ORM level, 
 such that a node can be removed from the middle of the list, issuing a 
 RELEASE, and leaving the list otherwise with the same nodes and endpoints.

 I then made a quick patch against engine/base.py to try this.     A test 
 looks like this:

 trans = conn.begin()
 conn.execute("select 1")
 trans2 = conn.begin_nested()
 conn.execute("select 1")
 trans3 = conn.begin_nested()
 conn.execute("select 1")
 trans2.release()
 trans3.release()

 trans.commit()

 Do I have this right ?   trans2 is the first savepoint, trans3 is the second. 
  We want to release trans2 first, leaving trans3 intact.

 PG at least does not allow this (trimmed):

 BEGIN
 select 1
 SAVEPOINT sa_savepoint_1
 select 1
 SAVEPOINT sa_savepoint_2
 select 1
 RELEASE SAVEPOINT sa_savepoint_1
 RELEASE SAVEPOINT sa_savepoint_2

 sqlalchemy.exc.InternalError: (InternalError) no such savepoint
  'RELEASE SAVEPOINT sa_savepoint_2' {}

 Here's what PG docs have to say:

 http://www.postgresql.org/docs/8.1/static/sql-release-savepoint.html

 "RELEASE SAVEPOINT also destroys all savepoints that were established after 
 the named savepoint was established."

 I.e. that statement is in direct contradiction to your request "without also 
 releasing all subsequent savepoints".

 So Postgresql, which I consider the gold standard of transaction operation, 
 would appear to not support this.  Do we know that this is a viable feature 
 to be pursuing ?

Thanks for looking into this. I hadn't realized that subsequent
savepoints would also be destroyed. I'll make Zope's transaction
module match the PostgreSQL behaviour. As savepoints and nested
transactions are then equivalent I shouldn't need anything more from
SQLAlchemy.

Laurence
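
The behaviour Michael quotes is easy to demonstrate outside SQLAlchemy:
SQLite follows the same rule as PostgreSQL, so a small sqlite3 session shows
the later savepoint being destroyed when the earlier one is released (a
sketch; the table and savepoint names are illustrative):

```python
import sqlite3

# RELEASE SAVEPOINT semantics: releasing a savepoint also destroys
# every savepoint established after it, in SQLite as in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("CREATE TABLE t (v INTEGER)")
cur.execute("SAVEPOINT sp1")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SAVEPOINT sp2")
cur.execute("INSERT INTO t VALUES (2)")
cur.execute("RELEASE SAVEPOINT sp1")  # sp2 is destroyed along with sp1
try:
    cur.execute("ROLLBACK TO SAVEPOINT sp2")
    sp2_gone = False
except sqlite3.OperationalError:
    sp2_gone = True  # sp2 no longer exists
cur.execute("COMMIT")
```

Both inserts survive, because releasing sp1 folds its work (and sp2's) into
the outer transaction rather than discarding it.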




[sqlalchemy] Re: Creating a Dialect for Amazon SimpleDB

2009-06-03 Thread Laurence Rowe

It would be interesting to see if this could be made to work. The
SimpleDB model is rather different from the relational model, so it
would only be useful if your application does not use any advanced
features (no joins, etc.); each 'domain' might map to one big (albeit
sparse) table.

Laurence

On Jun 3, 8:38 pm, enj enjah...@gmail.com wrote:
 Hello all,

 I am new to sqlalchemy and was introduced to it by a project (cjklib)
 that uses it. I want to migrate this project to Amazon SimpleDB and
 since it makes extensive use of sqlalchemy I thought the best course
 of action might be to make a SimpleDB dialect in sqlalchemy which
 could possibly benefit other projects trying to move to AWS.

 The purpose of my post is to see if there is any such effort out there
 or any other interest in this. Also I was wondering if there is any
 good docs/tutorials on implementing a Dialect.
  From my current understanding I plan to implement engine.default
  http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/lib/sqlalchem...
  which will use the boto library to interface with simpledb
  http://code.google.com/p/boto/

 Any thoughts?

 --
 Ian Johnson



[sqlalchemy] Re: proposed extension to SessionExtension: after_bulk_operation

2008-11-05 Thread Laurence Rowe

On Nov 5, 9:14 am, Martijn Faassen [EMAIL PROTECTED] wrote:
 Michael Bayer wrote:
   I wonder if a before_ hook should be provided as well.

 I don't know myself; Laurence, if you're listening in perhaps you'd care
 to comment?

While there is aesthetic value to having symmetrical before and after
hooks, I don't have a specific need for another hook here. Some of the
other hooks also lack symmetric partners. I'm certainly not opposed to
adding them, but maybe we should leave it until someone has a
requirement for them.

Laurence



[sqlalchemy] Re: multiprocessing best practices

2008-09-23 Thread Laurence Rowe

On Sep 23, 4:20 pm, mg [EMAIL PROTECTED] wrote:
 Hello There,
 I am developing an application that uses sqlalchemy and the py
 processing packages. My question is this, what is the best practice
 for using sessions in this type of app. Each subprocess needs to
 access my db to get work, so currently I am starting a scoped session
 in the run method of the class and using that session for all db work
 within that subprocess. Is that the optimal way, or is there a better
 way of working?

Use one scoped session + engine per process. Engines cannot be shared
across processes. The scoped session machinery will give you thread
local connections.

If you are not using threads you do not strictly need scoped sessions,
but using them now gives you the flexibility to use them should you
choose to at some point in the future.

Laurence
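
In outline, the one-engine-per-process pattern looks like this (a sketch
using the modern SQLAlchemy API; the function name and in-memory SQLite URL
are illustrative):

```python
import multiprocessing

from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

def run_worker(db_url):
    # Build the engine and scoped session *inside* the child process:
    # an engine's connections hold sockets/file descriptors that must
    # not be shared across a fork.
    engine = create_engine(db_url)
    Session = scoped_session(sessionmaker(bind=engine))
    try:
        return Session().execute(text("SELECT 1")).scalar()
    finally:
        Session.remove()
        engine.dispose()

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=run_worker, args=("sqlite://",))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```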

