[sqlalchemy] [ANN] Fixture Goes 1.0 (tool for testing with data)

2008-07-18 Thread Kumar McMillan

I've just pushed 1.0 -- the *I Heart Data* release -- of fixture, a
Python module for loading and referencing test data. It is used
heavily at my work in two test suites: one for the functional tests of
an ETL framework and another for a Pylons + Elixir (SQLAlchemy) + Ext
JS web application.

http://farmdev.com/projects/fixture/

easy_install -U fixture


Highlights of This Release

* SQLAlchemy 0.4 now finally works as a nice fixture backend
  (0.5beta support is shaky, but mostly there).
* Even more documentation! The new docs
(http://farmdev.com/projects/fixture/) have been completely
reorganized thanks to the venerable Sphinx. Of special note is a new
tutorial that shows how to use fixture to test a Pylons + SQLAlchemy
application: http://farmdev.com/projects/fixture/using-fixture-with-pylons.html

Many thanks to those who submitted issues and patches, especially to
Manuel Aristarán, sokann, Brian Lee Hawthorne, and Jeffrey Cousens.

What's next?  The fixture command that generates DataSet classes from a
real database needs some attention.  It doesn't work with SQLAlchemy
0.5beta yet and could work better with SQLAlchemy-based data models in
general.  Aside from that, fixture seems to have stabilized for my apps
at work, so I'll be waiting to hear from the community about which other
areas to improve.

-Kumar

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: a way to share Session.mapper(SomeObject) across two scoped sessions?

2008-01-25 Thread Kumar McMillan

On Jan 25, 2008 11:58 AM, Michael Bayer [EMAIL PROTECTED] wrote:
> ...so this
> test will pass if you change setUp to read:
>
>     Session.mapper(SomeObject, table, properties={
>         'options': relation(SomeOtherObject)
>     }, save_on_init=False)
>     Session.mapper(SomeOtherObject, table2, save_on_init=False)

Ah, so it does.  I saw that in the code and thought I'd tried it, but I
guess not.  Thanks, this works.



> The Session.mapper function is not worth it, in my opinion, it exists
> due to the sheer popularity of its previous incarnation,
> assign_mapper.  I much prefer keeping things explicit.

I can understand that.  I am trying to get this working for an
application that has its entire model declared in Elixir already.  I
like how easy Elixir is, so I'd rather not abandon that layer.
But there is probably a way to configure Elixir for
save_on_init=False; I will give it a go.
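Something like the following might do it (untested; `using_mapper_options` and its `save_on_init` argument are assumptions to verify against the Elixir docs for the version in use):

```python
from elixir import Entity, Field, Unicode, using_mapper_options

class SomeObject(Entity):
    # forward save_on_init=False to the underlying Session.mapper() call,
    # so instances are not auto-saved to the scoped session on __init__
    using_mapper_options(save_on_init=False)
    name = Field(Unicode(30))
```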

thanks again, Kumar




[sqlalchemy] Re: a way to share Session.mapper(SomeObject) across two scoped sessions?

2008-01-25 Thread Kumar McMillan
On Jan 24, 2008 2:46 PM, Kumar McMillan [EMAIL PROTECTED] wrote:
> but, since I am explicitly saving it to a new session and the session
> has a different scope, shouldn't this be possible?  Maybe I'm not
> fully understanding scoped sessions.

I am hesitant to file this as a bug because I'm not sure if I
understand scoped sessions correctly.  My understanding is that one
should be able to explicitly attach the same object to different
sessions as long as those sessions have different scopes.

Here are two tests diffed against sqlalchemy trunk.  The first one
passes, as it attaches an unassigned mapped class to multiple sessions
of varying scope.  The second one fails with:

    InvalidRequestError: Object '[EMAIL PROTECTED]' is already attached
    to session '19685424' (this is '19687888')

Should I file a ticket for this?  I looked at the session code briefly
but wasn't sure how to fix it.


Index: test/orm/session.py
===================================================================
--- test/orm/session.py (revision 4097)
+++ test/orm/session.py (working copy)
@@ -962,6 +962,25 @@
         Session.remove()
 
         assert SomeObject(id=1, data='hello', options=[SomeOtherObject(someid=1)]) == Session.query(SomeObject).one()
+
+    def test_attach_objects_to_multiple_sess(self):
+        Session = scoped_session(sessionmaker())
+        PrivateSession = scoped_session(create_session, scopefunc=lambda: '__private_session__')
+
+        class SomeObject(fixtures.Base): pass
+        mapper(SomeObject, table)
+
+        so1 = SomeObject()
+        priv_sess = PrivateSession()
+        priv_sess.save(so1)
+        priv_sess.flush()
+
+        so2 = SomeObject()
+        sess = Session()
+        sess.save(so2)
+        sess.flush()
+
+        PrivateSession.remove()
 
 
 class ScopedMapperTest(PersistTest):
@@ -1027,6 +1046,21 @@
             pass
         Session.mapper(Baz, table2, extension=ext)
         assert hasattr(Baz, 'query')
+
+    def test_attach_assigned_objects_to_multiple_sess(self):
+        PrivateSession = scoped_session(create_session, scopefunc=lambda: '__private_session__')
+
+        so1 = SomeObject()
+        priv_sess = PrivateSession()
+        priv_sess.save(so1)
+        priv_sess.flush()
+
+        so2 = SomeObject()
+        sess = Session()
+        sess.save(so2)
+        sess.flush()
+
+        PrivateSession.remove()
 
     def test_validating_constructor(self):
         s2 = SomeObject(someid=12)
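The scoping behavior these tests exercise can be sketched without a database.  Below is a minimal, dependency-free stand-in for the per-scope registry that scoped_session maintains (ScopedRegistry and FakeSession are invented names for illustration, not SQLAlchemy API): repeated calls within one scope return the same object, while registries with different scope functions hand out different objects.

```python
class ScopedRegistry:
    """Minimal stand-in for the registry behind scoped_session():
    one object per scope key, created lazily on first access."""
    def __init__(self, createfunc, scopefunc):
        self.createfunc = createfunc
        self.scopefunc = scopefunc
        self.registry = {}

    def __call__(self):
        key = self.scopefunc()
        if key not in self.registry:
            self.registry[key] = self.createfunc()
        return self.registry[key]


class FakeSession(object):
    """Stand-in for a Session instance."""
    pass


# a default scope vs. a private scope, as in the tests above
Session = ScopedRegistry(FakeSession, scopefunc=lambda: "default")
PrivateSession = ScopedRegistry(FakeSession, scopefunc=lambda: "__private_session__")

assert Session() is Session()             # same scope -> same session object
assert Session() is not PrivateSession()  # different registry/scope -> different object
```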




[Attachment: sa_mapper_scope_tests.diff]


[sqlalchemy] a way to share Session.mapper(SomeObject) across two scoped sessions?

2008-01-24 Thread Kumar McMillan

This is a spin-off from the thread on how to do setup/test-app/teardown:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/4fd6344bf8b9c033
(thanks for everyone's help on that)

I had posted test code that illustrates it's possible to save/delete a
mapped object using two different sessions independently.  But now,
when I try using a mapped class that has been assigned a session (that
is, Session.mapper() instead of mapper()), the test errors with:

    InvalidRequestError: Object '[EMAIL PROTECTED]' is already attached
    to session '18333488' (this is '18402096')

But since I am explicitly saving it to a new session and the session
has a different scope, shouldn't this be possible?  Maybe I'm not
fully understanding scoped sessions.

FWIW, if I del so._sa_session_id before fixture_session.save(so) then
the assertions later on all pass.  Not that I'm going to do this,
promise ;)

-Kumar

PS. I thought for a minute I needed save_on_init=False but this didn't solve it.


from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, mapper
from sqlalchemy.exceptions import IntegrityError

PrivateSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True),
    scopefunc=lambda: __name__)  # a private scope
AppSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True))
dsn = 'sqlite:///:memory:'

def test_sa_scoping():
    engine = create_engine(dsn)
    metadata = MetaData()

    sometable = Table('sometable', metadata,
        Column('id', Integer, primary_key=True),
        Column('keyname', String(30), unique=True))
    class SomeObject(object):
        pass

    conn = engine.connect()
    AppSession.configure(bind=conn)
    app_session = AppSession()

    metadata.create_all(bind=engine)
    # when this is just mapper(), the test passes
    AppSession.mapper(SomeObject, sometable)

    conn = engine.connect()
    PrivateSession.configure(bind=conn)
    trans = conn.begin()
    fixture_session = PrivateSession()
    # create some data to test with:
    so = SomeObject()
    so.keyname = "some unique key name"
    fixture_session.save(so)
    fixture_session.flush()
    trans.commit()

    trans = conn.begin()
    so2 = SomeObject()
    so2.keyname = "some unique key name"
    app_session.save(so2)
    try:
        app_session.flush()
    except IntegrityError:
        # violated unique key
        trans.rollback()
    else:
        trans.commit()
    app_session.close()

    # after testing application code, I want to tear down
    # test data even if the app had an error:
    assert so in fixture_session
    trans = conn.begin()
    fixture_session.delete(so)
    fixture_session.flush()
    trans.commit()
    rs = fixture_session.query(SomeObject).all()
    assert rs == [], "unexpected: %s" % rs

if __name__ == '__main__':
    test_sa_scoping()

Traceback (most recent call last):
  File "test_sa_scoping.py", line 65, in ?
    test_sa_scoping()
  File "test_sa_scoping.py", line 37, in test_sa_scoping
    fixture_session.save(so)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/session.py", line 822, in save
    self._save_impl(instance, entity_name=entity_name)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/session.py", line 993, in _save_impl
    self._attach(instance)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/session.py", line 1029, in _attach
    raise exceptions.InvalidRequestError("Object '%s' is already attached
sqlalchemy.exceptions.InvalidRequestError: Object
'[EMAIL PROTECTED]' is already attached to session '18333488' (this
is '18402096')




[sqlalchemy] How to accomplish setup/run-app/teardown with mapped classes and sessions?

2008-01-23 Thread Kumar McMillan
Hello, I have not been able to figure this out from the docs.

I would like to set up and tear down test data using mapped classes.
The problem is that those same mapped classes need to be used by the
application under test, and in case there is an error the teardown
still needs to run so that subsequent tests can set up more data.  It
seemed like the setup/teardown could be accomplished with a privately
scoped session but what I see is that this private session collides
with that of the application.  Here is a failing test case (also
attached) that illustrates exactly what I need to do (sorry it's a
little long).  The reason it catches the IntegrityError is because
during testing any kind of error can happen and I need to teardown
data regardless.  Should I give up and use insert statements and
engine objects for the setup/teardown?  Or is there a way to make this
test case pass?  I am using sqlalchemy 0.4.2p3

from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, mapper
from sqlalchemy.exceptions import IntegrityError

PrivateSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True),
    scopefunc=lambda: __name__)  # a private scope
AppSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True))
dsn = 'sqlite:///:memory:'

def test_sa_scoping():
    engine = create_engine(dsn)
    metadata = MetaData()

    sometable = Table('sometable', metadata,
        Column('id', Integer, primary_key=True),
        Column('keyname', String(30), unique=True))
    class SomeObject(object):
        pass

    metadata.create_all(bind=engine)
    PrivateSession.configure(bind=engine)
    AppSession.configure(bind=engine)
    mapper(SomeObject, sometable)

    fixture_session = PrivateSession()
    # create some data to test with:
    so = SomeObject()
    so.keyname = "some unique key name"
    fixture_session.save(so)
    fixture_session.flush()

    app_session = AppSession()
    so2 = SomeObject()
    so2.keyname = "some unique key name"
    app_session.save(so2)
    try:
        app_session.flush()
    except IntegrityError:
        # violated unique key
        pass
    app_session.close()

    # after testing application code, I want to tear down
    # test data even if the app had an error:
    assert so in fixture_session
    fixture_session.delete(so)
    fixture_session.flush()
    rs = fixture_session.query(SomeObject).all()
    assert rs == [], "unexpected: %s" % rs

if __name__ == '__main__':
    test_sa_scoping()

Traceback (most recent call last):
  File "test_sa_scoping.py", line 55, in ?
    test_sa_scoping()
  File "test_sa_scoping.py", line 50, in test_sa_scoping
    fixture_session.flush()
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/session.py", line 693, in flush
    self.uow.flush(self, objects)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 215, in flush
    flush_context.execute()
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 437, in execute
    UOWExecutor().execute(self, tasks)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 930, in execute
    self.execute_delete_steps(trans, task)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 951, in execute_delete_steps
    self.delete_objects(trans, task)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 936, in delete_objects
    task.mapper._delete_obj(task.polymorphic_todelete_objects, trans)
  File "/Users/kumar/env/sqlalchemy-exp/lib/python2.4/site-packages/SQLAlchemy-0.4.2p3-py2.4.egg/sqlalchemy/orm/mapper.py", line 1219, in _delete_obj
    raise exceptions.ConcurrentModificationError("Deleted rowcount %d does not match number of objects deleted %d" % (c.rowcount, len(del_objects)))
sqlalchemy.exceptions.ConcurrentModificationError: Deleted rowcount 0
does not match number of objects deleted 1

NOTE:

When I comment out the code that uses AppSession, this test passes.




[sqlalchemy] Re: How to accomplish setup/run-app/teardown with mapped classes and sessions?

2008-01-23 Thread Kumar McMillan

Hi

On Jan 23, 2008 4:07 PM, Michael Bayer [EMAIL PROTECTED] wrote:
> your teardown code can't have any dependencies on the test code
> itself.  So at the very least start the teardown phase with
> PrivateSession.close() so that you start fresh.

I tried adding that to the teardown code, but then this assertion fails:

    assert so in fixture_session

and if I comment out the assertion I get the same ConcurrentModificationError.

Without a stored reference to the object that was saved, I'm not sure
how to delete it [without monkeying with the last inserted id].


> the two ways to create tests that use isolated data are to either
> create and drop tables local to the unit tests themselves, or to run
> the unit tests within an enclosing Transaction (as in, conn =
> engine.connect(); trans = conn.begin(); session.bind=conn) which is
> rolled back at the end of the unit tests.  The SQLA unit tests
> themselves use the former method but I have applied the latter method
> to Pylons tests (and is also what you usually do with Java/Hibernate
> unit tests).

ok, I think I see what you're saying.  Removing the
PrivateSession.close(), I tried implementing begin/rollback by
changing the app segment to:

conn = engine.connect()
AppSession.configure(bind=conn)
app_session = AppSession()
trans = conn.begin()
so2 = SomeObject()
so2.keyname = "some unique key name"
app_session.save(so2)
try:
    app_session.flush()
except IntegrityError:
    # violated unique key
    trans.rollback()
else:
    trans.commit()
app_session.close()

...but it still fails with the same error, "Deleted rowcount 0 does not
match number of objects deleted 1".  What am I missing?  I don't
understand how the teardown code is dependent on the app code if it is
using a different session and a different connection (now) to save the
same mapped class instances.

Here is the altered test case:



from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, mapper
from sqlalchemy.exceptions import IntegrityError

PrivateSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True),
    scopefunc=lambda: __name__)  # a private scope
AppSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True))
dsn = 'sqlite:///:memory:'

def test_sa_scoping():
    engine = create_engine(dsn)
    metadata = MetaData()

    sometable = Table('sometable', metadata,
        Column('id', Integer, primary_key=True),
        Column('keyname', String(30), unique=True))
    class SomeObject(object):
        pass

    metadata.create_all(bind=engine)
    PrivateSession.configure(bind=engine)
    mapper(SomeObject, sometable)

    fixture_session = PrivateSession()
    # create some data to test with:
    so = SomeObject()
    so.keyname = "some unique key name"
    fixture_session.save(so)
    fixture_session.flush()

    conn = engine.connect()
    AppSession.configure(bind=conn)
    app_session = AppSession()
    trans = conn.begin()
    so2 = SomeObject()
    so2.keyname = "some unique key name"
    app_session.save(so2)
    try:
        app_session.flush()
    except IntegrityError:
        # violated unique key
        trans.rollback()
    else:
        trans.commit()
    app_session.close()

    # after testing application code, I want to tear down
    # test data even if the app had an error:
    assert so in fixture_session
    fixture_session.delete(so)
    fixture_session.flush()
    rs = fixture_session.query(SomeObject).all()
    assert rs == [], "unexpected: %s" % rs

if __name__ == '__main__':
    test_sa_scoping()


same exception...

sqlalchemy.exceptions.ConcurrentModificationError: Deleted rowcount 0
does not match number of objects deleted 1




[sqlalchemy] Re: How to accomplish setup/run-app/teardown with mapped classes and sessions?

2008-01-23 Thread Kumar McMillan

On Jan 23, 2008 4:36 PM, Kumar McMillan [EMAIL PROTECTED] wrote:
> ...but it still fails with the same error, "Deleted rowcount 0 does not
> match number of objects deleted 1".  What am I missing?  I don't
> understand how the teardown code is dependent on the app code if it is
> using a different session and a different connection (now) to save the
> same mapped class instances.

Ah, I just had to put the setup/teardown routines in their own
respective transactions too.  Now it passes.  Thanks!  Next... to see
if I can clean it up a bit and fit it into my app.

Passing test:


from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, mapper
from sqlalchemy.exceptions import IntegrityError

PrivateSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True),
    scopefunc=lambda: __name__)  # a private scope
AppSession = scoped_session(
    sessionmaker(autoflush=False, transactional=True))
dsn = 'sqlite:///:memory:'

def test_sa_scoping():
    engine = create_engine(dsn)
    metadata = MetaData()

    sometable = Table('sometable', metadata,
        Column('id', Integer, primary_key=True),
        Column('keyname', String(30), unique=True))
    class SomeObject(object):
        pass

    metadata.create_all(bind=engine)
    mapper(SomeObject, sometable)

    conn = engine.connect()
    PrivateSession.configure(bind=conn)
    trans = conn.begin()
    fixture_session = PrivateSession()
    # create some data to test with:
    so = SomeObject()
    so.keyname = "some unique key name"
    fixture_session.save(so)
    fixture_session.flush()
    trans.commit()

    conn = engine.connect()
    AppSession.configure(bind=conn)
    app_session = AppSession()
    trans = conn.begin()
    so2 = SomeObject()
    so2.keyname = "some unique key name"
    app_session.save(so2)
    try:
        app_session.flush()
    except IntegrityError:
        # violated unique key
        trans.rollback()
    else:
        trans.commit()
    app_session.close()

    # after testing application code, I want to tear down
    # test data even if the app had an error:
    assert so in fixture_session
    trans = conn.begin()
    fixture_session.delete(so)
    fixture_session.flush()
    trans.commit()
    rs = fixture_session.query(SomeObject).all()
    assert rs == [], "unexpected: %s" % rs

if __name__ == '__main__':
    test_sa_scoping()




[sqlalchemy] Re: detect if objects have been deleted from a session?

2007-09-25 Thread Kumar McMillan

Thanks for the quick response.

On 9/24/07, Michael Bayer [EMAIL PROTECTED] wrote:
> but also i dont understand the problem you're having.  the
> ConcurrentModificationError should only happen if you are issuing a
> literal DELETE statement to the database which conflicts with the
> session trying to delete the instance, or if you've already issued
> the delete and flushed in a session concurrent to the current one.

Yes, this was exactly what I was doing, whoops.  When I changed it to
only flush once, I was painfully reminded that my regression suite was
failing in mysterious ways because I didn't have any relation
properties assigned to my mappers (it looked like the parent object
wasn't getting saved).  At the time I couldn't figure that out.  Sigh.
But yes, putting flush in the right place made the newer code work :)

Now I have what appear to be more misconfigured mapper issues, but those
should be easy to find answers to.

I'd just been staring at it all too long to notice the multiple flush
calls.  Thanks!

-Kumar




[sqlalchemy] detect if objects have been deleted from a session?

2007-09-24 Thread Kumar McMillan

Hello.
I have a naive system that links dict-like objects to mapped classes
so that rows can be saved to the db.  It doesn't know anything about
mapped class instances, so many-to-many relationships are [currently]
saved by a class mapped to the join table itself; for example, it's
the equivalent of:

client_role = Role(role_name="client")
acme = Organization(
    organization_name="Acme")
acme_is_a_client = OrganizationRoles(
    organization_id=acme.id,
    role_id=client_role.id)

The deletion process iterates from child to parent according to
foreign keys and performs:

session.delete(acme_is_a_client)
session.delete(acme)
session.delete(client_role)

However, if somewhere in the app
Organization.query().join('roles').select() was performed, then
deleting acme will fail because it would have been deleted
automatically by session.delete(acme_is_a_client).  Specifically, I
get a ConcurrentModificationError saying the deleted row count was 0.

Given the limitation of this approach and not trying to depend on the
configuration of ondelete/passive_deletes, is it possible to simply
detect whether or not acme or client_role has been deleted in the
session before deleting?  That is, something more like:

if not object_was_deleted(session, acme_is_a_client):
    session.delete(acme_is_a_client)
if not object_was_deleted(session, acme):
    # this should not be executed
    session.delete(acme)
if not object_was_deleted(session, client_role):
    # nor this
    session.delete(client_role)

The implementation below seems to detect deletions of the same object
but *not* the above scenario where the objects deleted were children:

def object_was_deleted(session, obj):
    from sqlalchemy.orm.mapper import object_mapper
    for c in [obj] + list(object_mapper(obj).cascade_iterator(
            'delete', obj)):
        if c in session.uow.deleted:
            return True
        elif not session.uow._is_valid(c):
            return True
    return False
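To make the desired semantics concrete, here is a dependency-free sketch (FakeSession, Node, and cascade_children are invented stand-ins, not SQLAlchemy API) of what the guard is supposed to do once deleting the association row has cascaded to its parent:

```python
class FakeSession:
    """Stand-in for a Session that records deletions and their cascades."""
    def __init__(self):
        self.deleted = set()

    def delete(self, obj):
        # mark obj deleted, then cascade the delete to its children
        self.deleted.add(obj)
        for child in obj.cascade_children:
            self.delete(child)


class Node:
    """Stand-in for a mapped instance; cascade_children plays the role
    of mapper.cascade_iterator('delete', obj)."""
    def __init__(self, name, cascade_children=()):
        self.name = name
        self.cascade_children = list(cascade_children)


def object_was_deleted(session, obj):
    """True if obj, or anything its delete would cascade to, is already
    marked deleted in the session."""
    for candidate in [obj] + obj.cascade_children:
        if candidate in session.deleted:
            return True
    return False


# the scenario from the post: deleting the association row cascades to acme
acme = Node("acme")
acme_is_a_client = Node("acme_is_a_client", cascade_children=[acme])

sess = FakeSession()
if not object_was_deleted(sess, acme_is_a_client):
    sess.delete(acme_is_a_client)  # cascades to acme as well

# the guard should now skip both objects
assert object_was_deleted(sess, acme)
assert object_was_deleted(sess, acme_is_a_client)
```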

Is there a way to detect deleted children?  Thanks for reading and let
me know if you need a more concrete example

-Kumar




[sqlalchemy] Re: new setuptools vs local SA copy

2007-02-09 Thread Kumar McMillan

python setup.py develop [1] doesn't install anything.  It makes a link
to a directory of source code, just like you are doing with the
pythonpath, and you only have to run it once.  Either method
accomplishes the same thing.  I just find workingenv very easy to use
and wanted to share, since there isn't really any documentation on it.

[1] 
http://peak.telecommunity.com/DevCenter/setuptools#develop-deploy-the-project-source-in-development-mode

On 2/9/07, Michael Bayer [EMAIL PROTECTED] wrote:
> I have a directory on my laptop which looks like this:
>
> ./sa026
> ./sa027
> ./sa028
> ./sa030
> ./sa031
> ...
>
> I set PYTHONPATH to ./lib.  I change into each directory and run a
> script that i'd like to compare between versions.  No setup.py develop is
> required... i don't want to install anything anywhere.  I just want to use
> *this source code right here*, even though an egg version of that software
> happens to be installed somewhere.  I don't see why that's so unreasonable.
>
> basically right now, once you install something via setup.py, you can
> never ever again use any module with that same name without going through
> setuptools hoops for it (or shove a sys.path.insert() in your program,
> or hack your python like i did).






[sqlalchemy] Re: new setuptools vs local SA copy

2007-02-08 Thread Kumar McMillan

> > put a file -pythonpath.pth into your site_packages folder:
> >
> > import os, sys; sys.__egginsert=len(os.environ.get('PYTHONPATH',
> > '').split(os.pathsep));
> duly noted for future usage _when_ i meet an egg.

workingenv, as Simon King mentioned, is the way to go.  At my company
we share a dev server which has lots of stuff compiled for us, but for
true isolated development each developer needs his own Python sandbox
to make egg links to all modules in progress.

$ sudo easy_install workingenv.py

$ python -m workingenv --site-packages ~/temp-py
$ source ~/temp-py/bin/activate

and now you can run `python setup.py develop` without sudo on any
package and it will only be available in that env, with a fallback to
site-packages for modules you haven't installed.  And of course you
can have as many of these as you want.

Some things I've run into: when setuptools gets upgraded (why a
workingenv feature isn't built into setuptools, who knows), sometimes
you need to remove the setuptools egg and rebuild your workingenv the
same way you built it:

$ python -m workingenv --site-packages ~/temp-py

this will not disturb any eggs or egg-links in your env.

The only alternative (quoted above) that I know of is to hack your Python
install (which I also did before workingenv).  I still find workingenv
easier, though not ideal.

I should also note that this was a perfect way to set up multiple
buildbot builders on our server.




[sqlalchemy] Re: Unit testing with SA?

2007-02-07 Thread Kumar McMillan

> >   db = SQLAlchemyFixture()
> >   # anytime before db.data().setup() ...
> >   db.session = my_session
>
> In my current system I have a single global session that is used for
> everything.  Is there any reason you can see that I could not just
> reuse this session in all the test cases, or should I be creating a new
> one each time?

Are you using postgres?  The only problem I foresee is if your test
does some work with table instances shared by the fixtures but doesn't
explicitly call rollback when there is an exception.  You will
probably even get a deadlock if that happens.  `ps aux | grep
postgres` will show if a fixture's delete statement is waiting on
another transaction.  I've tried to accommodate this scenario, so
let me know if you run into it again so I can add some tests for it.


> The other thing you lose with an SQL dump is that the output may not
> work across different database backends.  That is why I would really
> like the loading of the table to be routed back through SA so we can
> have some support for moving the testing data to whatever db's you end
> up needing.  (in my particular case this isn't really going to work
> because I need GIS support which is non portable, but it sounds like a
> nice capability to me)


Agreed.  I'm not entirely convinced that XML is the way to go, but
this makes sense (and I like the plugin idea).  It would be mighty
fast with lxml.etree.iterparse().  Still thinking...

> ...and points out a problem I am going to
> have using it.  In my current code I am not keeping the tables or
> mappers around.  Instead I have a database manager class that sets
> everything up and simply holds onto the session and engine that should
> be used to query the database.  I rely upon the mapped classes to keep
> track of the table and metadata references internally.
>
> So... is there any way to associate a dataset with the Class type
> that is associated with the data in the dataset?
>
> for example maybe something like:
>
> class anything_I_want(DataSet):
>     mappedType = MyDataClass
>     class click:
>         name = "click"
>
> or something else along these lines.  This seems like it would work
> well to tie the data back to the class type that is actually being
> mapped.  Then the anything_I_want class is really just a list of
> MyDataClass objects that need to be populated into the database.

Yes, I've made an attempt to support mapped classes but there are many
different ways to map classes so I might not have captured them all.
Please try:

class anything_i_want(DataSet):
    class Meta:
        storable = MyDataClass
    class click:
        name = "click"

and let me know if that doesn't work, so I can get an idea for how you
are using mapped classes.
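For illustration only — this is a sketch of the idea, not fixture's real internals — a Meta.storable hook can tie inner row classes back to a mapped class with plain introspection (all names other than the DataSet layout above are assumptions):

```python
# Minimal stand-in for a mapped class: accepts column values as kwargs.
class MyDataClass:
    def __init__(self, **kw):
        self.__dict__.update(kw)

class anything_i_want:
    class Meta:
        storable = MyDataClass
    class click:
        name = "click"

def load(dataset):
    """Instantiate dataset.Meta.storable once per inner row class."""
    storable = dataset.Meta.storable
    rows = []
    for attr in vars(dataset).values():
        # Inner classes other than Meta are treated as rows.
        if isinstance(attr, type) and attr.__name__ != "Meta":
            fields = {k: v for k, v in vars(attr).items()
                      if not k.startswith("__")}
            rows.append(storable(**fields))
    return rows

objs = load(anything_i_want)
print(objs[0].name)  # click
```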


-Kumar




[sqlalchemy] how do I set an engine to autocommit?

2007-02-01 Thread Kumar McMillan

I have a program that is entering a deadlock (psycopg) after an
exception since the test suite tries to delete stuff as part of
cleanup.  I tried wrapping everything in a transaction to be sure a
rollback is called but it *appears* that when I start using
transaction.session.bind_to.execute(stmt) directly, it is not part of
the transaction.

All I'm doing are selects anyway (this may be why the transaction is
ignored too) so I really don't need a transaction to begin with.  How
do I set an engine of a session to autocommit?

this is what it looks like (roughly)...


session = self.session_context.current
transaction = session.create_transaction()

# with the mapper (seems to be in transaction)
rs = session.query(mapped_class).select_whereclause(query_str)

# with a table object (does not seem to be in the transaction)
stmt = table.select(col==val)
engine = transaction.session.bind_to
engine.execute(stmt)

_

if I can set the engine to auto commit mode I will likely be able to
prevent the deadlock.




[sqlalchemy] Re: how do I set an engine to autocommit?

2007-02-01 Thread Kumar McMillan

> > ...but it *appears* that when I start using
> > transaction.session.bind_to.execute(stmt) directly, it is not part of
> > the transaction.
>
> uh yeah...why would it be?  You're essentially pulling out the
> original bind_to engine, which has no idea about what particular
> connection/transaction is used by the SessionTransaction.

I misunderstood that about how transactions work.  Explicitly adding a
connection makes sense (but was not intuitive).


> http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_transaction_sql

thanks for the link, I completely missed that; thought I'd looked everywhere.
_

the fact that the deadlock was solved when I added the connection to
the transaction tells me that psycopg runs in commit mode
automatically (even though dbapi 2 says it should not).  The psycopg
docs reinforce this too :
http://www.initd.org/tracker/psycopg/wiki/psycopg2_documentation#setting-transaction-isolation-levels

this is a little annoying because deadlocks are hard to track down and
there is still no way [that I can see] to tell sqlalchemy in an
agnostic way: autocommit=True for my engine.  I would think this
should be allowed, like create_engine(dsn, autocommit=True) or as a query
string arg in the dsn.  Anyway, I can live with forcing everything in
a transaction but it is definitely a workaround since all I'm doing
are selects.
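At the DBAPI level the switch does exist; for example the stdlib sqlite3 driver accepts isolation_level=None to run in autocommit, which is the behavior being asked for here (a sketch of the DBAPI mechanism, not a SQLAlchemy flag):

```python
import os
import sqlite3
import tempfile

# isolation_level=None puts a sqlite3 connection in autocommit, so
# statements take effect immediately and no transaction is left open.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (42)")  # no commit() needed

# A second connection sees the row at once, proving nothing was pending.
reader = sqlite3.connect(path)
row = reader.execute("SELECT x FROM t").fetchall()[0]
print(row[0])  # 42
```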

thanks, Kumar




[sqlalchemy] Re: how do I set an engine to autocommit?

2007-02-01 Thread Kumar McMillan

heh, I was saying it backwards [lack of sleep] but this is what I was
referring to ...

"Note that if the database supports an auto-commit feature, this must
be initially off.  An interface method may be provided to turn it back
on."

- http://www.python.org/dev/peps/pep-0249/

...yes psycopg complies, but sqlalchemy does not expose its interface
method to turn autocommit functionality back on.  Am I missing it
somewhere?  I don't see this exposed anywhere in the psycopg dialect
nor create_engine().  Deadlocks are not fun to debug.

On 2/1/07, Jonathan Ellis [EMAIL PROTECTED] wrote:

>  On 2/1/07, Kumar McMillan [EMAIL PROTECTED] wrote:
> >  the fact that the deadlock was solved when I added the connection to
> >  the transaction tells me that psycopg runs in commit mode
> >  automatically (even though dbapi 2 says it should not).  The psycopg
> >  docs reinforce this too:
> >  http://www.initd.org/tracker/psycopg/wiki/psycopg2_documentation#setting-transaction-isolation-levels
>
>  Wrong; psycopg2 does not autocommit by default.  It automatically
>  _creates new transactions_ but you still have to explicitly commit()
>  or rollback().  This is what DBAPI2 calls for (and is really the only
>  sane thing to do in non-autocommit mode if you think about it).
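The distinction — the driver implicitly *begins* transactions, but publishes nothing until commit() — can be seen with the stdlib sqlite3 driver, which follows the same DBAPI contract (sqlite3 stands in for psycopg2 here):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()

a.execute("INSERT INTO t VALUES (1)")  # implicitly opens a transaction

# A second connection cannot see the uncommitted insert...
b = sqlite3.connect(path)
before = b.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]

a.commit()  # ...until the first connection explicitly commits.
after = b.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]
print(before, after)  # 0 1
```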




[sqlalchemy] work directly with table clauses in transactions?

2007-01-29 Thread Kumar McMillan

I'm trying to work on something where it's necessary to discover table
objects and then perform inserts and deletes in a transaction.  It's
proving very cumbersome to try and locate the existing mappers for
these tables or create them automatically so I'm now trying to work
directly with table.insert(), table.delete(), etc.  Specifically, I
was getting stuck when a mapper for a table had already been used to
create an object earlier on (seems harmless to insert more objects
later, but I don't have access to the original mapper at that point).

I can't seem to figure out how to do something like
table.insert().execute() inside a transaction.  I already have code
that starts with self.session.create_transaction() and then does all
the inserts/deletes effectively with mappers but this doesn't seem to
work the same for table clauses.

Can someone point me at the right code to read or a page in the docs
for executing clauses in transactions?  I've been looking mostly at
http://www.sqlalchemy.org/docs/sqlconstruction.myt#sql_select ...  my
best guess is that I need to run table.insert().compile() somehow with
a transaction created by an engine object?
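The rule being reached for — a statement only participates in a transaction if it executes on the same connection that began it — looks like this at the DBAPI level (plain SQL on the stdlib sqlite3 driver stands in for SQLAlchemy's table.insert()/table.delete() clauses):

```python
import sqlite3

# isolation_level=None disables implicit transactions so BEGIN/COMMIT
# are fully explicit, mirroring an engine-level transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE offers (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO offers (name) VALUES (?)", ("foobar",))
conn.execute("DELETE FROM offers WHERE name = ?", ("foobar",))
conn.execute("COMMIT")  # both statements commit (or roll back) together

n = conn.execute("SELECT COUNT(*) FROM offers").fetchall()[0][0]
print(n)  # 0
```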

thanks, Kumar




[sqlalchemy] Re: DynamicMetaData + create/drop table = confusion

2007-01-12 Thread Kumar McMillan

uggh, pysqlite 2.1 fixed some other problems I was having in
transactions iirc.  I'm scared to upgrade/downgrade :(

On 1/11/07, Michael Bayer [EMAIL PROTECTED] wrote:

 the important thing is your sqlite version.  I'm on 3.2.1.  My
 pysqlite seems to be 2.0.2?  Maybe a new pysqlite bug, not sure.
 Also the sql echo shows that something is weird...it's inserting a
 row, then deleting it, using id 1, which is what the id should be.
 The rowcount should definitely be 1 and not 0.


 On Jan 11, 2007, at 8:30 PM, Kumar McMillan wrote:

 
  hi.  the reason for the flush strangeness is I grabbed these
  statements from separate areas of the app to reproduce the scenario
  (but I guess the app needs cleanup, heh).
 
   What version of pysqlite did you test with?  Mine is pysqlite 2.3.2 on
  python 2.4.3, sqlalchemy dev r 2183; maybe that's all it is (memory
  weirdness).
 
  below is my sql, output, etc, showing the same test failing for me.
 
  I see what you mean about the sqlite instances.  Actually when I run
  it w/ the 2nd call as a connection to postgres, there are no errors.
  Since that is a better representation of my real problem I'm not so
  worried about the failure anymore.  And currently I'm working around
  it all using two separate BoundMetaData which is OK to me.  But I'm
  still curious as to what could be wrong with my setup.
 
  _
 
  2007-01-11 19:16:55,779 INFO sqlalchemy.engine.base.Engine.0x..30
  PRAGMA table_info(offers)
  2007-01-11 19:16:55,779 INFO sqlalchemy.engine.base.Engine.0x..30 {}
  2007-01-11 19:16:55,781 INFO sqlalchemy.engine.base.Engine.0x..30
  CREATE TABLE offers (
  id INTEGER NOT NULL,
  name TEXT,
  PRIMARY KEY (id)
  )
 
 
  2007-01-11 19:16:55,781 INFO sqlalchemy.engine.base.Engine.0x..30 None
  2007-01-11 19:16:55,782 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,784 INFO sqlalchemy.engine.base.Engine.0x..30
  BEGIN
  2007-01-11 19:16:55,786 INFO sqlalchemy.engine.base.Engine.0x..30
  INSERT INTO offers (name) VALUES (?)
  2007-01-11 19:16:55,786 INFO sqlalchemy.engine.base.Engine.0x..30
  ['foobar']
  2007-01-11 19:16:55,787 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,790 INFO sqlalchemy.engine.base.Engine.0x..30
  SELECT offers.id AS offers_id, offers.name AS offers_name
  FROM offers ORDER BY offers.oid
  2007-01-11 19:16:55,790 INFO sqlalchemy.engine.base.Engine.0x..30 []
  2007-01-11 19:16:55,791 INFO sqlalchemy.engine.base.Engine.0x..30
  BEGIN
  2007-01-11 19:16:55,792 INFO sqlalchemy.engine.base.Engine.0x..30
  DELETE FROM offers WHERE offers.id = ?
  2007-01-11 19:16:55,793 INFO sqlalchemy.engine.base.Engine.0x..30 [1]
  2007-01-11 19:16:55,793 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,794 INFO sqlalchemy.engine.base.Engine.0x..30
  SELECT offers.id AS offers_id, offers.name AS offers_name
  FROM offers ORDER BY offers.oid
  2007-01-11 19:16:55,795 INFO sqlalchemy.engine.base.Engine.0x..30 []
  2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30
  PRAGMA table_info(offers)
  2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30 {}
  2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30
  DROP TABLE offers
  2007-01-11 19:16:55,797 INFO sqlalchemy.engine.base.Engine.0x..30 None
  2007-01-11 19:16:55,797 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,799 INFO sqlalchemy.engine.base.Engine.0x..30
  PRAGMA table_info(offers)
  2007-01-11 19:16:55,799 INFO sqlalchemy.engine.base.Engine.0x..30 {}
  2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30
  CREATE TABLE offers (
  id INTEGER NOT NULL,
  name TEXT,
  PRIMARY KEY (id)
  )
 
 
  2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30 None
  2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,802 INFO sqlalchemy.engine.base.Engine.0x..30
  BEGIN
  2007-01-11 19:16:55,803 INFO sqlalchemy.engine.base.Engine.0x..30
  INSERT INTO offers (name) VALUES (?)
  2007-01-11 19:16:55,803 INFO sqlalchemy.engine.base.Engine.0x..30
  ['foobar']
  2007-01-11 19:16:55,804 INFO sqlalchemy.engine.base.Engine.0x..30
  COMMIT
  2007-01-11 19:16:55,805 INFO sqlalchemy.engine.base.Engine.0x..30
  SELECT offers.id AS offers_id, offers.name AS offers_name
  FROM offers ORDER BY offers.oid
  2007-01-11 19:16:55,805 INFO sqlalchemy.engine.base.Engine.0x..30 []
  2007-01-11 19:16:55,806 INFO sqlalchemy.engine.base.Engine.0x..30
  BEGIN
  2007-01-11 19:16:55,807 INFO sqlalchemy.engine.base.Engine.0x..30
  DELETE FROM offers WHERE offers.id = ?
  2007-01-11 19:16:55,807 INFO sqlalchemy.engine.base.Engine.0x..30 [1]
  2007-01-11 19:16:55,808 INFO sqlalchemy.engine.base.Engine.0x..30
  ROLLBACK
  Traceback (most recent call last):
  File "test_sa_concurrent.py", line 56, in ?
    db_roundtrip('sqlite:///:memory:')
  File "test_sa_concurrent.py", line 42

[sqlalchemy] Re: DynamicMetaData + create/drop table = confusion

2007-01-12 Thread Kumar McMillan

oh, nice.  upgrading sqlite to 3.3.7 and rebuilding pysqlite2 fixed it
-- sorry for the noise.

On 1/12/07, Kumar McMillan [EMAIL PROTECTED] wrote:
 uggh, pysqlite 2.1 fixed some other problems I was having in
 transactions iirc.  I'm scared to upgrade/downgrade :(

 On 1/11/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
  the important thing is your sqlite version.  I'm on 3.2.1.  My
  pysqlite seems to be 2.0.2?  Maybe a new pysqlite bug, not sure.
  Also the sql echo shows that something is weird...it's inserting a
  row, then deleting it, using id 1, which is what the id should be.
  The rowcount should definitely be 1 and not 0.
 
 
  On Jan 11, 2007, at 8:30 PM, Kumar McMillan wrote:
 
  
   hi.  the reason for the flush strangeness is I grabbed these
   statements from separate areas of the app to reproduce the scenario
   (but I guess the app needs cleanup, heh).
  
    What version of pysqlite did you test with?  Mine is pysqlite 2.3.2 on
   python 2.4.3, sqlalchemy dev r 2183; maybe that's all it is (memory
   weirdness).
  
   below is my sql, output, etc, showing the same test failing for me.
  
   I see what you mean about the sqlite instances.  Actually when I run
   it w/ the 2nd call as a connection to postgres, there are no errors.
   Since that is a better representation of my real problem I'm not so
   worried about the failure anymore.  And currently I'm working around
   it all using two separate BoundMetaData which is OK to me.  But I'm
   still curious as to what could be wrong with my setup.
  

[sqlalchemy] DynamicMetaData + create/drop table = confusion

2007-01-06 Thread Kumar McMillan

hello.

I'm trying to use DynamicMetaData so that two separate tests can each
connect to a unique db, create some tables, insert some data, delete
that data, then drop the tables.

This seems to yield ...

sqlalchemy.exceptions.ConcurrentModificationError: Updated rowcount 0
does not match number of objects updated 1

... the second time around, when trying to delete the data.  I checked
the list and FAQ and note that I am *not* trying to modify a primary
key.  This seems related to the use of DynamicMetaData and the fact
that I am creating the table then dropping it.

I was able to reproduce this scenario in a single test (below and
attached) so maybe you can see something simple I'm doing wrong?
Experimenting, I noticed that if I use two separate BoundMetaData
instances *or* switch to checkfirst=True and not drop the table then
the test works fine.  However, I don't see why this shouldn't be
possible with DynamicMetaData.  Thanks in advance.

PS. this was in sqlalchemy trunk r 2183

_

import sqlalchemy
from sqlalchemy import *
from sqlalchemy.ext.assignmapper import assign_mapper
from sqlalchemy.ext.sessioncontext import SessionContext

def eq_(a, b):
    assert a == b, "%s != %s" % (a, b)

meta = DynamicMetaData()

offers = Table("offers", meta,
    Column("id", INT, primary_key=True),
    Column("name", String),
)

class Offer(object):
    pass

def db_roundtrip(dsn):
    conn = meta.connect(dsn)
    meta.engine.echo = 1

    context = SessionContext(
        lambda: sqlalchemy.create_session(bind_to=meta.engine))
    assign_mapper(context, Offer, offers)
    session = context.current

    meta.create_all()
    session.flush()

    offer = Offer()
    offer.name = 'foobar'
    session.save(offer)
    session.flush()

    rows = Offer.select()
    eq_(len(rows), 1)
    eq_(rows[0].id, 1)
    eq_(rows[0].name, 'foobar')

    session.delete(offer)
    session.flush()

    rows = Offer.select()
    eq_(len(rows), 0)

    meta.drop_all()
    session.flush()

    sqlalchemy.orm.clear_mappers()

if __name__ == '__main__':

    db_roundtrip('sqlite:///:memory:')
    # pretend this is another connection:
    db_roundtrip('sqlite:///:memory:')

    print 'OK'
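One detail behind the "pretend this is another connection" comment: every sqlite ":memory:" connection really is an independent, private database, so the two round-trips exercise two fresh schemas. A stdlib-only sketch:

```python
import sqlite3

# Each ":memory:" connection gets its own private database.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE offers (id INTEGER PRIMARY KEY, name TEXT)")

# A second ":memory:" connection sees none of a's tables.
b = sqlite3.connect(":memory:")
tables = b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # [] -- no 'offers' table here
```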




test_sa_concurrent.py