[sqlalchemy] Re: Create tables within a transaction

2009-02-06 Thread Chris Miles

Manual testing with sqlite appears to show that CREATE TABLE is
transactional and can be rolled back.  Consider:

$ sqlite3 test2.db
Loading resources from /Users/chris/.sqliterc
SQLite version 3.4.0
Enter ".help" for instructions
sqlite> BEGIN;
sqlite> CREATE TABLE test1 (foo INTEGER);
sqlite> CREATE TABLE test2 (foo INTEGER);
sqlite> COMMIT;
sqlite> .tables
test1  test2
sqlite> ^D
$ rm test2.db
$ sqlite3 test2.db
Loading resources from /Users/chris/.sqliterc
SQLite version 3.4.0
Enter ".help" for instructions
sqlite> BEGIN;
sqlite> CREATE TABLE test1 (foo INTEGER);
sqlite> CREATE TABLE test2 (foo INTEGER);
sqlite> ROLLBACK;
sqlite> .tables
sqlite>


Perhaps the behaviour I see through SA is a side effect of pysqlite?

Cheers,
Chris Miles

On Feb 6, 2:36 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> sqlite doesn't include CREATE TABLE statements within the scope of a
> transaction.  I think that's a relatively rare behavior only seen in
> Postgres, in fact - I don't think Oracle or MySQL have that behavior,
> for example.
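Whether the implicit COMMIT comes from pysqlite can be checked by taking its transaction management out of the loop: with `isolation_level=None` the sqlite3 (pysqlite) module runs in autocommit mode, so explicit BEGIN/ROLLBACK statements go straight to SQLite. A small sketch (the default isolation-level behaviour has varied across pysqlite versions, which is exactly the variable being isolated here):

```python
import sqlite3

# With isolation_level=None the sqlite3 (pysqlite) module does no
# transaction management of its own, so the explicit BEGIN/ROLLBACK
# below exercises SQLite's native transactional DDL directly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("BEGIN")
conn.execute("CREATE TABLE test1 (foo INTEGER)")
conn.execute("CREATE TABLE test2 (foo INTEGER)")
conn.execute("ROLLBACK")

# Both CREATE TABLE statements were rolled back by SQLite itself.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)
conn.close()
```

If the tables survive only when the module manages the transaction, the implicit COMMIT is pysqlite's, not SQLite's.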

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] association_proxy question

2009-02-06 Thread GHZ

I am trying to use the association_proxy for attributes that link to
tables containing mostly static data

e.g. my static data is:
COUNTRY
COUNTRY.CODE
COUNTRY.NAME

and the data I am changing is:
USER
USER.COUNTRY_CODE

I use the association_proxy as I want to be able to say:

user = User()
user.country_name = 'DENMARK'
# rather than user.country_code = 'DK'

In class User() (declarative) I have:

country_code = Column(String, ForeignKey('COUNTRY.CODE'))
country = relation('Country', uselist=False)
country_name = association_proxy('country', 'name',
    creator=my_creator)  # proxy to COUNTRY.NAME

where:

def my_creator(country_name):
    return session.query(Country).filter_by(name=country_name).one()


i.e. I will only link to existing countries, and not add new ones.

Is there a better way to do this?

In this instance I can create a new session in my_creator, but I would
really rather use the same session, in case I have added or deleted
countries inside the same transaction.  Making the session global
doesn't seem right.
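One common pattern is to make the session a scoped_session, so the creator can reach the same thread-local session without a global. A sketch, not from the thread: the table layout and the `lookup_country` helper are illustrative, and it uses the modern declarative/relationship spelling rather than 0.5's `relation()`:

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import (declarative_base, relationship,
                            scoped_session, sessionmaker)
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

# A scoped (thread-local) registry: calling Session() anywhere returns
# the same current session, so the creator shares it with the objects.
Session = scoped_session(sessionmaker())

class Country(Base):
    __tablename__ = "country"
    code = Column(String, primary_key=True)
    name = Column(String)

def lookup_country(country_name):
    # Only link to an existing Country row; never create a new one.
    return Session().query(Country).filter_by(name=country_name).one()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    country_code = Column(String, ForeignKey("country.code"))
    country = relationship("Country")
    country_name = association_proxy("country", "name",
                                     creator=lookup_country)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session.configure(bind=engine)

session = Session()
session.add(Country(code="DK", name="DENMARK"))
session.flush()

user = User()
user.country_name = "DENMARK"   # resolved via lookup_country
session.add(user)
session.flush()
```

Because `lookup_country` calls the scoped registry rather than a module-level session object, it sees any countries added or deleted in the current transaction.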

thanks





[sqlalchemy] Re: Create tables within a transaction

2009-02-06 Thread Michael Bayer

maybe pysqlite - try it with pysqlite.  SQLA has no specifics to PG
or SQLite that change what it's doing.


On Feb 6, 2009, at 5:02 AM, Chris Miles wrote:


> Manual testing with sqlite appears to show that CREATE TABLE is
> transactional and can be rolled back.  Consider:
>
> $ sqlite3 test2.db
> Loading resources from /Users/chris/.sqliterc
> SQLite version 3.4.0
> Enter ".help" for instructions
> sqlite> BEGIN;
> sqlite> CREATE TABLE test1 (foo INTEGER);
> sqlite> CREATE TABLE test2 (foo INTEGER);
> sqlite> COMMIT;
> sqlite> .tables
> test1  test2
> sqlite> ^D
> $ rm test2.db
> $ sqlite3 test2.db
> Loading resources from /Users/chris/.sqliterc
> SQLite version 3.4.0
> Enter ".help" for instructions
> sqlite> BEGIN;
> sqlite> CREATE TABLE test1 (foo INTEGER);
> sqlite> CREATE TABLE test2 (foo INTEGER);
> sqlite> ROLLBACK;
> sqlite> .tables
> sqlite>
>
> Perhaps the behaviour I see through SA is a side effect of pysqlite?
>
> Cheers,
> Chris Miles
>
> On Feb 6, 2:36 pm, Michael Bayer mike...@zzzcomputing.com wrote:
>> sqlite doesn't include CREATE TABLE statements within the scope of a
>> transaction.  I think that's a relatively rare behavior only seen in
>> Postgres, in fact - I don't think Oracle or MySQL have that behavior,
>> for example.

 





[sqlalchemy] Re: Error: attribute refresh operation cannot proceed

2009-02-06 Thread pwaern

Thanks - I'll see how far the expire_on_commit=False takes me.

On 5 Feb., 16:11, Michael Bayer mike...@zzzcomputing.com wrote:
> On Feb 5, 2009, at 9:04 AM, pwaern wrote:
>
>> What I would like to be able to do is to keep on working with detached
>> objects, like the user object in the code above, in a manner where the
>> object's attribute values are the same as they were when the session
>> was closed, i.e. without further database queries (and certainly
>> without exceptions!). If there was some way to ensure that all
>> database attributes on the object were loaded but not modifiable, it
>> could solve the problem. In my current application I use the
>> get_session extension to return the ScopedSession instance in order to
>> ensure that I can access attributes, but I would prefer that the
>> objects were properly detached but still functional.
>
> you'd have to call refresh() or otherwise hit an attribute on every
> object before it's detached from a session.  Alternatively, you could
> set expire_on_commit=False on your sessionmaker() so that the commit()
> operation leaves the current in-memory state in place.  The latter
> option would be why your test program works in 0.4: there wasn't any
> expire_on_commit behavior.
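The expire_on_commit=False suggestion looks like this in practice. A minimal sketch with an illustrative model, using the modern declarative/sessionmaker spelling rather than the 0.5 one:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# expire_on_commit=False: commit() leaves the in-memory attribute state
# in place instead of expiring it, so detached instances stay readable.
Session = sessionmaker(bind=engine, expire_on_commit=False)

session = Session()
user = User(name="alice")
session.add(user)
session.commit()
session.close()           # user is now detached

name = user.name          # no refresh attempted, no DetachedInstanceError
print(name)
```

With the default (expire_on_commit=True), the attribute access after close() would instead raise, since the expired state cannot be refreshed without a session.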



[sqlalchemy] Problem sending imap iterator to Session.execute

2009-02-06 Thread Kevin Martin

I am migrating from SA 0.4 to SA 0.5.2 and I am having some problems.
I am using the orm session object and issuing raw sql commands in some
cases for performance reasons.  Take the following example.  I have a
few hundred thousand tuples that will become new rows in 'mytable'.  I
want to issue a single fast sql statement to insert them.  In 0.4 I
had the following:

a_session.execute("insert into mytable values (%s, %s)",
    list(list_of_2_ary_tuples))

I believe that this was being passed down to the postgres driver
directly, but this does not appear to work in 0.5.  It results in:

...(my code)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py",
line 755, in execute
    clause, params or {})
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
line 824, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
line 866, in _execute_clauseelement
    keys = params[0].keys()

That is fine.  That was a bit of a hack anyway.  So I started looking
for a new approach.  It looks as though 0.5 expects all bind params to
be dictionaries.  I thought I would map my list of tuples to a list of
dictionaries and pass that along.

So, I went with the following.

new_values = [('val1', 'val2'), ('val3', 'val4')]
a_session.execute(MyTableOrmObject.table.insert(),
    map(lambda val: {'col0': val[0], 'col1': val[1]}, new_values))

This works great, but remember, this is hundreds of thousands of items
in my actual use, so I didn't really want to duplicate my data
structure in memory.  I thought I would switch out my map for an imap,
i.e.:

new_values = [('val1', 'val2'), ('val3', 'val4')]
a_session.execute(MyTableOrmObject.table.insert(),
    imap(lambda val: {'col0': val[0], 'col1': val[1]}, new_values))

Unfortunately, the __distill_params method in engine base switches on
list and tuple types and gets this wrong.  There is some work to find
out if the object implements an iterator interface, but I'm not really
sure when that would be used.  Here is the stacktrace:

...(my code)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py",
line 755, in execute
    clause, params or {})
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
line 824, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
line 866, in _execute_clauseelement
    keys = params[0].keys()
AttributeError: 'tuple' object has no attribute 'keys'

Is this a bug?  Is there a way I can use the imap interface?  The
thing that complicates this is that session.execute has a different
interface than the connection's or engine's execute method, so I can't
really control how things get passed from session to connection, and
it's asymmetric.

Any ideas?




[sqlalchemy] Re: Problem sending imap iterator to Session.execute

2009-02-06 Thread Michael Bayer


On Feb 6, 2009, at 12:49 PM, Kevin Martin wrote:


> I am migrating from SA 0.4 to SA 0.5.2 and I am having some problems.
> I am using the orm session object and issuing raw sql commands in some
> cases for performance reasons.  Take the following example.  I have a
> few hundred thousand tuples that will become new rows in 'mytable'.  I
> want to issue a single fast sql statement to insert them.  In 0.4 I
> had the following:
>
> a_session.execute("insert into mytable values (%s, %s)",
>     list(list_of_2_ary_tuples))
>
> I believe that this was being passed down to the postgres driver
> directly, but this does not appear to work in 0.5.  It results in:

the change here is that session.execute() now generates a text()
construct by default, to provide for the more common use case of
platform-agnostic SQL strings.  The old pattern will continue to work
if you execute raw from the connection, which is like execute() local
to the current transaction:

session.connection().execute(statement, raw_params_sent_to_driver)
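In current SQLAlchemy the same driver-level pass-through is spelled `Connection.exec_driver_sql()`; a sketch against SQLite (the table name and data are illustrative, and the placeholder style must match the DBAPI - qmark for sqlite3, %s for psycopg2):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
session = sessionmaker(bind=engine)()

# Connection local to the session's current transaction.
conn = session.connection()
conn.exec_driver_sql("CREATE TABLE mytable (a INTEGER, b INTEGER)")

# executemany at the driver level: a raw SQL string plus a list of
# tuples, passed straight through to the DBAPI cursor.
rows = [(1, 2), (3, 4), (5, 6)]
conn.exec_driver_sql("INSERT INTO mytable VALUES (?, ?)", rows)

count = conn.exec_driver_sql("SELECT count(*) FROM mytable").scalar()
print(count)
```

Because the statement bypasses SQLAlchemy's compilation step entirely, none of the text()/bind-parameter distillation discussed above applies to it.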


> structure in memory.  I thought I would switch out my map for an imap,
> i.e.:
>
> new_values = [('val1', 'val2'), ('val3', 'val4')]
> a_session.execute(MyTableOrmObject.table.insert(),
>     imap(lambda val: {'col0': val[0], 'col1': val[1]}, new_values))
>
> Unfortunately, the __distill_params method in engine base switches on
> list and tuple types and gets this wrong.  There is some work to find
> out if the object implements an iterator interface, but I'm not really
> sure when that would be used.  Here is the stacktrace:
>
> ...(my code)
>   File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py",
> line 755, in execute
>     clause, params or {})
>   File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
> line 824, in execute
>     return Connection.executors[c](self, object, multiparams, params)
>   File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py",
> line 866, in _execute_clauseelement
>     keys = params[0].keys()
> AttributeError: 'tuple' object has no attribute 'keys'
>
> Is this a bug?  Is there a way I can use the imap interface?  The
> thing that complicates this is that session.execute has a different
> interface than the connection's or engine's execute method, so I can't
> really control how things get passed from session to connection, and
> it's asymmetric.

There was a ticket where we were working on iterators being fully
accepted by that method, for the purpose of ResultProxy and RowProxy
objects being passed back in.  However, it ran into a snag: there was
no way to detect the type of the first object within the iterator
without actually consuming its first result (i.e. a Python iterator
has no pushback), so that's why you see only partial iterator support
there.

In reality, the underlying DBAPI may very well store the full list of
results in memory all at once in any case.  Have you confirmed that
this is not the case for psycopg2?
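Until iterators are fully supported there, a middle ground is to batch the imap output and call execute() once per batch, so only a bounded number of parameter dicts exist at a time. A sketch: the helper name, the batch size, and the commented-out execute() call are all illustrative:

```python
from itertools import islice

def chunked(iterable, size):
    # Yield successive lists of at most `size` items from `iterable`,
    # so only one batch of parameter dicts is ever in memory.
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage: one executemany-style call per batch.
params = ({"col0": a, "col1": b} for a, b in [(1, 2), (3, 4), (5, 6)])
sizes = []
for batch in chunked(params, 2):
    # a_session.execute(MyTableOrmObject.table.insert(), batch)
    sizes.append(len(batch))
print(sizes)  # -> [2, 1]
```

Since each batch is a plain list of dicts, __distill_params handles it the same way as the original map() version, without the full materialisation.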







[sqlalchemy] Allowing orphaned children

2009-02-06 Thread James

Hi, I'm trying to set up a model where child objects are allowed to
not have parents.  At present, I can't get SA to leave the children
intact, despite having ondelete='SET NULL' and no delete-orphan cascade.

This is with SA 0.4.3.

To demonstrate my confusion, can someone explain why this code deletes
all my hats:

import sys, time
from datetime import datetime
from sqlalchemy import Table, Column, ForeignKey, MetaData, create_engine
from sqlalchemy.orm import relation, sessionmaker, mapper, backref
from sqlalchemy import String, Unicode, Integer, DateTime

metadata = MetaData()
engine = create_engine("sqlite:///:memory:")

users_table = Table('tg_user', metadata,
    Column('user_id', Integer, primary_key=True),
)

hat_table = Table('hat', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('tg_user.user_id',
        ondelete='SET NULL')),
)

metadata.create_all(engine)

class User(object):
    pass

class Hat(object):
    pass

mapper(User, users_table)

mapper(Hat, hat_table,
    properties = {
        'user': relation(User, backref=backref("hats", cascade="all")),
    }
)

Session = sessionmaker(bind=engine, autoflush=False,
    transactional=True)
session = Session()

me = User()
me.hats.extend([Hat(), Hat(), Hat()])
session.save(me)
session.flush()

print session.query(Hat).count(), "hats"
session.delete(me)
session.flush()
print session.query(Hat).count(), "hats"

Thank you!
James