[sqlalchemy] default_metadata and threadlocal

2007-02-11 Thread Manlio Perillo

Hi.

Why does default_metadata have a threadlocal context?



Thanks and regards  Manlio Perillo


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)

2007-02-11 Thread Nebur

The example below raises an AssertionError:
sqlalchemy.exceptions.AssertionError: Dependency rule tried to blank-
out primary key column 'userdata.user_id' on instance
'[EMAIL PROTECTED]'

The code creates two objects having a 1:1 relation with cascade-delete.
The ForeignKey is declared as a primary key, which seems to cause the
error.
Versions: Python 2.4, SA 0.3.1, SA 0.3.4


from sqlalchemy import *

class User(object):
    pass

class Userdata(object):
    def __init__(self, user):
        self.user_id = user.id

if __name__ == '__main__':
    db = create_engine('mysql://[EMAIL PROTECTED]/test_cascade')
    session = create_session()
    metadata = BoundMetaData(db)

    t_user = Table('user', metadata,
        Column('id', Integer, primary_key=True),
    )
    t_userdata = Table('userdata', metadata,
        Column('user_id', Integer, ForeignKey('user.id'), primary_key=True),
    )
    metadata.create_all()
    mapper(User, t_user)
    mapper(Userdata, t_userdata, properties={
        'myuser': relation(User, backref=backref('meta', cascade='delete'))
    })

    # create 1 instance of each object:
    user1 = User()
    session.save(user1)
    session.flush()
    data1 = Userdata(user1)
    session.save(data1)
    session.flush()

    # now delete the user,
    # expecting the cascade to delete userdata, too:
    session.delete(user1)
    session.flush()  # AssertionError: Dependency rule tried to blank-out ...


I can work around this error by using a separate primary key in table
userdata:

t_userdata = Table('userdata', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('user.id')),
)

and everything works fine.
I'm wondering whether this is an SA bug or bad table design?
Thanks and regards,
 Ruben





[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)

2007-02-11 Thread Luke Stebbing

See http://www.sqlalchemy.org/docs/datamapping.myt#datamapping_relations_lifecycle

I think you want a parent-child relationship between User and
Userdata. Here's how I would change it (disclaimer: I'm new to SA
myself):

class User(object):
    pass

class Userdata(object):
    # we'll let SQLAlchemy update the relationship
    # automatically instead of setting it manually
    pass

if __name__ == '__main__':
    db = create_engine('mysql://[EMAIL PROTECTED]/test_cascade')
    session = create_session()
    metadata = BoundMetaData(db)
    t_user = Table('user', metadata,
        Column('id', Integer, primary_key=True),
    )
    t_userdata = Table('userdata', metadata,
        Column('user_id', Integer, ForeignKey('user.id'), primary_key=True),
    )
    metadata.create_all()

    # We create the relationship here instead, for clarity.
    # uselist=False on the relation and the backref makes it one-to-one.
    # Compare to the example in the link I gave.
    mapper(User, t_user, properties={
        'userdata': relation(Userdata, uselist=False,
                             cascade='all, delete-orphan',
                             backref=backref('myuser', uselist=False))
    })
    mapper(Userdata, t_userdata)

    # create 1 instance of each object:
    user1 = User()
    session.save(user1)
    session.flush()
    user1.userdata = Userdata()  # add userdata to user; the relationship
                                 # is created automatically
    session.flush()

    # now delete the user
    session.delete(user1)
    session.flush()


Cheers,

Luke

On Feb 11, 9:44 am, Nebur [EMAIL PROTECTED] wrote:
 The example below raises an:
 sqlalchemy.exceptions.AssertionError: Dependency rule tried to blank-
 out primary key column 'userdata.user_id' on instance
 '[EMAIL PROTECTED]'

 The code creates 2 objects having a 1:1 relation with cascade-delete.
 The ForeignKey is declared as a primary key. This seems to cause the
 Error.
 Versions: Python 2.4, SA 0.3.1, SA 0.3.4






[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)

2007-02-11 Thread Nebur

Luke, your example is a great speed-up for my learning of SA! And my
code is working fine now.

I'm still trying to pin down what exactly makes the AssertionError
disappear. (Your other changes (uselist=False etc.) are clearly an
improvement; I still want to find the cause of the error.)
The minimum correction of my above code seems to be 2 lines:

1. The cascade rule changed from 'delete' to 'all, delete-orphan'.
2. This in turn requires maintaining the relation on one of the
objects: after session.save(data1), set data1.myuser = user1
(alternatively, user1.meta.append(data1) will work as well).

I still don't understand why the ORM strictly requires 'all, delete-
orphan' here, and why 'delete' alone ends up in the AssertionError. I
think 'all' stands for 'delete, save-update, refresh-expire, merge,
expunge', but none of these rules seems to play a role in this
example. (I _must_ be wrong here...)
Any hint is really appreciated.
 Ruben









[sqlalchemy] Some open tickets

2007-02-11 Thread Paul Johnston

Hi,

Can someone with committer status take a look at the following when 
possible:

#462 - Please close the ticket; I reopened it for a reason that I now 
realise is duff (info on ticket).
#422 - Gets more unit tests to pass for MSSQL. The patch does change the 
behaviour of _ClauseList.compare so the comparison is order-insensitive. I 
think that is a good change semantically, but if you disagree I can find a 
different fix.
#473 - MSSQL bug fix.

Many thanks!

Paul




[sqlalchemy] psycopg2 default transaction inefficiency

2007-02-11 Thread JP

I noticed this thread on django-devs:

http://groups.google.com/group/django-developers/browse_frm/thread/521a03a726d526e1/b1bacc5628341129?lnk=gst&q=psycopg2&rnum=1#b1bacc5628341129

It notes that psycopg2 by default starts transactions behind the
scenes, even for SELECT statements. If you happen to be running a web
app where each hit starts a new cursor, and you only do a few selects,
this adds a *lot* of overhead relative to autocommit.

I wondered if sqlalchemy suffered from the same problem with psycopg2,
and it does. Compare what sqlalchemy thinks it is doing for a short
example with what the postgres query log contains.

The code:

import sqlalchemy as sa

meta = sa.DynamicMetaData()
users = sa.Table('users', meta,
    sa.Column('id', sa.Integer, primary_key=True, autoincrement=True),
    sa.Column('name', sa.String(50)))
meta.connect('postgres://snip', debug=True)
meta.create_all()
users.select().execute()

sqlalchemy query log:
select relname from pg_class where lower(relname) = %(name)s
CREATE TABLE users (
id SERIAL NOT NULL,
name VARCHAR(50),
PRIMARY KEY (id)
)
COMMIT
SELECT users.id, users.name FROM users

postgres query log:
LOG:  statement: SET DATESTYLE TO 'ISO'
LOG:  statement: SHOW client_encoding
LOG:  statement: SHOW default_transaction_isolation
LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: select relname from pg_class where lower(relname) =
'users'
LOG:  statement:
CREATE TABLE users (
id SERIAL NOT NULL,
name VARCHAR(50),
PRIMARY KEY (id)
)
LOG:  statement: END
LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT users.id, users.name FROM users

I think it would be better for sqlalchemy to set psycopg2 to
autocommit by default, and implement do_begin in the dialect so that
transactions are only started when desired, rather than implicitly on
the first statement seen by a cursor when there's no current
transaction, as seems to be the case now.

Mike, would you be interested in a patch that implemented that
behavior? Does anyone disagree that that would be better?

JP





[sqlalchemy] Re: default_metadata and threadlocal

2007-02-11 Thread Michael Bayer

It's threadlocal because it's imitating a global context people are
accustomed to from SQLObject.

On Feb 11, 6:13 am, Manlio Perillo [EMAIL PROTECTED] wrote:
 Hi.

 Why the default_metadata has a threadlocal context?

 Thanks and regards  Manlio Perillo





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-11 Thread Michael Bayer

I'm confused. The django thread seems to be saying that it is setting
psycopg2 *into* autocommit mode, which causes the problem; the default
setting of autocommit for DBAPI (and maintained in SA) is False. When
I wrote frameworks years ago I always thought autocommit mode was
the way to go, but I've since learned that issues like this arise, so SA
*never* uses autocommit mode on the connection; instead it implements
its own autocommitting behavior in a manner similar to Hibernate:
it looks at the statement being executed, determines if it's a CRUD/
CREATE/DROP expression, and then explicitly calls COMMIT if no SA
transaction is already in progress.

The log you have above doesn't prove that anything unusual is going on,
since you are illustrating a transactional operation, then a SELECT,
then another transactional operation, then another SELECT. SA does an
explicit COMMIT for the CREATE statements since that is required to
complete the table creation operation.
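The statement-inspecting decision described above can be sketched in plain Python. This is a hypothetical illustration of the idea, not SQLAlchemy's actual implementation; the regex and the function name are invented:

```python
import re

# Hypothetical sketch: issue an explicit COMMIT only after statements
# that change data or schema, and only when no explicit transaction
# is already in progress.
AUTOCOMMIT_RE = re.compile(
    r"\s*(INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)\b", re.IGNORECASE)

def should_autocommit(statement, in_transaction=False):
    """Return True if an explicit COMMIT should follow this statement."""
    if in_transaction:
        return False  # the enclosing transaction will commit later
    return AUTOCOMMIT_RE.match(statement) is not None
```

Under this scheme a bare SELECT never triggers a COMMIT, which matches the log above: only the CREATE TABLE is followed by an explicit END/COMMIT.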

This test confirms that psycopg2 defaults to autocommit=False and
doesn't do anything with transaction modes after the connection is
opened:

import psycopg2 as psycopg

conn = psycopg.connect(user='scott', password='tiger',
                       host='127.0.0.1', database='test')

for x in range(0, 5):
    curs = conn.cursor()
    curs.execute("SELECT 1")
    curs.close()


log output:

LOG:  statement: SET DATESTYLE TO 'ISO'
LOG:  statement: SHOW client_encoding
LOG:  statement: SHOW default_transaction_isolation
LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT 1
LOG:  statement: SELECT 1
LOG:  statement: SELECT 1
LOG:  statement: SELECT 1
LOG:  statement: SELECT 1
LOG:  statement: ABORT

One connection, five new cursors, and only one set of setup statements,
corresponding to the connection.


On Feb 9, 3:52 pm, JP [EMAIL PROTECTED] wrote:
 I noticed this thread on django-devs:

 http://groups.google.com/group/django-developers/browse_frm/thread/52...

 Which notes that psycopg2 by default starts transactions behind the
 scenes, even for select statements. If you happen to be running a web
 app where each hit starts a new cursor, and you only do a few selects,
 this adds a *lot* of overhead relative to autocommit.






[sqlalchemy] Re: bindparams don't work with in_?

2007-02-11 Thread Michael Bayer

I've been alerted that the bindparam() function doesn't work inside of
in_(), and there's a new ticket that addresses this issue, but I haven't
had time to confirm.

However, even if this is fixed, your usage would still be incorrect
(except perhaps with postgres, which supports an ARRAY type
implicitly); normally you can't pass an array to a bind parameter
value. You would have to have a bindparam() instance for each scalar
value you'd like to pass, i.e.:

s = table.select(table.c.somecol.in_(
    bindparam('param1'), bindparam('param2'), bindparam('param3')))
s.execute(param1=2, param2=3, param3=4)
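The same principle holds for DBAPI drivers generally: a list cannot be bound as a single parameter, so an IN clause needs one placeholder per value. A self-contained illustration using the stdlib sqlite3 module (not SQLAlchemy; the table and values are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testings (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO testings (id) VALUES (?)",
                 [(i,) for i in range(1, 6)])

ids = [2, 3, 4]
# One placeholder per scalar value; binding the list itself would fail
# with the same kind of "arguments not converted" error.
placeholders = ", ".join("?" for _ in ids)
rows = conn.execute(
    "SELECT id FROM testings WHERE id IN (%s)" % placeholders, ids
).fetchall()
# rows is [(2,), (3,), (4,)]
```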


On Feb 10, 4:46 pm, dykang [EMAIL PROTECTED] wrote:
 I was having some trouble understanding how to use the bindparams, and
 I haven't been able to get them to work with the in_ operator. The
 following code is a simple demonstration of what I'm trying (with
 mysql). It connects to a db, creates a small table and then tries to
 compile and execute a query that uses in_. When I try to execute the
 code, I get the following exception:
 sqlalchemy.exceptions.SQLError: (TypeError) not all arguments
 converted during string formatting 'SELECT testings.id,
 testings.some_data, testings.some_int \nFROM testings \nWHERE
 testings.id = %s' [[2, 3, 4]]

 I'm not really clear on how to use the bindparams properly, but this
 seems to be incorrect; any help would be appreciated,

 David
 #==begin source below

 from sqlalchemy import *

 meta = DynamicMetaData()
 meta.connect('some uri')
 meta.engine.echo = True
 dbsession = create_session(bind_to=meta.engine)

 TestTbl = Table('testings', meta,
     Column('id', Integer, Sequence('testings_id_seq'), primary_key=True),
     Column('some_data', Unicode(40)),
     Column('some_int', Integer),
     mysql_engine='InnoDB')

 class Testing(object):
     pass

 TestTbl.create()

 s = select([TestTbl], TestTbl.c.id.in_(bindparam('my_id'))).compile()
 some_mapper = mapper(Testing, TestTbl)
 results = dbsession.query(Testing).instances(s.execute(my_id=[2, 3, 4]))





[sqlalchemy] Re: Updating an object by passing a dictionary?

2007-02-11 Thread Michael Bayer

Implement a set(**kw) method on your class which does what you want.

I'm guessing it would look like this:

def set(self, **kw):
    for k in kw:
        setattr(self, k, kw[k])
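In plain Python (independent of SQLAlchemy), the pattern works like this; the Person class and attribute names here are invented for illustration:

```python
class Person(object):
    def set(self, **kw):
        # Copy every keyword argument onto the instance as an attribute;
        # with a mapped class, setting the attribute marks it dirty so the
        # next flush writes the change.
        for k in kw:
            setattr(self, k, kw[k])

p = Person()
p.set(name="Steve", age=42)
# p.name is now "Steve" and p.age is 42
```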



On Feb 10, 8:50 pm, Steve Bergman [EMAIL PROTECTED] wrote:
 I'm getting started with SQLAlchemy and have run into a small issue.

 In SQLObject, when I want to update a record in the database with the
 values contained in a dictionary, I do something like this:

 p = Person.get(id)
 p.set(**kw)

 What is the equivalent in SQLAlchemy?  I prefer not to have to set
 each attribute separately.

 Thanks,
 Steve Bergman





[sqlalchemy] Re: work directly with table clauses in transactions?

2007-02-11 Thread Michael Bayer



On Jan 30, 1:07 am, Kumar McMillan [EMAIL PROTECTED] wrote:

 I can't seem to figure out how to do something like
 table.insert().execute() inside a transaction.

with straight engine transactions:

http://www.sqlalchemy.org/docs/dbengine.myt#dbengine_transactions

 I already have code
 that starts with self.session.create_transaction() and then does all
 the inserts/deletes effectively with mappers but this doesn't seem to
 work the same for table clauses.

with ORM transactions:

http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_transaction_sql






[sqlalchemy] Re: Some open tickets

2007-02-11 Thread Michael Bayer

Yeah, I'm not cool with changing the compare method on ClauseList; that's
a very deep place to change something which should not matter for any
particular database dialect, and you'd have to show me what that's
accomplishing. The test is more strict as an ordered test, and
there's been no issue with that as of yet.

Also, I don't like the autoseq table being added to the tables.py
module (nothing should be added there which isn't globally used by
many tests/all databases), and also the autoseq test in the
session.py module, since it's very MSSQL-specific; the
test_fetchid functionality seems better suited for the test/sql/
defaults.py test, which tests things like autoincrement columns and
such (without getting the ORM involved).

The change from opts to db_opts in testbase.py should be done in a
style more consistent with what was already there... I don't see why
the name has to change. DBTYPE, if not specified, should be pulled
from a sqlalchemy.engine.url.URL object and not parsed... and I also
don't understand why the auto_identity_insert flag, if it's so
critical to maintain compatibility with SA's normal behavior, defaults
to False. At the very least have it parsed as a URL query-string
argument (see mysql.py for examples) so it can be part of the URL,
i.e. mssql://someurl?auto_identity_insert=True, so that the global
**db_opts thing doesn't have to happen in the test suite.

On Feb 11, 4:48 pm, Paul Johnston [EMAIL PROTECTED] wrote:
 Hi,

 Can someone with committer status take a look at the following when
 possible:

 #462 - Please close the ticket; I reopened for a reason that I now
 realise is duff (info on ticket).
 #422 - Gets more unit tests to pass for MSSQL. The patch does change the
 behaviour of _ClauseList.compare so the compare is order-insensitive. I
 think that is a good change semanticly, but if you disagree I can find a
 different fix.
 #473 - MSSQL bug fix.

 Many thanks!

 Paul





[sqlalchemy] Re: how to unit test with SQ

2007-02-11 Thread Michael Bayer



On Feb 10, 8:36 pm, sqad [EMAIL PROTECTED] wrote:
 InvalidRequestError: Class 'str' entity name 'None' has no mapper
 associated with it

It's probably something to do with this:

 user.addresses.appent('One Washington','California')

which possibly adds a string to a collection that is expecting
instances of a mapped class (i.e. SomeClass, not str).
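In plain Python terms, the mistake looks like this; the Address class below is hypothetical, standing in for whatever mapped class the collection expects:

```python
class Address(object):
    """Stand-in for a mapped class the collection expects."""
    def __init__(self, street, state):
        self.street = street
        self.state = state

addresses = []
# Wrong: appending raw strings gives the collection objects with no mapper:
#   addresses.append('One Washington')
# Right: append an instance of the mapped class instead.
addresses.append(Address('One Washington', 'California'))
```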


