[sqlalchemy] Re: Premature autoflushing leads to IntegrityError with AssociationProxy and a backref

2011-02-17 Thread Julien Demoor
Great, thanks a lot!

On Feb 17, 7:05 pm, Michael Bayer  wrote:
> the stack trace tells all for autoflush situations.  Note this is an 0.7 
> stacktrace, 0.6 is slightly different but the same series of steps:
>
>   File "test.py", line 107, in <module>
>     group = Group([item1, item2])
>   File "<string>", line 4, in __init__
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state.py", line 105, 
> in initialize_instance
>     return manager.original_init(*mixed[1:], **kwargs)
>
>  File "test.py", line 68, in __init__
> ---> 1.    self.items_by_owner[item.owner] = item
>
>   File 
> "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/ext/associationproxy.py", line 
> 600, in __setitem__
>     self.col[key] = self._create(key, value)
>
>   File 
> "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/ext/associationproxy.py", line 
> 585, in _create
> ---> 2.    return self.creator(key, value)
>
>   File "test.py", line 62, in create_for_proxy
> ---> 3.     return cls(None, owner, item)
>
>  File "<string>", line 4, in __init__
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state.py", line 105, 
> in initialize_instance
>     return manager.original_init(*mixed[1:], **kwargs)
>
>   File "test.py", line 57, in __init__
> ---> 4, 5.    self.item = item
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 149, in __set__
>     instance_dict(instance), value, None)
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 642, in set
>     value = self.fire_replace_event(state, dict_, value, old, initiator)
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 662, in fire_replace_event
>     value = fn(state, value, previous, initiator or self)
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 932, in set_
> ---> 6.    passive=PASSIVE_NO_FETCH)
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 436, in append
>     self.set(state, dict_, value, initiator, passive=passive)
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 638, in set
> ---> 7.    old = self.get(state, dict_, passive=PASSIVE_ONLY_PERSISTENT)
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
> 414, in get
>     value = self.callable_(state, passive)
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 
> 542, in _load_for_state
> ---> 8.     result = q.all()
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 
> 1636, in all
>     return list(self)
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 
> 1745, in __iter__
>     self.session._autoflush()
>
>   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 
> 901, in _autoflush
> ---> 9.    self.flush()
>
> 1. items_by_owner assignment
> 2. association proxy receives assignment, calls creator
> 3. creator is GroupOwner.create_for_proxy
> 4. GroupOwner constructor
> 5. assign self.item = item
> 6. group_owners backref must assign
> 7. group_owners backref is uselist=False, so the old value must be placed in 
> the "deleted" collection since the new value replaces it.  The "deleted" 
> collection is how the session knows to assign NULL to the item's foreign key.
> 8. Item.group_owners is not present in __dict__, so the old value must be 
> loaded.  There's no old value in this case, but SQLA doesn't know that until 
> it loads.
> 9. autoflush
>
> Solution 1:
>
> initialize group_owners to None:
>
> class Item(object):
>    def __init__(self, name, owner):
>        self.name = name
>        self.owner = owner
>        self.group_owners = None
>
> solution 2:
>
> disable autoflush in the GroupOwner constructor.  
> See http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisableAutoflush.
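
A minimal sketch of solution 2, assuming the session can be reached from the
passed-in item via sqlalchemy.orm.object_session(); the DisableAutoflush
recipe linked above wraps the same idea in a reusable context manager:

from sqlalchemy.orm import object_session

class GroupOwner(object):
    def __init__(self, group, owner, item):
        # Turn autoflush off (if the item is already in a session) while the
        # relationship attributes are assigned, then restore the old setting.
        session = object_session(item)
        saved = session.autoflush if session is not None else None
        if session is not None:
            session.autoflush = False
        try:
            self.group = group
            self.owner = owner
            self.item = item
        finally:
            if session is not None:
                session.autoflush = saved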
>
> On Feb 17, 2011, at 5:56 AM, Julien Demoor wrote:
>
> > Hi,
>
> > I'm running into a problem illustrated by the code below. The result
> > is an IntegrityError:
>
> > sqlalchemy.exc.IntegrityError: (IntegrityError) null value in column
> > "group_id" violates not-null constraint
> > 'INSERT INTO groups_owners (owner_id, item_id) VALUES (%(owner_id)s, %
> > (item_id)s) RETURNING groups_owners.group_id' {'item_id': 1,
> > 'owner_id': 1}
>
> > Looking at th

[sqlalchemy] Premature autoflushing leads to IntegrityError with AssociationProxy and a backref

2011-02-17 Thread Julien Demoor
Hi,

I'm running into a problem illustrated by the code below. The result
is an IntegrityError:

sqlalchemy.exc.IntegrityError: (IntegrityError) null value in column
"group_id" violates not-null constraint
 'INSERT INTO groups_owners (owner_id, item_id) VALUES (%(owner_id)s, %
(item_id)s) RETURNING groups_owners.group_id' {'item_id': 1,
'owner_id': 1}

Looking at the stack trace, autoflush is triggered by the assignments
in GroupOwner.__init__(), but I fail to see why or what to do about
it.

The error appeared when I set a backref with uselist=False on the
GroupOwner.item relationship. I can work around the problem by using a
list instead, although that would make less sense since there's at
most one group per item. Is there an error in my relationship
configuration?

I'm running SA 0.6.6.

 Code to reproduce the error 

from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy import Table, Column, Integer, Numeric, String, \
    ForeignKey, MetaData
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import sessionmaker, mapper, relationship, backref

meta = MetaData()
engine = create_engine('postgresql://test:test@localhost/test')
Session = sessionmaker(bind=engine)
session = Session()

owners = Table("owners", meta,
    Column('id', Integer, primary_key=True),
    Column('name', String)
)

items = Table("items", meta,
    Column('id', Integer, primary_key=True),
    Column('name', String),
    Column('owner_id', Integer, ForeignKey('owners.id'))
)

groups = Table("groups", meta,
    Column('id', Integer, primary_key=True),
)

groups_owners = Table("groups_owners", meta,
    Column('group_id', Integer, ForeignKey('groups.id'), primary_key=True),
    Column('owner_id', Integer, ForeignKey('owners.id'), primary_key=True),
    # notice unique=True; items can belong to one group at most
    Column('item_id', Integer, ForeignKey('items.id'), nullable=False, unique=True)
)

def _create_holding(stock, shares):
    """A creator function, constructs Holdings from Stock and share
    quantity."""
    return Holding(stock=stock, shares=shares)

class Owner(object):
    def __init__(self, name):
        self.name = name

    holdings = association_proxy('by_stock', 'shares',
                                 creator=_create_holding)

class Item(object):
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner


class GroupOwner(object):
    def __init__(self, group, owner, item):
        self.group = group
        self.owner = owner
        self.item = item
        #assert item.owner is owner

    @classmethod
    def create_for_proxy(cls, owner, item):
        return cls(None, owner, item)

class Group(object):
    def __init__(self, items):
        if items:
            for item in items:
                self.items_by_owner[item.owner] = item

    items_by_owner = association_proxy('group_owners', 'item',
                                       creator=GroupOwner.create_for_proxy)


mapper(Owner, owners)
mapper(Item, items, properties={
    'owner': relationship(Owner),
})
mapper(Group, groups, properties={
    'group_owners': relationship(GroupOwner,
        collection_class=attribute_mapped_collection('owner')),
})
mapper(GroupOwner, groups_owners, properties={
    'group': relationship(Group),
    # Fails
    'item': relationship(Item, backref=backref('group_owners',
        uselist=False), single_parent=True),
    # Doesn't fail
    #'item': relationship(Item, backref=backref('group_owners'),
    #    single_parent=True),
    'owner': relationship(Owner),
})

meta.drop_all(bind=engine, checkfirst=True)
meta.create_all(bind=engine)

owner1 = Owner('owner 1')
item1 = Item('item 1', owner1)

owner2 = Owner('owner 2')
item2 = Item('item 2', owner2)

session.add(owner1)
session.add(owner2)
session.add(item1)
session.add(item2)

session.flush()

# Fails:
group = Group([item1, item2])
# Doesn't fail
#group = Group([item1])


session.add(group)
session.commit()

assert session.query(GroupOwner).filter_by(group_id=1, item_id=1,
                                           owner_id=1).first() is not None
print 'Done.'




RE: [sqlalchemy] Cannot retrieve PostgreSQL array column with session.query() : TypeError: unhashable type: 'list'

2010-10-14 Thread Julien Demoor
Thank you very much!

-----Original Message-----
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On
Behalf Of Michael Bayer
Sent: Thursday, October 14, 2010 18:08
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Cannot retrieve PostgreSQL array column with
session.query() : TypeError: unhashable type: 'list'


On Oct 14, 2010, at 11:53 AM, Julien Demoor wrote:

> Thanks again for your help.
> 
> I had tried using TypeDecorator without success. Now I tested further and
> found the problem is more narrow in scope than I thought. If I create a
> table MyTable that contains a column MyArray, with MyArray a TypeDecorator
> subclass that converts lists to tuples, insert a row, then do
> session.query(MyTable).get(row_id), it works fine and I get a tuple for my
> array. (The code is at the bottom of this message)
> 
> If I do session.query(MyTable).from_statement('SELECT ... FROM
> my_table;').first(), then MyArray.process_result_value() is not called and
> the returned instance's array attribute is a list rather than a tuple. In
> fact, ARRAY's result processor is not used either in that case. I added some
> print statements to ResultMetaData.__init__ to try to understand why: with
> a regular query, the column type is MyArray; with a query that uses
> from_statement(), the column type is NullType.
> 
> From there I'm lost. Is there a way to force Query() to apply a column
> type with from_statement()?

oh, right, with from_statement() SQLA knows nothing about the types - and in
fact in that case you are getting psycopg2's returned array directly.

For that you can use the text() construct:

query.from_statement(text("select * from ...",
    typemap={'your_array_column': MyArrayType}))
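
A minimal sketch of that, reusing the Foo/MyArray definitions from the code
quoted below; I'm assuming the typemap key is the column name as it appears
in the raw result (here my_array for the SELECT * query):

from sqlalchemy import Integer, text

stmt = text("SELECT * FROM foo WHERE bar = -1;",
            typemap={'my_array': MyArray(Integer, mutable=False)})
foo_obj = session.query(Foo).from_statement(stmt).first()
print foo_obj.my_array   # now a tuple, via MyArray.process_result_value()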




> 
> CODE :
> 
> import os
> from sqlalchemy import create_engine, Table, Integer, MetaData, Column
> from sqlalchemy.orm import create_session, mapper
> from sqlalchemy.dialects.postgresql.base import ARRAY
> 
> sa_engine = create_engine(os.environ['TEST_DSN'])
> session = create_session(sa_engine, autoflush=True, expire_on_commit=True,
> autocommit=False)
> 
> from sqlalchemy import types
> class MyArray(types.TypeDecorator):
>   impl = ARRAY
> 
>   def process_bind_param(self, value, engine):
>   return value
> 
>   def process_result_value(self, value, engine):
>   print 'process_result_value() called'
>   if value is None:
>   return None
>   else:
>   return tuple(value)
>   def copy(self):
>   return MyArray(self.impl.item_type, self.impl.mutable)
> 
> metadata = MetaData(bind=sa_engine)
> foo = Table('foo', metadata, 
>   Column('bar', Integer, primary_key=True),
>   Column('my_array', MyArray(Integer, mutable=False))
> )
> class Foo(object):
>   pass
> mapper(Foo, foo)
> 
> foo_obj = session.query(Foo).from_statement("SELECT 1 AS foo_bar,
> '{1,2,3}'::integer[] AS foo_my_array;").first()
> print foo_obj.my_array # A list
> 
> foo.drop(checkfirst=True)
> foo.create()
> foo_obj = Foo()
> foo_obj.bar = -1
> foo_obj.my_array = [-1, -2]
> session.add(foo_obj)
> session.flush()
> session.expunge_all()
> del foo_obj
> 
> foo_obj = session.query(Foo).get(-1)
> print foo_obj.my_array # A tuple
> 
> session.expunge_all()
> del foo_obj
> 
> foo_obj = session.query(Foo).from_statement("SELECT * FROM foo WHERE
> bar=-1;").first()
> print foo_obj.my_array # A list
> 
> 
> -----Original Message-----
> From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On
> Behalf Of Michael Bayer
> Sent: Thursday, October 14, 2010 15:52
> To: sqlalchemy@googlegroups.com
> Subject: Re: [sqlalchemy] Cannot retrieve PostgreSQL array column with
> session.query() : TypeError: unhashable type: 'list'
> 
> 
> On Oct 14, 2010, at 4:38 AM, Julien Demoor wrote:
> 
>> Thanks for your reply. 
>> 
>> What you propose would work for me, but in the meantime I need a
> workaround.
> 
> If you need that exact pattern to work, build a TypeDecorator around ARRAY
> and have it return a tuple around the result.
> 
> TypeDecorator is described at:
> http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorator#sqlalchemy.types.TypeDecorator
> 
> The SQLAlchemy type is always involved in between where psycopg2 returns
> data and where unique_list() is called.
> 
> As far as changing Query, the unique_list() is what makes it such that if
> you load a Parent object with many Child objects in a joi

RE: [sqlalchemy] Cannot retrieve PostgreSQL array column with session.query() : TypeError: unhashable type: 'list'

2010-10-14 Thread Julien Demoor
Thanks again for your help.

I had tried using TypeDecorator without success. Now I tested further and
found the problem is more narrow in scope than I thought. If I create a
table MyTable that contains a column MyArray, with MyArray a TypeDecorator
subclass that converts lists to tuples, insert a row, then do
session.query(MyTable).get(row_id), it works fine and I get a tuple for my
array. (The code is at the bottom of this message)

If I do session.query(MyTable).from_statement('SELECT ... FROM
my_table;').first(), then MyArray.process_result_value() is not called and
the returned instance's array attribute is a list rather than a tuple. In
fact, ARRAY's result processor is not used either in that case. I added some
print statements to ResultMetaData.__init__ to try to understand why: with
a regular query, the column type is MyArray; with a query that uses
from_statement(), the column type is NullType.

From there I'm lost. Is there a way to force Query() to apply a column
type with from_statement()?

CODE :

import os
from sqlalchemy import create_engine, Table, Integer, MetaData, Column
from sqlalchemy.orm import create_session, mapper
from sqlalchemy.dialects.postgresql.base import ARRAY

sa_engine = create_engine(os.environ['TEST_DSN'])
session = create_session(sa_engine, autoflush=True, expire_on_commit=True,
                         autocommit=False)

from sqlalchemy import types
class MyArray(types.TypeDecorator):
    impl = ARRAY

    def process_bind_param(self, value, engine):
        return value

    def process_result_value(self, value, engine):
        print 'process_result_value() called'
        if value is None:
            return None
        else:
            return tuple(value)

    def copy(self):
        return MyArray(self.impl.item_type, self.impl.mutable)

metadata = MetaData(bind=sa_engine)
foo = Table('foo', metadata,
    Column('bar', Integer, primary_key=True),
    Column('my_array', MyArray(Integer, mutable=False))
)
class Foo(object):
    pass
mapper(Foo, foo)

foo_obj = session.query(Foo).from_statement(
    "SELECT 1 AS foo_bar, '{1,2,3}'::integer[] AS foo_my_array;").first()
print foo_obj.my_array # A list

foo.drop(checkfirst=True)
foo.create()
foo_obj = Foo()
foo_obj.bar = -1
foo_obj.my_array = [-1, -2]
session.add(foo_obj)
session.flush()
session.expunge_all()
del foo_obj

foo_obj = session.query(Foo).get(-1)
print foo_obj.my_array # A tuple

session.expunge_all()
del foo_obj

foo_obj = session.query(Foo).from_statement(
    "SELECT * FROM foo WHERE bar=-1;").first()
print foo_obj.my_array # A list


-----Original Message-----
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On
Behalf Of Michael Bayer
Sent: Thursday, October 14, 2010 15:52
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Cannot retrieve PostgreSQL array column with
session.query() : TypeError: unhashable type: 'list'


On Oct 14, 2010, at 4:38 AM, Julien Demoor wrote:

> Thanks for your reply. 
> 
> What you propose would work for me, but in the meantime I need a
workaround.

If you need that exact pattern to work, build a TypeDecorator around ARRAY
and have it return a tuple around the result.

TypeDecorator is described at:
http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorator#sqlalchemy.types.TypeDecorator

The SQLAlchemy type is always involved in between where psycopg2 returns
data and where unique_list() is called.

As far as changing Query, the unique_list() is what makes it such that if
you load a Parent object with many Child objects in a joined-load
collection, you get just one Parent and not the same Parent for as many
Child objects as are in the result set.
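
A minimal sketch of that workaround, along the same lines as the MyArray
class shown earlier in this thread; the TupleArray name is just for
illustration, and ARRAY/mutable=False are the 0.6-era postgresql dialect API:

from sqlalchemy import types
from sqlalchemy.dialects.postgresql.base import ARRAY

class TupleArray(types.TypeDecorator):
    """Wrap ARRAY so result values come back as hashable tuples."""
    impl = ARRAY

    def process_result_value(self, value, dialect):
        # psycopg2 hands back a list; convert it so unique_list() can hash it
        return tuple(value) if value is not None else None

    def copy(self):
        return TupleArray(self.impl.item_type, self.impl.mutable)

# e.g. Column('my_array', TupleArray(Integer, mutable=False)) instead of ARRAY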



> I thought of serializing the arrays in SQL, then converting back to tuples
> in my code, but the objects contained in the arrays are quite complex to
> parse (decimals, datetimes...). So I tried patching the ARRAY class to
> return tuples when mutable=False, and that had no effect. If I understand
> correctly, by the time unique_list() is called, ARRAY hasn't been involved
> yet, and the database's array is converted to a Python list by the driver
> (psycopg2 in my case).
> 
> The workaround I've found is to make the following change, in
> sqlalchemy.orm.query.Query.instances :
> 
> if filtered:
>#if single_entity:
>#filter = lambda x: util.unique_list(x, util.IdentitySet)
>#else:
>#filter = util.unique_list
>filter = lambda x: util.unique_list(x, util.IdentitySet)
> 
> Should I expect negative side-effects from this?
> 
> -Message d'origine-
> De : sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] De
la
> part de Michael Bayer
> Envoyé : mercre

RE: [sqlalchemy] Cannot retrieve PostgreSQL array column with session.query() : TypeError: unhashable type: 'list'

2010-10-14 Thread Julien Demoor
Thanks for your reply. 

What you propose would work for me, but in the meantime I need a workaround.
I thought of serializing the arrays in SQL, then converting back to tuples
in my code, but the objects contained in the arrays are quite complex to
parse (decimals, datetimes...). So I tried patching the ARRAY class to return
tuples when mutable=False, and that had no effect. If I understand
correctly, by the time unique_list() is called, ARRAY hasn't been involved
yet, and the database's array is converted to a Python list by the driver
(psycopg2 in my case).

The workaround I've found is to make the following change in
sqlalchemy.orm.query.Query.instances:

if filtered:
    #if single_entity:
    #    filter = lambda x: util.unique_list(x, util.IdentitySet)
    #else:
    #    filter = util.unique_list
    filter = lambda x: util.unique_list(x, util.IdentitySet)

Should I expect negative side-effects from this?

-----Original Message-----
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On
Behalf Of Michael Bayer
Sent: Wednesday, October 13, 2010 23:37
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Cannot retrieve PostgreSQL array column with
session.query() : TypeError: unhashable type: 'list'

The Query runs the result through unique_list() anytime there are mapped
entities in the columns list.  The ARRAY result, returning a Python list
[], isn't hashable, so that's that.

If you only queried for columns, it wouldn't be running through
unique_list().

I suppose we'd modify ARRAY to return tuples if its "mutable" flag isn't
set.  That could only be in 0.7, though.  Let me know if that works for
you, we'll add a ticket (hard for me to say since I never use the ARRAY
type).
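
A minimal sketch of the columns-only variant, reusing the Foo/foo setup from
the original post quoted below; with no mapped entity in the columns list,
the rows never go through unique_list(), so the unhashable list value is not
a problem:

row = session.query('bar', 'col').from_statement(
    "SELECT 55 AS bar, '{1,2,3}'::integer[] AS col;").first()
print row   # (55, [1, 2, 3]); plain columns, no de-duplication needed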



On Oct 13, 2010, at 2:22 PM, Julien Demoor wrote:

> Hello,
> 
> The problem I'm seeing is illustrated by the code below. I tried a
> workaround using TypeDecorator with process_result_value returning a
> tuple rather than a list, to no avail.
> 
> Any help will be greatly appreciated.
> 
> Regards.
> 
> Traceback :
> 
> Traceback (most recent call last):
>  File "satest2.py", line 23, in <module>
>session.query(Foo, 'col').from_statement("SELECT 55 AS foo_bar,
> '{1,2,3}'::integer[] AS col;").first()
>  File "/home/user/programs/env/lib/python2.6/site-packages/sqlalchemy/
> orm/query.py", line 1494, in first
>ret = list(self)[0:1]
>  File "/home/user/programs/env/lib/python2.6/site-packages/sqlalchemy/
> orm/query.py", line 1682, in instances
>rows = filter(rows)
>  File "/home/jdemoor/programs/km/lib/python2.6/site-packages/
> sqlalchemy/util.py", line 1193, in unique_list
>return [x for x in seq if x not in seen and not seen.add(x)]
> TypeError: unhashable type: 'list'
> 
> Full code :
> 
> import os
> from sqlalchemy import create_engine, Table, Integer, MetaData, Column
> from sqlalchemy.orm import create_session, mapper
> 
> sa_engine = create_engine(os.environ['TEST_DSN'])
> session = create_session(sa_engine, autoflush=True,
> expire_on_commit=True, autocommit=False)
> 
> metadata = MetaData()
> foo = Table('foo', metadata, Column('bar', Integer, primary_key=True))
> class Foo(object):
>   pass
> mapper(Foo, foo)
> 
> # This works
> assert session.query('col').from_statement("SELECT 'abc' AS
> col;").first() == ('abc',)
> assert session.query('col').from_statement("SELECT
> '{1,2,3}'::integer[] AS col;").first() == ([1,2,3],)
> assert session.query('col1', 'col2').from_statement("SELECT
> '{1,2,3}'::integer[] AS col1, 'abc' AS col2;").first() == ([1,2,3],
> 'abc')
> foo_obj = session.query(Foo).from_statement("SELECT 1 AS
> foo_bar;").first()
> assert foo_obj.bar == 1
> 
> try:
>   # This fails
>   session.query(Foo, 'col').from_statement("SELECT 55 AS foo_bar,
> '{1,2,3}'::integer[] AS col;").first()
> except TypeError, e:
>   print e
> 
> from sqlalchemy.dialects.postgresql.base import ARRAY
> col = Column('col', ARRAY(Integer, mutable=False))
> try:
>   # This fails too
>   session.query(Foo, col).from_statement("SELECT 55 AS foo_bar,
> '{1,2,3}'::integer[] AS col;").first()
> except TypeError, e:
>   print e
> 

[sqlalchemy] Cannot retrieve PostgreSQL array column with session.query() : TypeError: unhashable type: 'list'

2010-10-13 Thread Julien Demoor
Hello,

The problem I'm seeing is illustrated by the code below. I tried a
workaround using TypeDecorator with process_result_value returning a
tuple rather than a list, to no avail.

Any help will be greatly appreciated.

Regards.

Traceback :

Traceback (most recent call last):
  File "satest2.py", line 23, in <module>
session.query(Foo, 'col').from_statement("SELECT 55 AS foo_bar,
'{1,2,3}'::integer[] AS col;").first()
  File "/home/user/programs/env/lib/python2.6/site-packages/sqlalchemy/
orm/query.py", line 1494, in first
ret = list(self)[0:1]
  File "/home/user/programs/env/lib/python2.6/site-packages/sqlalchemy/
orm/query.py", line 1682, in instances
rows = filter(rows)
  File "/home/jdemoor/programs/km/lib/python2.6/site-packages/
sqlalchemy/util.py", line 1193, in unique_list
return [x for x in seq if x not in seen and not seen.add(x)]
TypeError: unhashable type: 'list'

Full code :

import os
from sqlalchemy import create_engine, Table, Integer, MetaData, Column
from sqlalchemy.orm import create_session, mapper

sa_engine = create_engine(os.environ['TEST_DSN'])
session = create_session(sa_engine, autoflush=True,
                         expire_on_commit=True, autocommit=False)

metadata = MetaData()
foo = Table('foo', metadata, Column('bar', Integer, primary_key=True))
class Foo(object):
    pass
mapper(Foo, foo)

# This works
assert session.query('col').from_statement(
    "SELECT 'abc' AS col;").first() == ('abc',)
assert session.query('col').from_statement(
    "SELECT '{1,2,3}'::integer[] AS col;").first() == ([1,2,3],)
assert session.query('col1', 'col2').from_statement(
    "SELECT '{1,2,3}'::integer[] AS col1, 'abc' AS col2;").first() == ([1,2,3], 'abc')
foo_obj = session.query(Foo).from_statement("SELECT 1 AS foo_bar;").first()
assert foo_obj.bar == 1

try:
    # This fails
    session.query(Foo, 'col').from_statement(
        "SELECT 55 AS foo_bar, '{1,2,3}'::integer[] AS col;").first()
except TypeError, e:
    print e

from sqlalchemy.dialects.postgresql.base import ARRAY
col = Column('col', ARRAY(Integer, mutable=False))
try:
    # This fails too
    session.query(Foo, col).from_statement(
        "SELECT 55 AS foo_bar, '{1,2,3}'::integer[] AS col;").first()
except TypeError, e:
    print e




[sqlalchemy] Re: Session uses new connection after commit

2010-08-24 Thread Julien Demoor
Thanks a lot, that solves my problem. Also it seems to work well
without instantiating the scoped session, so this is perfect.

On Aug 24, 3:13 pm, Michael Bayer  wrote:
> On Aug 24, 2010, at 6:12 AM, Julien Demoor wrote:
>
>
>
> > Hi,
>
> > I'm using PostgreSQL advisory locks in a multithreaded program. Worker
> > threads acquire locks with
> > session.execute(select([func.pg_advisory_lock(key)])) during a
> > transaction and release them just after  session.commit(). Sometimes
> > however, the connection behind the thread's session will have changed
> > after COMMIT, making it impossible for the thread to release the locks
> > it acquired. Note that I'm using scoped_session()'s class methods
> > everywhere.
>
> > A python script to reproduce the problem and its output are attached
> > below.
>
> > Is there a way to force the session to use the same connection within
> > a thread ? Would I then have to recycle the connections from time to
> > time ?
>
> I think the most direct way is to bind the Session to a specific connection:
>
> conn = engine.connect()
> sess = Session(bind=conn)
>
> then when the scope of work with the connection is complete:
>
> sess.close()
> conn.close()
>
> the "indirect" way would be to play games with the connection pool, but in 
> this case your application should already have explicit boundaries where 
> you'd like this connection to stay in play so the above approach is 
> straightforward.
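
A minimal sketch of the bound-connection approach, adapted to the
advisory-lock pattern in the script quoted below; sa_engine and
create_session are the names from that script, and the lock key is
illustrative:

from sqlalchemy.sql import func, select

# Pin the session to one connection so the lock and its release are
# guaranteed to happen on the same PostgreSQL backend, across commits.
conn = sa_engine.connect()
sess = create_session(bind=conn, autoflush=True, autocommit=False)
try:
    sess.execute(select([func.pg_advisory_lock(42)]))
    # ... transactional work ...
    sess.commit()
    sess.execute(select([func.pg_advisory_unlock(42)]))
finally:
    sess.close()
    conn.close()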
>
>
>
> > Thanks,
>
> > Julien
>
> > === sa_pg_advisory_locks.py ==
>
> > # -*- coding: utf8 -*-
> > import random
> > import os
> > import threading
> > import time
>
> > from sqlalchemy import MetaData
> > from sqlalchemy.engine import create_engine
> > from sqlalchemy.orm import create_session, scoped_session,
> > sessionmaker, reconstructor
> > from sqlalchemy.sql import func, select
>
> > sa_engine = create_engine(os.environ['TEST_DSN'])
>
> > session = scoped_session(lambda: create_session(sa_engine,
> > autoflush=True, expire_on_commit=True, autocommit=False))
>
> > # Toggle this switch to see the difference in behaviour
> > COMMIT_BEFORE_LOCK_RELEASE = True
> > # COMMIT_BEFORE_LOCK_RELEASE = False
>
> > print 'Will commit %s releasing advisory lock' % ('before' if
> > COMMIT_BEFORE_LOCK_RELEASE else 'after')
>
> > # Synchronize program termination
> > event = threading.Event()
>
> > # Test function, will run concurrently in two threads
> > def run_test():
> >    try:
> >            i = 0
> >            while 1:
> >                    if event.isSet() or i >= 100:
> >                            break
> >                    # Show sign of life
> >                    if i and (i % 50 == 0):
> >                            print i
> >                    key = random.randint(1,2**16)
> >                    pid, _ = session.execute(select([func.pg_backend_pid(),
> > func.pg_advisory_lock(key)])).fetchone()
> >                    now = session.execute(select([func.now()])).scalar()
> >                    if COMMIT_BEFORE_LOCK_RELEASE:
> >                            session.commit()
> >                    pid_, unlocked = 
> > session.execute(select([func.pg_backend_pid(),
> > func.pg_advisory_unlock(key)])).fetchone()
> >                    if unlocked:
> >                            assert pid_ == pid
> >                    else:
> >                            raise AssertionError('Iteration %i, acquisition 
> > pid %i, release
> > pid %i\n' % (i, pid, pid_))
> >                    if not COMMIT_BEFORE_LOCK_RELEASE:
> >                            session.commit()
> >                    i += 1
> >    except Exception:
> >            event.set()
> >            raise
> >    event.set()
>
> > for i in xrange(10):
> >    thread = threading.Thread(target=run_test)
> >    thread.daemon = True
> >    thread.start()
>
> > event.wait()
> > time.sleep(1)
>
> > == output ==
>
> > u...@host ~
> > $ TEST_DSN=$CPYTHON_DSN python sa_pg_advisory_locks.py
> > Will commit before releasing advisory lock
> > Exception in thread Thread-10:
> > Traceback (most recent call last):
> >  File "/usr/lib/python2.6/threading.py", line 532, in
> > __bootstrap_inner
> >    self.run()
> >  File "/usr/lib/python2.6/threading.py", line 484, in run
> >    self.__target(*self.__args, **self._

[sqlalchemy] Session uses new connection after commit

2010-08-24 Thread Julien Demoor
Hi,

I'm using PostgreSQL advisory locks in a multithreaded program. Worker
threads acquire locks with
session.execute(select([func.pg_advisory_lock(key)])) during a
transaction and release them just after  session.commit(). Sometimes
however, the connection behind the thread's session will have changed
after COMMIT, making it impossible for the thread to release the locks
it acquired. Note that I'm using scoped_session()'s class methods
everywhere.

A python script to reproduce the problem and its output are attached
below.

Is there a way to force the session to use the same connection within
a thread? Would I then have to recycle the connections from time to
time?

Thanks,

Julien

=== sa_pg_advisory_locks.py ==

# -*- coding: utf8 -*-
import random
import os
import threading
import time

from sqlalchemy import MetaData
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import create_session, scoped_session, \
    sessionmaker, reconstructor
from sqlalchemy.sql import func, select

sa_engine = create_engine(os.environ['TEST_DSN'])

session = scoped_session(lambda: create_session(sa_engine,
    autoflush=True, expire_on_commit=True, autocommit=False))

# Toggle this switch to see the difference in behaviour
COMMIT_BEFORE_LOCK_RELEASE = True
# COMMIT_BEFORE_LOCK_RELEASE = False

print 'Will commit %s releasing advisory lock' % ('before' if
    COMMIT_BEFORE_LOCK_RELEASE else 'after')

# Synchronize program termination
event = threading.Event()

# Test function, will run concurrently in two threads
def run_test():
    try:
        i = 0
        while 1:
            if event.isSet() or i >= 100:
                break
            # Show sign of life
            if i and (i % 50 == 0):
                print i
            key = random.randint(1, 2**16)
            pid, _ = session.execute(select([func.pg_backend_pid(),
                func.pg_advisory_lock(key)])).fetchone()
            now = session.execute(select([func.now()])).scalar()
            if COMMIT_BEFORE_LOCK_RELEASE:
                session.commit()
            pid_, unlocked = session.execute(select([func.pg_backend_pid(),
                func.pg_advisory_unlock(key)])).fetchone()
            if unlocked:
                assert pid_ == pid
            else:
                raise AssertionError('Iteration %i, acquisition pid %i, '
                    'release pid %i\n' % (i, pid, pid_))
            if not COMMIT_BEFORE_LOCK_RELEASE:
                session.commit()
            i += 1
    except Exception:
        event.set()
        raise
    event.set()

for i in xrange(10):
    thread = threading.Thread(target=run_test)
    thread.daemon = True
    thread.start()

event.wait()
time.sleep(1)

== output ==

u...@host ~
$ TEST_DSN=$CPYTHON_DSN python sa_pg_advisory_locks.py
Will commit before releasing advisory lock
Exception in thread Thread-10:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in
__bootstrap_inner
self.run()
  File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
  File "sa_pg_advisory_locks.py", line 44, in run_test
raise AssertionError('Iteration %i, acquisition pid %i, release
pid %i\n' % (i, pid, pid_))
AssertionError: Iteration 6, acquisition pid 16676, release pid 27340


Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in
__bootstrap_inner
self.run()
  File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
  File "sa_pg_advisory_locks.py", line 44, in run_test
raise AssertionError('Iteration %i, acquisition pid %i, release
pid %i\n' % (i, pid, pid_))
AssertionError: Iteration 12, acquisition pid 27340, release pid 16676


Exception in thread Thread-7:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in
__bootstrap_inner
self.run()
  File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
  File "sa_pg_advisory_locks.py", line 44, in run_test
raise AssertionError('Iteration %i, acquisition pid %i, release
pid %i\n' % (i, pid, pid_))
AssertionError: Iteration 10, acquisition pid 18248, release pid 8452


Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in
__bootstrap_inner
self.run()
  File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
  File "sa_pg_advisory_locks.py", line 44, in run_test
raise AssertionError('Iteration %i, acquisition pid %i, release
pid %i\n' % (i, pid, pid_))
AssertionError: Iteration 18, acquisition pid 8452, release pid 

[sqlalchemy] Joined subquery, SQL syntax error

2009-07-16 Thread Julien Demoor

Hello,

I have a query that fails when executed: the engine is passed the
string representation of a sqlalchemy.sql.expression._BinaryExpression
object as a parameter, resulting in a syntax error. This expression is
part of a subquery which is joined to a table. There is a simplified
example below along with the resulting traceback.

Is this a problem with my query or a bug in SA?

Thanks in advance.

-- Test code --

from sqlalchemy import MetaData, create_engine
from sqlalchemy.orm import create_session
from sqlalchemy import Table, Column, Integer, Unicode, String, \
    DateTime, ForeignKey, Boolean, \
    desc, select, and_, or_, subquery, CheckConstraint
from sqlalchemy.orm import relation, backref, mapper, eagerload
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.sql import func, label, union

metadata = MetaData()
engine = create_engine('sqlite://')
session = create_session(engine, autoflush=True, autocommit=False)

foo_table = Table('foo', metadata,
    Column('id', Integer, primary_key=True, nullable=False),
)

bar_table = Table('bar', metadata,
    Column('id', Integer, primary_key=True, nullable=False),
    Column('foo_id', Integer, ForeignKey('foo.id')),
    Column('v', Integer),
)

sub = select([bar_table.c.foo_id, func.sum(bar_table.c.v).label('sum')]).\
    where(func.case([(bar_table.c.v > 1, True)], else_=False)).\
    group_by(bar_table.c.foo_id)

query = select([foo_table.c.id, sub.c.sum], from_obj=[foo_table.join(
    bar_table, foo_table.c.id == bar_table.c.foo_id)])

print list(session.query(query))

-- Results below --

$ python test.py
Traceback (most recent call last):
  File "test.py", line 29, in <module>
print list(session.query(query))
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/orm/query.py", line 1287, in __iter__
return self._execute_and_instances(context)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/orm/query.py", line 1290, in _execute_and_instances
result = self.session.execute(querycontext.statement,
params=self._params, mapper=self._mapper_zero_or_none())
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/orm/session.py", line 755, in execute
clause, params or {})
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/engine/base.py", line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/engine/base.py", line 874, in _execute_clauseelement
return self.__execute_context(context)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/engine/base.py", line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/engine/base.py", line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/
sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.OperationalError: (OperationalError) near "GROUP":
syntax error u'SELECT id AS id, sum AS sum \nFROM (SELECT foo.id AS
id, sum \nFROM (SELECT bar.foo_id AS foo_id, sum(bar.v) AS sum \nFROM
bar \nWHERE case(?) GROUP BY bar.foo_id), foo JOIN bar ON foo.id =
bar.foo_id)' [[(, True)]]

-- End --
