[sqlalchemy] Bug with sqlalchemy==1.2.11 causing a relation error that didn't exist with 1.2.10

2018-08-27 Thread Curtis Castrapel
Hi,

I have an odd issue with sqlachemy==1.2.11 that I haven't seen before. My 
unit tests were failing locally after the upgrade for Lemur 
( https://github.com/Netflix/lemur/ ) . When I test locally, tests pass 
fine with sqlachemy==1.2.10, but fail with sqlachemy==1.2.11 . 

Here is the travis CI: https://travis-ci.org/Netflix/lemur/builds/421142451

The table it's complaining about is defined 
here: 
https://github.com/Netflix/lemur/blob/master/lemur/dns_providers/schemas.py 
. It could certainly be an error with the schema, but it's odd to me that I 
am not able to reproduce this error on earlier versions. Could this be 
related to the newline I see in the traceback below? ' \nFROM 
dns_providers' ? 

I can run the query defined below on the Postgres database directly 
(without the \n character) without any issues: 'SELECT dns_providers.id AS 
dns_providers_id, dns_providers.name AS dns_providers_name, 
dns_providers.description AS dns_providers_description, 
dns_providers.provider_type AS dns_providers_provider_type, 
dns_providers.credentials AS dns_providers_credentials, 
dns_providers.api_endpoint AS dns_providers_api_endpoint, 
dns_providers.date_created AS dns_providers_date_created, 
dns_providers.status AS dns_providers_status, dns_providers.options AS 
dns_providers_options, dns_providers.domains AS dns_providers_domains FROM 
dns_providers'

Here's an excerpt from the traceback:

"Traceback (most recent call last):
  File "/Users/ccastrapel/localrepos/lemur/lemur/common/managers.py", line 
60, in all
results.append(cls())
  File 
"/Users/ccastrapel/localrepos/lemur/lemur/plugins/lemur_acme/plugin.py", 
line 332, in __init__
self.acme = AcmeHandler()
  File 
"/Users/ccastrapel/localrepos/lemur/lemur/plugins/lemur_acme/plugin.py", 
line 47, in __init__
self.all_dns_providers = dns_provider_service.get_all_dns_providers()
  File "/Users/ccastrapel/localrepos/lemur/lemur/dns_providers/service.py", 
line 31, in get_all_dns_providers
return DnsProvider.query.all()
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/orm/query.py",
 
line 2836, in all
return list(self)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/orm/query.py",
 
line 2988, in __iter__
return self._execute_and_instances(context)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/orm/query.py",
 
line 3011, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/base.py",
 
line 948, in execute
return meth(self, multiparams, params)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/sql/elements.py",
 
line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/base.py",
 
line 1060, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/base.py",
 
line 1200, in _execute_context
context)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/base.py",
 
line 1413, in _handle_dbapi_exception
exc_info
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/util/compat.py",
 
line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/util/compat.py",
 
line 248, in reraise
raise value.with_traceback(tb)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/base.py",
 
line 1193, in _execute_context
context)
  File 
"/Users/ccastrapel/localrepos/lemur/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py",
 
line 509, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation 
"dns_providers" does not exist
LINE 2: FROM dns_providers
 ^
 [SQL: 'SELECT dns_providers.id AS dns_providers_id, dns_providers.name AS 
dns_providers_name, dns_providers.description AS dns_providers_description, 
dns_providers.provider_type AS dns_providers_provider_type, 
dns_providers.credentials AS dns_providers_credentials, 
dns_providers.api_endpoint AS dns_providers_api_endpoint, 
dns_providers.date_created AS dns_providers_date_created, 
dns_providers.status AS dns_providers_status, dns_providers.options AS 
dns_providers_options, dns_providers.domains AS dns_providers_domains 
\nFROM dns_providers'] (Background on this error at: 
http://sqlalche.me/e/f405)"


Thanks,
Curtis

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example 

Re: [sqlalchemy] Bug? Query / Execute Mismatch When Given Duplicate Columns

2018-07-03 Thread Jonathan Vanasco


On Tuesday, July 3, 2018 at 9:37:04 AM UTC-4, Mike Bayer wrote:
>
> This architecture has been liberalized but this assumption still remains 
> within the Core and it's possible the ORM may or may not have some 
> remaining reliance on this assumption as well.
>

I assumed the RowProxy also requires this as well, since it works much like 
a namedtuple.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug? Query / Execute Mismatch When Given Duplicate Columns

2018-07-03 Thread Mike Bayer
Yes that's intentional, the select() construct deduplicates redundant
column names as for many years the result system relied upon column names
in the cursor to organize its results.   This architecture has been
liberalized but this assumption still remains within the Core and it's
possible the ORM may or may not have some remaining reliance on this
assumption as well.

Give each column , or one of them, a distinct label() and you'll get two
columns.

On Mon, Jul 2, 2018, 7:54 PM Michael Tartre  wrote:

> Hey there!
>
> I noticed the following odd behavior:
>
> import sqlalchemy as sa
> engine = sa.create_engine('sqlite:///tmp.db', echo=True)
> tblsa = sa.Table("mytable", sa.MetaData(), autoload=True,
> autoload_with=engine)
> q = sa.select([tblsa.c.x, tblsa.c.x])
> with engine.connect() as conn:
> print(list(conn.execute(q)))
>
> 2018-07-02 19:45:15,890 INFO sqlalchemy.engine.base.Engine SELECT test.x
> FROM test
> 2018-07-02 19:45:15,894 INFO sqlalchemy.engine.base.Engine ()
> [(1,), (2,), (3,), (4,), (5,)]
>
> Is that intended? I had previously assumed that the execution rows would 
> match the input, which seems like the most obvious behavior.
>
> Thanks for any clarification,
>
> Michael
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bug? Query / Execute Mismatch When Given Duplicate Columns

2018-07-02 Thread Michael Tartre
Hey there!

I noticed the following odd behavior:

import sqlalchemy as sa
engine = sa.create_engine('sqlite:///tmp.db', echo=True)
tblsa = sa.Table("mytable", sa.MetaData(), autoload=True, 
autoload_with=engine)
q = sa.select([tblsa.c.x, tblsa.c.x])
with engine.connect() as conn:
print(list(conn.execute(q)))

2018-07-02 19:45:15,890 INFO sqlalchemy.engine.base.Engine SELECT test.x 
FROM test
2018-07-02 19:45:15,894 INFO sqlalchemy.engine.base.Engine ()
[(1,), (2,), (3,), (4,), (5,)]

Is that intended? I had previously assumed that the execution rows would match 
the input, which seems like the most obvious behavior.

Thanks for any clarification,

Michael

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug report with JSON nulls in 1.1.x

2017-07-18 Thread vineet

>
> OK, this is great, it's likely not the "underscore" as much as that 
> the attribute is named differently from the column.  I will look to 
> confirm this and set up a complete bug report, thanks!
>
 
Ah, that makes a lot more sense. Thanks for looking into that!

Above, you are looking at "a.column2", did you mean "a._column2"? 
> There should be no ".column2" attribute on the object. 
>

Woops, typo. That's what I get for trying to clean up the code to make it 
look nicer without rerunning it. 

the JSON column will translate "None" into JSON.NULL as it goes to the 
> database.   The "default" here is a python side-default, so the python 
> side value will be maintained, so ideally we'd want to set 
> default=None, but that means, "there's no default" so we're in a 
> little bit of a pickle there.  this issue was discussed in 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3870/client-side-column-default-value
>  
> 
> , 
> where the solution was a new bit of documentation added to the notes 
> for JSON.   the solution is at 
>
> http://docs.sqlalchemy.org/en/latest/core/type_basics.html?highlight=json#sqlalchemy.types.JSON.NULL
>  
> and describes to set the default to a SQL expression that resolves to 
> JSON NULL; the ORM will fetch this default back from the database; the 
> flush process will expire the Python value so that this fetch occurs 
> prior to the usual expiration at commit time.Hopefully you're 
> using Postgresql in which case you can set eager_defaults=True on the 
> mapping and it will re-fetch the value inline with the INSERT using 
> RETURNING. 
>

Got it, thanks. I think this will work well for us. We are using Postgres 
as well :)

JSON.NULL is this 
> totally special value that the Core default mechanism doesn't yet 
> fully accommodate so that's why setting default=text("'null'") for now 
> is a workaround. 
>

Makes sense, and a big thanks for the quick responses!
 

On Tuesday, July 18, 2017 at 8:02:56 AM UTC-7, Mike Bayer wrote:
>
> On Tue, Jul 18, 2017 at 10:31 AM, Mike Bayer  > wrote: 
> > On Tue, Jul 18, 2017 at 12:47 AM,   
> wrote: 
> >> Hello, I'd like to report a bug report regarding JSON nulls and 
> >> underscore-prefixed attributes as we're upgrading from 1.0.17 to 
> 1.1.11. 
> >> 
> >> I read through the behavior at 
> >> 
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3514 
> >> and 
> >> 
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#behavior-change-3514,
>  
>
> >> all of which makes sense to me. 
> >> 
> >> However, this does not seem to work for attributes which are prefixed 
> with 
> >> an underscore. Example: 
> > 
> > OK, this is great, it's likely not the "underscore" as much as that 
> > the attribute is named differently from the column.  I will look to 
> > confirm this and set up a complete bug report, thanks! 
>
> set up w/ a patch at 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/4031/eval_none-logic-isnt-resolving-col-to-prop.
>  
>
>
> > 
> >> 
> >> class A(db.Model): 
> >> __tablename__ = 'a' 
> >> 
> >> 
> >> id = db.Column(db.Integer, primary_key=True) 
> >> column1 = db.Column('column1', JSON, nullable=False) 
> >> _column2 = db.Column('column2', JSON, nullable=False) 
> >> 
> >> # Succeeds 
> >> db.session.add(A(column1=[], _column2=[])) 
> >> db.session.flush() 
> >> 
> >> # Succeeds 
> >> db.session.add(A(column1=None, _column2=[])) 
> >> db.session.flush() 
> >> 
> >> # Fails with integrity error 
> >> db.session.add(A(column1=[], _column2=None)) 
> >> db.session.flush() 
> >> 
> >> 
> >> (psycopg2.IntegrityError) null value in column "column2" violates 
> not-null 
> >> constraint 
> >> DETAIL:  Failing row contains (5, [], null). 
> >>  [SQL: '-- manage.py:1242 in \nINSERT INTO a (column1) VALUES 
> >> (%(column1)s) RETURNING a.id'] [parameters: {'column1': '[]'}] 
> >> 
> >> Using db.Column('column2', JSON, nullable=False, default=JSON.NULL) 
> ends up 
> >> sending a JSON NULL, so basically it looks like it is ignoring 
> explicitly 
> >> set None values for attributes that start with an underscore. 
> >> 
> >> This is not workflow blocking for us (I will just change our usages to 
> also 
> >> use default=JSON.NULL), but wanted to file a bug report in case it's 
> >> something others run into as well! 
> >> 
> >> -- 
> >> SQLAlchemy - 
> >> The Python SQL Toolkit and Object Relational Mapper 
> >> 
> >> http://www.sqlalchemy.org/ 
> >> 
> >> To post example code, please provide an MCVE: Minimal, Complete, and 
> >> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> >> description. 
> >> --- 
> >> You received this message because you are subscribed to the Google 
> Groups 
> >> "sqlalchemy" group. 
> >> To 

Re: [sqlalchemy] Bug report with JSON nulls in 1.1.x

2017-07-18 Thread Mike Bayer
On Tue, Jul 18, 2017 at 10:31 AM, Mike Bayer  wrote:
> On Tue, Jul 18, 2017 at 12:47 AM,   wrote:
>> Hello, I'd like to report a bug report regarding JSON nulls and
>> underscore-prefixed attributes as we're upgrading from 1.0.17 to 1.1.11.
>>
>> I read through the behavior at
>> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3514
>> and
>> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#behavior-change-3514,
>> all of which makes sense to me.
>>
>> However, this does not seem to work for attributes which are prefixed with
>> an underscore. Example:
>
> OK, this is great, it's likely not the "underscore" as much as that
> the attribute is named differently from the column.  I will look to
> confirm this and set up a complete bug report, thanks!

set up w/ a patch at
https://bitbucket.org/zzzeek/sqlalchemy/issues/4031/eval_none-logic-isnt-resolving-col-to-prop.

>
>>
>> class A(db.Model):
>> __tablename__ = 'a'
>>
>>
>> id = db.Column(db.Integer, primary_key=True)
>> column1 = db.Column('column1', JSON, nullable=False)
>> _column2 = db.Column('column2', JSON, nullable=False)
>>
>> # Succeeds
>> db.session.add(A(column1=[], _column2=[]))
>> db.session.flush()
>>
>> # Succeeds
>> db.session.add(A(column1=None, _column2=[]))
>> db.session.flush()
>>
>> # Fails with integrity error
>> db.session.add(A(column1=[], _column2=None))
>> db.session.flush()
>>
>>
>> (psycopg2.IntegrityError) null value in column "column2" violates not-null
>> constraint
>> DETAIL:  Failing row contains (5, [], null).
>>  [SQL: '-- manage.py:1242 in \nINSERT INTO a (column1) VALUES
>> (%(column1)s) RETURNING a.id'] [parameters: {'column1': '[]'}]
>>
>> Using db.Column('column2', JSON, nullable=False, default=JSON.NULL) ends up
>> sending a JSON NULL, so basically it looks like it is ignoring explicitly
>> set None values for attributes that start with an underscore.
>>
>> This is not workflow blocking for us (I will just change our usages to also
>> use default=JSON.NULL), but wanted to file a bug report in case it's
>> something others run into as well!
>>
>> --
>> SQLAlchemy -
>> The Python SQL Toolkit and Object Relational Mapper
>>
>> http://www.sqlalchemy.org/
>>
>> To post example code, please provide an MCVE: Minimal, Complete, and
>> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
>> description.
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug report with JSON nulls in 1.1.x

2017-07-18 Thread Mike Bayer
On Tue, Jul 18, 2017 at 12:47 AM,   wrote:
> Hello, I'd like to report a bug report regarding JSON nulls and
> underscore-prefixed attributes as we're upgrading from 1.0.17 to 1.1.11.
>
> I read through the behavior at
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3514
> and
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#behavior-change-3514,
> all of which makes sense to me.
>
> However, this does not seem to work for attributes which are prefixed with
> an underscore. Example:

OK, this is great, it's likely not the "underscore" as much as that
the attribute is named differently from the column.  I will look to
confirm this and set up a complete bug report, thanks!

>
> class A(db.Model):
> __tablename__ = 'a'
>
>
> id = db.Column(db.Integer, primary_key=True)
> column1 = db.Column('column1', JSON, nullable=False)
> _column2 = db.Column('column2', JSON, nullable=False)
>
> # Succeeds
> db.session.add(A(column1=[], _column2=[]))
> db.session.flush()
>
> # Succeeds
> db.session.add(A(column1=None, _column2=[]))
> db.session.flush()
>
> # Fails with integrity error
> db.session.add(A(column1=[], _column2=None))
> db.session.flush()
>
>
> (psycopg2.IntegrityError) null value in column "column2" violates not-null
> constraint
> DETAIL:  Failing row contains (5, [], null).
>  [SQL: '-- manage.py:1242 in \nINSERT INTO a (column1) VALUES
> (%(column1)s) RETURNING a.id'] [parameters: {'column1': '[]'}]
>
> Using db.Column('column2', JSON, nullable=False, default=JSON.NULL) ends up
> sending a JSON NULL, so basically it looks like it is ignoring explicitly
> set None values for attributes that start with an underscore.
>
> This is not workflow blocking for us (I will just change our usages to also
> use default=JSON.NULL), but wanted to file a bug report in case it's
> something others run into as well!
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bug report with JSON nulls in 1.1.x

2017-07-17 Thread vineet
Hello, I'd like to report a bug report regarding JSON nulls and 
underscore-prefixed attributes as we're upgrading from 1.0.17 to 1.1.11.

I read through the behavior at 
http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3514 
and 
http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#behavior-change-3514,
 
all of which makes sense to me.

However, this does not seem to work for attributes which are prefixed with 
an underscore. Example:

class A(db.Model):
__tablename__ = 'a'


id = db.Column(db.Integer, primary_key=True)
column1 = db.Column('column1', JSON, nullable=False)
_column2 = db.Column('column2', JSON, nullable=False)

# Succeeds
db.session.add(A(column1=[], _column2=[]))
db.session.flush()

# Succeeds
db.session.add(A(column1=None, _column2=[]))
db.session.flush()

# Fails with integrity error
db.session.add(A(column1=[], _column2=None))
db.session.flush()


(psycopg2.IntegrityError) null value in column "column2" violates not-null 
constraint
DETAIL:  Failing row contains (5, [], null).
 [SQL: '-- manage.py:1242 in \nINSERT INTO a (column1) VALUES 
(%(column1)s) RETURNING a.id'] [parameters: {'column1': '[]'}]

Using db.Column('column2', JSON, nullable=False, default=JSON.NULL) ends up 
sending a JSON NULL, so basically it looks like it is ignoring explicitly 
set None values for attributes that start with an underscore.

This is not workflow blocking for us (I will just change our usages to also 
use default=JSON.NULL), but wanted to file a bug report in case it's 
something others run into as well!

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] BUG in union with limit in PostgreSQL

2017-05-12 Thread Jarek
Hello!

Sorry, I was using old sqlalchemy, after upgrade it works fine.

best regards
Jarek


Dnia 2017-05-12, pią o godzinie 08:54 -0400, mike bayer pisze:
> 
> On 05/12/2017 04:55 AM, Jarek wrote:
> > Hello!
> > 
> > It looks that SQLAlchemy doesn't properly handle union with limits in
> > the following scenario:
> > 
> > res1 = Session.query( Messages ).order_by( Messages.ts ).limit(100)
> > res2 = Session.query( Messages1 ).order_by( Messages1.ts ).limit(100)
> > res3 = res1.union_all( res2 )
> > 
> > SQLAlchemy creates the following final query:
> > 
> > SELECT  FROM Messages order by ts limit 100
> >UNION ALL SELECT  FROM Messages1 order by ts limit 100
> 
> 
> that's not at all what I get and that's not how Query.union_all() does 
> it, it wraps the inner things inside a subquery, works fine.  MCVE:
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.ext.declarative import declarative_base
> 
> Base = declarative_base()
> 
> 
> class A(Base):
>  __tablename__ = 'a'
>  id = Column(Integer, primary_key=True)
> 
> e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
> Base.metadata.create_all(e)
> 
> 
> s = Session(e)
> 
> A1 = aliased(A)
> res1 = s.query(A).order_by(A.id).limit(100)
> res2 = s.query(A1).order_by(A1.id).limit(100)
> res3 = res1.union_all(res2)
> 
> res3.all()
> 
> 
> query:
> 
> SELECT anon_1.a_id AS anon_1_a_id
> FROM ((SELECT a.id AS a_id
> FROM a ORDER BY a.id
>   LIMIT %(param_1)s) UNION ALL (SELECT a_1.id AS a_1_id
> FROM a AS a_1 ORDER BY a_1.id
>   LIMIT %(param_2)s)) AS anon_1
> 
> 
> 
> please provide accurate details, thanks
> 
> 
> 
> > 
> > Which fails with:
> > 
> > ProgrammingError: (psycopg2.ProgrammingError) syntax error at or near
> > "UNION"
> > 
> > To fix this, both queries should be enclosed in parenthesis:
> > 
> > (SELECT  FROM Messages order by ts limit 100 )
> >UNION ALL ( SELECT  FROM Messages1 order by ts limit 100 )
> > 
> > Best regards
> > Jarek
> > 
> > 
> > 
> > 
> 


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] BUG in union with limit in PostgreSQL

2017-05-12 Thread mike bayer



On 05/12/2017 04:55 AM, Jarek wrote:

Hello!

It looks that SQLAlchemy doesn't properly handle union with limits in
the following scenario:

res1 = Session.query( Messages ).order_by( Messages.ts ).limit(100)
res2 = Session.query( Messages1 ).order_by( Messages1.ts ).limit(100)
res3 = res1.union_all( res2 )

SQLAlchemy creates the following final query:

SELECT  FROM Messages order by ts limit 100
   UNION ALL SELECT  FROM Messages1 order by ts limit 100



that's not at all what I get and that's not how Query.union_all() does 
it, it wraps the inner things inside a subquery, works fine.  MCVE:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)


s = Session(e)

A1 = aliased(A)
res1 = s.query(A).order_by(A.id).limit(100)
res2 = s.query(A1).order_by(A1.id).limit(100)
res3 = res1.union_all(res2)

res3.all()


query:

SELECT anon_1.a_id AS anon_1_a_id
FROM ((SELECT a.id AS a_id
FROM a ORDER BY a.id
 LIMIT %(param_1)s) UNION ALL (SELECT a_1.id AS a_1_id
FROM a AS a_1 ORDER BY a_1.id
 LIMIT %(param_2)s)) AS anon_1



please provide accurate details, thanks





Which fails with:

ProgrammingError: (psycopg2.ProgrammingError) syntax error at or near
"UNION"

To fix this, both queries should be enclosed in parenthesis:

(SELECT  FROM Messages order by ts limit 100 )
   UNION ALL ( SELECT  FROM Messages1 order by ts limit 100 )

Best regards
Jarek






--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] BUG in union with limit in PostgreSQL

2017-05-12 Thread Jarek
Hello!

It looks that SQLAlchemy doesn't properly handle union with limits in
the following scenario:

res1 = Session.query( Messages ).order_by( Messages.ts ).limit(100)
res2 = Session.query( Messages1 ).order_by( Messages1.ts ).limit(100)
res3 = res1.union_all( res2 )

SQLAlchemy creates the following final query:

SELECT  FROM Messages order by ts limit 100 
  UNION ALL SELECT  FROM Messages1 order by ts limit 100

Which fails with:

ProgrammingError: (psycopg2.ProgrammingError) syntax error at or near
"UNION"

To fix this, both queries should be enclosed in parenthesis:

(SELECT  FROM Messages order by ts limit 100 )
  UNION ALL ( SELECT  FROM Messages1 order by ts limit 100 )

Best regards
Jarek




-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-05-04 Thread mike bayer
there's a long term issue to get around to handling all of PG's syntaxes 
fully at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs. 
   Current target is 1.3 because it will be a huge effort, and it may 
continue moving out milestones.


On 05/04/2017 05:17 AM, Darin Gordon wrote:
I'm working on another json query and thought I'd circle back around to 
see whether I'd have to continue using the original "as_row" recipe.  
Not sure whether Lukas was correct about sharing a similar issue as the 
one I originally raised.


On Thursday, March 16, 2017 at 10:24:51 AM UTC-4, Mike Bayer wrote:

I put up

https://bitbucket.org/zzzeek/sqlalchemy/issues/3939/alias-as-column-expr-needs-tweak-to



with the specific Alias issue but if you can add context what the end
goal is that would be helpful, thanks.



On 03/16/2017 09:56 AM, Lukas Siemon wrote:
 > Makes perfect sense. I'll post a complete bug report in the
tracker later today (it's early morning here atm).
 >

--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and 
Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
description.

---
You received this message because you are subscribed to the Google 
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send 
an email to sqlalchemy+unsubscr...@googlegroups.com 
.
To post to this group, send email to sqlalchemy@googlegroups.com 
.

Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-05-04 Thread Darin Gordon
I'm working on another json query and thought I'd circle back around to see 
whether I'd have to continue using the original "as_row" recipe.  Not sure 
whether Lukas was correct about sharing a similar issue as the one I 
originally raised.  

On Thursday, March 16, 2017 at 10:24:51 AM UTC-4, Mike Bayer wrote:
>
> I put up 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3939/alias-as-column-expr-needs-tweak-to
>  
> with the specific Alias issue but if you can add context what the end 
> goal is that would be helpful, thanks. 
>
>
>
> On 03/16/2017 09:56 AM, Lukas Siemon wrote: 
> > Makes perfect sense. I'll post a complete bug report in the tracker 
> later today (it's early morning here atm). 
> > 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-16 Thread mike bayer
I put up 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3939/alias-as-column-expr-needs-tweak-to 
with the specific Alias issue but if you can add context what the end 
goal is that would be helpful, thanks.




On 03/16/2017 09:56 AM, Lukas Siemon wrote:

Makes perfect sense. I'll post a complete bug report in the tracker later today 
(it's early morning here atm).



--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-16 Thread Lukas Siemon
Makes perfect sense. I'll post a complete bug report in the tracker later today 
(it's early morning here atm).

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-16 Thread mike bayer

see here's what I don't get, hence why complete test case v helpful:

if i fix self_group() (which yes is a bug, I can fix that), this case 
still fails, because the alias() has no "type":


from sqlalchemy import *

t = table('t', column('x'))

expr = func.array_agg(select([t]).alias())

stmt = select([expr])

print stmt


if I do a plain function like func.XYZ(...) then I get nonsensical SQL 
in the output.



I fix way too many issues per week/month/year to remember the specifics 
of this area so a quick test of what someone is trying to do is always 
extremely helpful.




On 03/16/2017 09:26 AM, mike bayer wrote:

func.XYZ()?   I can look into that.   ideally you'd be doing
func.XYZ(selectable.as_scalar()), but letting alias work there is fine.



On 03/15/2017 08:31 PM, Lukas Siemon wrote:

Monkey patching seems to do the trick:

# Patch alias self_group kwargs
def patched_alias_self_group(self, target=None, **kwargs):
return original_alias_self_group(self, target=target)
original_alias_self_group = Alias.self_group
Alias.self_group = patched_alias_self_group


On Wednesday, 15 March 2017 16:53:34 UTC-7, Lukas Siemon wrote:

Was the underlying issue ever resolved?

Running into the same error here, but my query generation is
automatic, so taking it apart will take a little longer...

On Thursday, 17 November 2016 07:57:56 UTC-8, Mike Bayer wrote:

Hi Darin -

That's definitely a bug because all self_group() methods are
supposed to
at least have a **kw to let that argument pass in.   I located
the one
method that has this issue which is the self_group() method of
Alias.
However, this also suggests that an Alias object is being used
in a
columnar-context which is not actually what you want, an Alias
is only
good for a FROM clause.

Looking at your code where I think it's going wrong is:


 func.row_to_json(stmt2)

because stmt2 is an alias(), and func.xyz ()
expects a columnar expression.

We are dealing here with Postgresql's very awkward SQL
extensions to
functions.  There's a wide variety of these that are to be
supported as
part of

https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs


,


though this case seems to be yet another case.   I've added a
recipe for
this as example three on the "pinned" response:


from sqlalchemy.sql import Alias, ColumnElement
from sqlalchemy.ext.compiler import compiles


class as_row(ColumnElement):
 def __init__(self, expr):
 assert isinstance(expr, Alias)
 self.expr = expr


@compiles(as_row)
def _gen_as_row(element, compiler, **kw):
 return compiler.visit_alias(element.expr, ashint=True, **kw)


if __name__ == '__main__':
 from sqlalchemy import Column, Integer, create_engine, func
 from sqlalchemy.orm import Session
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class A(Base):
 __tablename__ = 'a'
 id = Column(Integer, primary_key=True)
 x = Column(Integer)
 y = Column(Integer)

 e =
create_engine("postgresql://scott:tiger@localhost/test",
echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)

 s = Session(e)
 s.add_all([
 A(x=1, y=2),
 A(x=5, y=4)
 ])
 s.commit()

 subq = s.query(A).subquery()

 print
s.query(func.row_to_json(as_row(subq))).select_from(subq).all()






On 11/17/2016 07:25 AM, Darin Gordon wrote:
> Using:  postgres 9.6  and latest sqlalchemy 1.1.4
>
> I've been trying to port a query to a sqlalchemy query but
have gotten
> an exception about an unrecognized keyword arg, which
> confuses me as to whether the issue is my code or a bug in
sqlalchemy:
>  TypeError: self_group() got an unexpected keyword argument
'against'
>
>
>
> I've been trying to port the following query, which works in
psql, to a
> sqlalchemy query:
>
>

---


>
> select domain, json_agg(parts) as permissions from
> (select domain, row_to_json(r) as parts from
> (select domain, action, 

Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-16 Thread mike bayer
func.XYZ()?   I can look into that.   ideally you'd be doing 
func.XYZ(selectable.as_scalar()), but letting alias work there is fine.




On 03/15/2017 08:31 PM, Lukas Siemon wrote:

Monkey patching seems to do the trick:

# Patch alias self_group kwargs
def patched_alias_self_group(self, target=None, **kwargs):
return original_alias_self_group(self, target=target)
original_alias_self_group = Alias.self_group
Alias.self_group = patched_alias_self_group


On Wednesday, 15 March 2017 16:53:34 UTC-7, Lukas Siemon wrote:

Was the underlying issue ever resolved?

Running into the same error here, but my query generation is
automatic, so taking it apart will take a little longer...

On Thursday, 17 November 2016 07:57:56 UTC-8, Mike Bayer wrote:

Hi Darin -

That's definitely a bug because all self_group() methods are
supposed to
at least have a **kw to let that argument pass in.   I located
the one
method that has this issue which is the self_group() method of
Alias.
However, this also suggests that an Alias object is being used in a
columnar-context which is not actually what you want, an Alias
is only
good for a FROM clause.

Looking at your code where I think it's going wrong is:


 func.row_to_json(stmt2)

because stmt2 is an alias(), and func.xyz ()
expects a columnar expression.

We are dealing here with Postgresql's very awkward SQL
extensions to
functions.  There's a wide variety of these that are to be
supported as
part of

https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs

,

though this case seems to be yet another case.   I've added a
recipe for
this as example three on the "pinned" response:


from sqlalchemy.sql import Alias, ColumnElement
from sqlalchemy.ext.compiler import compiles


class as_row(ColumnElement):
 def __init__(self, expr):
 assert isinstance(expr, Alias)
 self.expr = expr


@compiles(as_row)
def _gen_as_row(element, compiler, **kw):
 return compiler.visit_alias(element.expr, ashint=True, **kw)


if __name__ == '__main__':
 from sqlalchemy import Column, Integer, create_engine, func
 from sqlalchemy.orm import Session
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class A(Base):
 __tablename__ = 'a'
 id = Column(Integer, primary_key=True)
 x = Column(Integer)
 y = Column(Integer)

 e =
create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)

 s = Session(e)
 s.add_all([
 A(x=1, y=2),
 A(x=5, y=4)
 ])
 s.commit()

 subq = s.query(A).subquery()

 print
s.query(func.row_to_json(as_row(subq))).select_from(subq).all()






On 11/17/2016 07:25 AM, Darin Gordon wrote:
> Using:  postgres 9.6  and latest sqlalchemy 1.1.4
>
> I've been trying to port a query to a sqlalchemy query but
have gotten
> an exception about an unrecognized keyword arg, which
> confuses me as to whether the issue is my code or a bug in
sqlalchemy:
>  TypeError: self_group() got an unexpected keyword argument
'against'
>
>
>
> I've been trying to port the following query, which works in
psql, to a
> sqlalchemy query:
>
>

---

>
> select domain, json_agg(parts) as permissions from
> (select domain, row_to_json(r) as parts from
> (select domain, action, array_agg(distinct
target)
> as target from
> (select (case when domain is null then
'*' else
> domain end) as domain,
> (case when target is null then
'*' else
> target end) as target,
> array_agg(distinct (case when
action is
> null then '*' else action end)) as action
>from permission
>   group by domain, target
>  ) x
> 

Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-16 Thread mike bayer

no idea, may I have a complete test case please?



On 03/15/2017 07:53 PM, Lukas Siemon wrote:

Was the underlying issue ever resolved?

Running into the same error here, but my query generation is automatic,
so taking it apart will take a little longer...

On Thursday, 17 November 2016 07:57:56 UTC-8, Mike Bayer wrote:

Hi Darin -

That's definitely a bug because all self_group() methods are
supposed to
at least have a **kw to let that argument pass in.   I located the one
method that has this issue which is the self_group() method of Alias.
However, this also suggests that an Alias object is being used in a
columnar-context which is not actually what you want, an Alias is only
good for a FROM clause.

Looking at your code where I think it's going wrong is:


 func.row_to_json(stmt2)

because stmt2 is an alias(), and func.xyz ()
expects a columnar expression.

We are dealing here with Postgresql's very awkward SQL extensions to
functions.  There's a wide variety of these that are to be supported as
part of

https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs

,

though this case seems to be yet another case.   I've added a recipe
for
this as example three on the "pinned" response:


from sqlalchemy.sql import Alias, ColumnElement
from sqlalchemy.ext.compiler import compiles


class as_row(ColumnElement):
 def __init__(self, expr):
 assert isinstance(expr, Alias)
 self.expr = expr


@compiles(as_row)
def _gen_as_row(element, compiler, **kw):
 return compiler.visit_alias(element.expr, ashint=True, **kw)


if __name__ == '__main__':
 from sqlalchemy import Column, Integer, create_engine, func
 from sqlalchemy.orm import Session
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class A(Base):
 __tablename__ = 'a'
 id = Column(Integer, primary_key=True)
 x = Column(Integer)
 y = Column(Integer)

 e = create_engine("postgresql://scott:tiger@localhost/test",
echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)

 s = Session(e)
 s.add_all([
 A(x=1, y=2),
 A(x=5, y=4)
 ])
 s.commit()

 subq = s.query(A).subquery()

 print
s.query(func.row_to_json(as_row(subq))).select_from(subq).all()






On 11/17/2016 07:25 AM, Darin Gordon wrote:
> Using:  postgres 9.6  and latest sqlalchemy 1.1.4
>
> I've been trying to port a query to a sqlalchemy query but have
gotten
> an exception about an unrecognized keyword arg, which
> confuses me as to whether the issue is my code or a bug in
sqlalchemy:
>  TypeError: self_group() got an unexpected keyword argument 'against'
>
>
>
> I've been trying to port the following query, which works in psql,
to a
> sqlalchemy query:
>
>

---

>
> select domain, json_agg(parts) as permissions from
> (select domain, row_to_json(r) as parts from
> (select domain, action, array_agg(distinct
target)
> as target from
> (select (case when domain is null then '*'
else
> domain end) as domain,
> (case when target is null then '*'
else
> target end) as target,
> array_agg(distinct (case when
action is
> null then '*' else action end)) as action
>from permission
>   group by domain, target
>  ) x
>   group by domain, action)
>   r) parts
> group by domain;
>
>

---

>
> The following sqlalchemy query raises the exception:
>
> thedomain = case([(Domain.name == None, '*')], else_=Domain.name)
> theaction = case([(Action.name == None, '*')],
else_=Action.name)
> theresource = case([(Resource.name == None, '*')],
> else_=Resource.name)
>
> action_agg = func.array_agg(theaction.distinct())
>
> stmt1 = (
> session.query(thedomain.label('domain'),
>   

Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-15 Thread Lukas Siemon
Monkey patching seems to do the trick:

# Patch alias self_group kwargs
def patched_alias_self_group(self, target=None, **kwargs):
return original_alias_self_group(self, target=target)
original_alias_self_group = Alias.self_group
Alias.self_group = patched_alias_self_group


On Wednesday, 15 March 2017 16:53:34 UTC-7, Lukas Siemon wrote:
>
> Was the underlying issue ever resolved?
>
> Running into the same error here, but my query generation is automatic, so 
> taking it apart will take a little longer...
>
> On Thursday, 17 November 2016 07:57:56 UTC-8, Mike Bayer wrote:
>>
>> Hi Darin - 
>>
>> That's definitely a bug because all self_group() methods are supposed to 
>> at least have a **kw to let that argument pass in.   I located the one 
>> method that has this issue which is the self_group() method of Alias. 
>> However, this also suggests that an Alias object is being used in a 
>> columnar-context which is not actually what you want, an Alias is only 
>> good for a FROM clause. 
>>
>> Looking at your code where I think it's going wrong is: 
>>
>>
>>  func.row_to_json(stmt2) 
>>
>> because stmt2 is an alias(), and func.xyz() expects a columnar 
>> expression. 
>>
>> We are dealing here with Postgresql's very awkward SQL extensions to 
>> functions.  There's a wide variety of these that are to be supported as 
>> part of 
>>
>> https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs,
>>  
>>
>> though this case seems to be yet another case.   I've added a recipe for 
>> this as example three on the "pinned" response: 
>>
>>
>> from sqlalchemy.sql import Alias, ColumnElement 
>> from sqlalchemy.ext.compiler import compiles 
>>
>>
>> class as_row(ColumnElement): 
>>  def __init__(self, expr): 
>>  assert isinstance(expr, Alias) 
>>  self.expr = expr 
>>
>>
>> @compiles(as_row) 
>> def _gen_as_row(element, compiler, **kw): 
>>  return compiler.visit_alias(element.expr, ashint=True, **kw) 
>>
>>
>> if __name__ == '__main__': 
>>  from sqlalchemy import Column, Integer, create_engine, func 
>>  from sqlalchemy.orm import Session 
>>  from sqlalchemy.ext.declarative import declarative_base 
>>
>>  Base = declarative_base() 
>>
>>  class A(Base): 
>>  __tablename__ = 'a' 
>>  id = Column(Integer, primary_key=True) 
>>  x = Column(Integer) 
>>  y = Column(Integer) 
>>
>>  e = create_engine("postgresql://scott:tiger@localhost/test", 
>> echo=True) 
>>  Base.metadata.drop_all(e) 
>>  Base.metadata.create_all(e) 
>>
>>  s = Session(e) 
>>  s.add_all([ 
>>  A(x=1, y=2), 
>>  A(x=5, y=4) 
>>  ]) 
>>  s.commit() 
>>
>>  subq = s.query(A).subquery() 
>>
>>  print 
>> s.query(func.row_to_json(as_row(subq))).select_from(subq).all() 
>>
>>
>>
>>
>>
>>
>> On 11/17/2016 07:25 AM, Darin Gordon wrote: 
>> > Using:  postgres 9.6  and latest sqlalchemy 1.1.4 
>> > 
>> > I've been trying to port a query to a sqlalchemy query but have gotten 
>> > an exception about an unrecognized keyword arg, which 
>> > confuses me as to whether the issue is my code or a bug in sqlalchemy: 
>> >  TypeError: self_group() got an unexpected keyword argument 'against' 
>> > 
>> > 
>> > 
>> > I've been trying to port the following query, which works in psql, to a 
>> > sqlalchemy query: 
>> > 
>> > 
>> ---
>>  
>>
>> > 
>> > select domain, json_agg(parts) as permissions from 
>> > (select domain, row_to_json(r) as parts from 
>> > (select domain, action, array_agg(distinct target) 
>> > as target from 
>> > (select (case when domain is null then '*' else 
>> > domain end) as domain, 
>> > (case when target is null then '*' else 
>> > target end) as target, 
>> > array_agg(distinct (case when action is 
>> > null then '*' else action end)) as action 
>> >from permission 
>> >   group by domain, target 
>> >  ) x 
>> >   group by domain, action) 
>> >   r) parts 
>> > group by domain; 
>> > 
>> > 
>> ---
>>  
>>
>> > 
>> > The following sqlalchemy query raises the exception: 
>> > 
>> > thedomain = case([(Domain.name == None, '*')], else_=Domain.name) 
>> > theaction = case([(Action.name == None, '*')], 
>> else_=Action.name) 
>> > theresource = case([(Resource.name == None, '*')], 
>> > else_=Resource.name) 
>> > 
>> > 

Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2017-03-15 Thread Lukas Siemon
Was the underlying issue ever resolved?

Running into the same error here, but my query generation is automatic, so 
taking it apart will take a little longer...

On Thursday, 17 November 2016 07:57:56 UTC-8, Mike Bayer wrote:
>
> Hi Darin - 
>
> That's definitely a bug because all self_group() methods are supposed to 
> at least have a **kw to let that argument pass in.   I located the one 
> method that has this issue which is the self_group() method of Alias. 
> However, this also suggests that an Alias object is being used in a 
> columnar-context which is not actually what you want, an Alias is only 
> good for a FROM clause. 
>
> Looking at your code where I think it's going wrong is: 
>
>
>  func.row_to_json(stmt2) 
>
> because stmt2 is an alias(), and func.xyz() expects a columnar 
> expression. 
>
> We are dealing here with Postgresql's very awkward SQL extensions to 
> functions.  There's a wide variety of these that are to be supported as 
> part of 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs,
>  
>
> though this case seems to be yet another case.   I've added a recipe for 
> this as example three on the "pinned" response: 
>
>
> from sqlalchemy.sql import Alias, ColumnElement 
> from sqlalchemy.ext.compiler import compiles 
>
>
> class as_row(ColumnElement): 
>  def __init__(self, expr): 
>  assert isinstance(expr, Alias) 
>  self.expr = expr 
>
>
> @compiles(as_row) 
> def _gen_as_row(element, compiler, **kw): 
>  return compiler.visit_alias(element.expr, ashint=True, **kw) 
>
>
> if __name__ == '__main__': 
>  from sqlalchemy import Column, Integer, create_engine, func 
>  from sqlalchemy.orm import Session 
>  from sqlalchemy.ext.declarative import declarative_base 
>
>  Base = declarative_base() 
>
>  class A(Base): 
>  __tablename__ = 'a' 
>  id = Column(Integer, primary_key=True) 
>  x = Column(Integer) 
>  y = Column(Integer) 
>
>  e = create_engine("postgresql://scott:tiger@localhost/test", 
> echo=True) 
>  Base.metadata.drop_all(e) 
>  Base.metadata.create_all(e) 
>
>  s = Session(e) 
>  s.add_all([ 
>  A(x=1, y=2), 
>  A(x=5, y=4) 
>  ]) 
>  s.commit() 
>
>  subq = s.query(A).subquery() 
>
>  print s.query(func.row_to_json(as_row(subq))).select_from(subq).all() 
>
>
>
>
>
>
> On 11/17/2016 07:25 AM, Darin Gordon wrote: 
> > Using:  postgres 9.6  and latest sqlalchemy 1.1.4 
> > 
> > I've been trying to port a query to a sqlalchemy query but have gotten 
> > an exception about an unrecognized keyword arg, which 
> > confuses me as to whether the issue is my code or a bug in sqlalchemy: 
> >  TypeError: self_group() got an unexpected keyword argument 'against' 
> > 
> > 
> > 
> > I've been trying to port the following query, which works in psql, to a 
> > sqlalchemy query: 
> > 
> > 
> ---
>  
>
> > 
> > select domain, json_agg(parts) as permissions from 
> > (select domain, row_to_json(r) as parts from 
> > (select domain, action, array_agg(distinct target) 
> > as target from 
> > (select (case when domain is null then '*' else 
> > domain end) as domain, 
> > (case when target is null then '*' else 
> > target end) as target, 
> > array_agg(distinct (case when action is 
> > null then '*' else action end)) as action 
> >from permission 
> >   group by domain, target 
> >  ) x 
> >   group by domain, action) 
> >   r) parts 
> > group by domain; 
> > 
> > 
> ---
>  
>
> > 
> > The following sqlalchemy query raises the exception: 
> > 
> > thedomain = case([(Domain.name == None, '*')], else_=Domain.name) 
> > theaction = case([(Action.name == None, '*')], 
> else_=Action.name) 
> > theresource = case([(Resource.name == None, '*')], 
> > else_=Resource.name) 
> > 
> > action_agg = func.array_agg(theaction.distinct()) 
> > 
> > stmt1 = ( 
> > session.query(thedomain.label('domain'), 
> >   theresource.label('resource'), 
> >   action_agg.label('action')). 
> > select_from(User). 
> > join(role_membership_table, User.pk_id == 
> > role_membership_table.c.user_id). 
> > join(role_permission_table, role_membership_table.c.role_id 
> > == 

[sqlalchemy] Re: struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2016-11-17 Thread Darin Gordon
The as_row recipe worked.  Thanks for the quick response, Mike.



On Thursday, November 17, 2016 at 7:25:07 AM UTC-5, Darin Gordon wrote:
>
> Using:  postgres 9.6  and latest sqlalchemy 1.1.4
>
> I've been trying to port a query to a sqlalchemy query but have gotten an 
> exception about an unrecognized keyword arg, which
> confuses me as to whether the issue is my code or a bug in sqlalchemy: 
>  TypeError: self_group() got an unexpected keyword argument 'against'
>
>
>
> I've been trying to port the following query, which works in psql, to a 
> sqlalchemy query:
>
>
> ---
>
> select domain, json_agg(parts) as permissions from
> (select domain, row_to_json(r) as parts from
> (select domain, action, array_agg(distinct target) as 
> target from
> (select (case when domain is null then '*' else 
> domain end) as domain,
> (case when target is null then '*' else 
> target end) as target,
> array_agg(distinct (case when action is 
> null then '*' else action end)) as action
>from permission
>   group by domain, target
>  ) x
>   group by domain, action)
>   r) parts
> group by domain;
>
>
> ---
>
> The following sqlalchemy query raises the exception:
>
> thedomain = case([(Domain.name == None, '*')], else_=Domain.name)
> theaction = case([(Action.name == None, '*')], else_=Action.name)
> theresource = case([(Resource.name == None, '*')], 
> else_=Resource.name)
>
> action_agg = func.array_agg(theaction.distinct())
>
> stmt1 = (
> session.query(thedomain.label('domain'),
>   theresource.label('resource'),
>   action_agg.label('action')).
> select_from(User).
> join(role_membership_table, User.pk_id == 
> role_membership_table.c.user_id).
> join(role_permission_table, role_membership_table.c.role_id == 
> role_permission_table.c.role_id).
> join(Permission, role_permission_table.c.permission_id == 
> Permission.pk_id).
> outerjoin(Domain, Permission.domain_id == Domain.pk_id).
> outerjoin(Action, Permission.action_id == Action.pk_id).
> outerjoin(Resource, Permission.resource_id == Resource.pk_id).
> filter(User.identifier == identifier).
> group_by(Permission.domain_id, 
> Permission.resource_id)).subquery()
>
> stmt2 = (session.query(stmt1.c.domain,
>stmt1.c.action,
>   
>  func.array_agg(stmt1.c.resource.distinct())).
>  select_from(stmt1).
>  group_by(stmt1.c.domain, stmt1.c.action)).subquery()
>
> stmt3 = (session.query(stmt2.c.domain,
>func.row_to_json(stmt2)).
>  select_from(stmt2)).subquery()
>
> final = (session.query(stmt3.c.domain, func.json_agg(stmt3)).
>  select_from(stmt3).
>  group_by(stmt3.c.domain))
>
>
> ---
>
> Here's the exception trace:
>
> ./yosai_alchemystore/accountstore/accountstore.py in 
> _get_indexed_permissions_query(self, session, identifier)
> 156 
> 157 stmt3 = (session.query(stmt2.c.domain,
> --> 158func.row_to_json(stmt2)).
> 159  select_from(stmt2)).subquery()
> 160 
>
> ./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
> __call__(self, *c, **kwargs)
> 322 
> 323 return Function(self.__names[-1],
> --> 324 packagenames=self.__names[0:-1], *c, **o)
> 325 
> 326 
>
> ./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
> __init__(self, name, *clauses, **kw)
> 432 self.type = sqltypes.to_instance(kw.get('type_', None))
> 433 
> --> 434 FunctionElement.__init__(self, *clauses, **kw)
> 435 
> 436 def _bind_param(self, operator, obj, type_=None):
>
> ./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
> __init__(self, *clauses, **kwargs)
>  60 self.clause_expr = ClauseList(
>  61 operator=operators.comma_op,
> ---> 62 

Re: [sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2016-11-17 Thread mike bayer

Hi Darin -

That's definitely a bug because all self_group() methods are supposed to 
at least have a **kw to let that argument pass in.   I located the one 
method that has this issue which is the self_group() method of Alias. 
However, this also suggests that an Alias object is being used in a 
columnar-context which is not actually what you want, an Alias is only 
good for a FROM clause.


Looking at your code where I think it's going wrong is:


func.row_to_json(stmt2)

because stmt2 is an alias(), and func.xyz() expects a columnar expression.

We are dealing here with Postgresql's very awkward SQL extensions to 
functions.  There's a wide variety of these that are to be supported as 
part of 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs, 
though this case seems to be yet another case.   I've added a recipe for 
this as example three on the "pinned" response:



from sqlalchemy.sql import Alias, ColumnElement
from sqlalchemy.ext.compiler import compiles


class as_row(ColumnElement):
def __init__(self, expr):
assert isinstance(expr, Alias)
self.expr = expr


@compiles(as_row)
def _gen_as_row(element, compiler, **kw):
return compiler.visit_alias(element.expr, ashint=True, **kw)


if __name__ == '__main__':
from sqlalchemy import Column, Integer, create_engine, func
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
x = Column(Integer)
y = Column(Integer)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.drop_all(e)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([
A(x=1, y=2),
A(x=5, y=4)
])
s.commit()

subq = s.query(A).subquery()

print s.query(func.row_to_json(as_row(subq))).select_from(subq).all()






On 11/17/2016 07:25 AM, Darin Gordon wrote:

Using:  postgres 9.6  and latest sqlalchemy 1.1.4

I've been trying to port a query to a sqlalchemy query but have gotten
an exception about an unrecognized keyword arg, which
confuses me as to whether the issue is my code or a bug in sqlalchemy:
 TypeError: self_group() got an unexpected keyword argument 'against'



I've been trying to port the following query, which works in psql, to a
sqlalchemy query:

---

select domain, json_agg(parts) as permissions from
(select domain, row_to_json(r) as parts from
(select domain, action, array_agg(distinct target)
as target from
(select (case when domain is null then '*' else
domain end) as domain,
(case when target is null then '*' else
target end) as target,
array_agg(distinct (case when action is
null then '*' else action end)) as action
   from permission
  group by domain, target
 ) x
  group by domain, action)
  r) parts
group by domain;

---

The following sqlalchemy query raises the exception:

thedomain = case([(Domain.name == None, '*')], else_=Domain.name)
theaction = case([(Action.name == None, '*')], else_=Action.name)
theresource = case([(Resource.name == None, '*')],
else_=Resource.name)

action_agg = func.array_agg(theaction.distinct())

stmt1 = (
session.query(thedomain.label('domain'),
  theresource.label('resource'),
  action_agg.label('action')).
select_from(User).
join(role_membership_table, User.pk_id ==
role_membership_table.c.user_id).
join(role_permission_table, role_membership_table.c.role_id
== role_permission_table.c.role_id).
join(Permission, role_permission_table.c.permission_id ==
Permission.pk_id).
outerjoin(Domain, Permission.domain_id == Domain.pk_id).
outerjoin(Action, Permission.action_id == Action.pk_id).
outerjoin(Resource, Permission.resource_id == Resource.pk_id).
filter(User.identifier == identifier).
group_by(Permission.domain_id,
Permission.resource_id)).subquery()

stmt2 = (session.query(stmt1.c.domain,
   stmt1.c.action,
   func.array_agg(stmt1.c.resource.distinct())).
 select_from(stmt1).
 

[sqlalchemy] struggling with postgres json related query using sqlalchemy -- sqlalchemy bug or issue with query?

2016-11-17 Thread Darin Gordon
Using:  postgres 9.6  and latest sqlalchemy 1.1.4

I've been trying to port a query to a sqlalchemy query but have gotten an 
exception about an unrecognized keyword arg, which
confuses me as to whether the issue is my code or a bug in sqlalchemy: 
 TypeError: self_group() got an unexpected keyword argument 'against'



I've been trying to port the following query, which works in psql, to a 
sqlalchemy query:

---

select domain, json_agg(parts) as permissions from
(select domain, row_to_json(r) as parts from
(select domain, action, array_agg(distinct target) as 
target from
(select (case when domain is null then '*' else 
domain end) as domain,
(case when target is null then '*' else 
target end) as target,
array_agg(distinct (case when action is 
null then '*' else action end)) as action
   from permission
  group by domain, target
 ) x
  group by domain, action)
  r) parts
group by domain;

---

The following sqlalchemy query raises the exception:

thedomain = case([(Domain.name == None, '*')], else_=Domain.name)
theaction = case([(Action.name == None, '*')], else_=Action.name)
theresource = case([(Resource.name == None, '*')], 
else_=Resource.name)

action_agg = func.array_agg(theaction.distinct())

stmt1 = (
session.query(thedomain.label('domain'),
  theresource.label('resource'),
  action_agg.label('action')).
select_from(User).
join(role_membership_table, User.pk_id == 
role_membership_table.c.user_id).
join(role_permission_table, role_membership_table.c.role_id == 
role_permission_table.c.role_id).
join(Permission, role_permission_table.c.permission_id == 
Permission.pk_id).
outerjoin(Domain, Permission.domain_id == Domain.pk_id).
outerjoin(Action, Permission.action_id == Action.pk_id).
outerjoin(Resource, Permission.resource_id == Resource.pk_id).
filter(User.identifier == identifier).
group_by(Permission.domain_id, 
Permission.resource_id)).subquery()

stmt2 = (session.query(stmt1.c.domain,
   stmt1.c.action,
   func.array_agg(stmt1.c.resource.distinct())).
 select_from(stmt1).
 group_by(stmt1.c.domain, stmt1.c.action)).subquery()

stmt3 = (session.query(stmt2.c.domain,
   func.row_to_json(stmt2)).
 select_from(stmt2)).subquery()

final = (session.query(stmt3.c.domain, func.json_agg(stmt3)).
 select_from(stmt3).
 group_by(stmt3.c.domain))

---

Here's the exception trace:

./yosai_alchemystore/accountstore/accountstore.py in 
_get_indexed_permissions_query(self, session, identifier)
156 
157 stmt3 = (session.query(stmt2.c.domain,
--> 158func.row_to_json(stmt2)).
159  select_from(stmt2)).subquery()
160 

./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
__call__(self, *c, **kwargs)
322 
323 return Function(self.__names[-1],
--> 324 packagenames=self.__names[0:-1], *c, **o)
325 
326 

./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
__init__(self, name, *clauses, **kw)
432 self.type = sqltypes.to_instance(kw.get('type_', None))
433 
--> 434 FunctionElement.__init__(self, *clauses, **kw)
435 
436 def _bind_param(self, operator, obj, type_=None):

./lib/python3.5/site-packages/sqlalchemy/sql/functions.py in 
__init__(self, *clauses, **kwargs)
 60 self.clause_expr = ClauseList(
 61 operator=operators.comma_op,
---> 62 group_contents=True, *args).\
 63 self_group()
 64 

./lib/python3.5/site-packages/sqlalchemy/sql/elements.py in 
__init__(self, *clauses, **kwargs)
   1783 self.clauses = [
   1784 
text_converter(clause).self_group(against=self.operator)
-> 1785 for clause in clauses]
   1786   

Re: [sqlalchemy] Bug with bidirectional association proxy many-to-many relationships reintroduced?

2016-08-24 Thread Mike Bayer



On 08/24/2016 03:09 PM, Gordan Todorovac wrote:

Mike,



python -i ~/python/sq3.py

from sqlalchemy import create_engine
from sqlalchemy.orm import Session
e = create_engine("sqlite:///:memory:")
Base.metadata.drop_all(e)
Base.metadata.create_all(e)
session=Session(e)
rory=User("rory")
session.add(rory)
chicken=Keyword("chicken")
session.add(chicken)
rory.keywords.append(chicken)
rory.keywords

[Keyword('chicken')]

chicken.users

[<__main__.User object at 0x7f82dff87390>]



Here, nothing is flushed yet.  UserKeyword is a pending object and has 
not been inserted.




rory.keywords.remove(chicken)


Here, you've removed from User.user_keywords, marked the UserKeyword as 
deleted, and since it was never inserted, it's bumped out of the 
session.



rory.keywords

[]


rory.user_keywords is empty.


chicken.users

[None]


chicken.user_keywords still has the UserKeyword object stuck in it, the 
UserKeyword has no User attached to it, so the user name is None. 
There is no relationship between UserKeyword.keyword and 
UserKeyword.user, so you'd need to wait for the session to be expired 
and have this totally unrelated relationship load again, or perhaps use 
attribute events of your own to synchronize these ahead of re-loading.





session.flush()

/hostname/multiacl2/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py:235:
SAWarning: Object of type  not in session, add operation
along 'Keyword.user_keywords' will not proceed


this warning is because this is an odd situation where you've added a 
UserKeyword as pending, never flushed it, and have now removed it from 
the session, but it's still sitting inside of a collection.




chicken.user_keywords

[<__main__.UserKeyword object at 0x7f82dfa6ded0>]


expected, same reasons above


chicken.user_keywords[0].user
chicken.user_keywords[0].keyword

Keyword('chicken')


expected, same reason above.

type "session.commit()".  Now everything is what you'd expect.

This is also explained in my answer at 
http://stackoverflow.com/a/14471166/34549.


The bug explained in that answer is that prior to 0.8, the UserKeyword 
object in question *would still be in the Session*, and since it's 
"user" is None, you get a NULL error - the original poster reported: 
"Causes an integrity error as SA tries to set one of the foreign key 
columns to null.".  That's the part that was fixed.










So it definitely seems that the association object is
half-disassociated. I have this currently worked around by setting a
validator on UserKeyword.user that removes self from
UserKeyword.keyword.user_keywords if self.user is None, but I am not
confident in this workaround.



On Wednesday, August 24, 2016 at 1:57:58 PM UTC-4, Mike Bayer wrote:



The issue mentioned in 2655 is a major behavior of the ORM, in that an
object is considered orphan if any of its relationships are non-present
in all cases, including pending and persistent, rather than if all of
them are un-present. This is not at all subtle and is covered in a
wide range of test cases.   The test case attached to the bug continues
to pass, as does the demonstration attached in the migration notes:

http://docs.sqlalchemy.org/en/latest/changelog/migration_08.html#the-consideration-of-a-pending-object-as-an-orphan-has-been-made-more-aggressive

.



>
>
> Is anyone aware whether this bug was reintroduced on purpose (i.e.
> figured out the rationale for the original behavior) or by accident?

reintroduction of a bug here would need to be demonstrated.

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug with bidirectional association proxy many-to-many relationships reintroduced?

2016-08-24 Thread Gordan Todorovac
Mike,

Thanks for the response! Here is the demonstration, please let me know if 
there is something obvious that I am missing:

> cat ~/python/sq3.py
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref

from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
name = Column(String(64))

# association proxy of "user_keywords" collection
# to "keyword" attribute
keywords = association_proxy('user_keywords', 'keyword')

def __init__(self, name):
self.name = name

class UserKeyword(Base):
__tablename__ = 'user_keyword'
user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
special_key = Column(String(50))

# bidirectional attribute/collection of "user"/"user_keywords"
user = relationship(User,
backref=backref("user_keywords",
cascade="all, delete-orphan")
)

keyword = relationship("Keyword",
backref=backref("user_keywords", cascade="all, 
delete-orphan"))

def __init__(self, keyword=None, user=None, special_key=None):
self.user = user
self.keyword = keyword
self.special_key = special_key

class Keyword(Base):
__tablename__ = 'keyword'
id = Column(Integer, primary_key=True)
keyword = Column('keyword', String(64))

users = association_proxy('user_keywords', 'user')

def __init__(self, keyword):
self.keyword = keyword

def __repr__(self):
return 'Keyword(%s)' % repr(self.keyword)

> python -i ~/python/sq3.py
>>> from sqlalchemy import create_engine
>>> from sqlalchemy.orm import Session
>>> e = create_engine("sqlite:///:memory:")
>>> Base.metadata.drop_all(e)
>>> Base.metadata.create_all(e)
>>> session=Session(e)
>>> rory=User("rory")
>>> session.add(rory)
>>> chicken=Keyword("chicken")
>>> session.add(chicken)
>>> rory.keywords.append(chicken)
>>> rory.keywords
[Keyword('chicken')]
>>> chicken.users
[<__main__.User object at 0x7f82dff87390>]
>>> rory.keywords.remove(chicken)
>>> rory.keywords
[]
>>> chicken.users
[None]
>>> session.flush()
/hostname/multiacl2/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py:235:
 
SAWarning: Object of type  not in session, add operation along 
'Keyword.user_keywords' will not proceed
  (orm_util.state_class_str(state), operation, prop))
>>> chicken.user_keywords
[<__main__.UserKeyword object at 0x7f82dfa6ded0>]
>>> chicken.user_keywords[0].user
>>> chicken.user_keywords[0].keyword
Keyword('chicken')
>>>

So it definitely seems that the association object is half-disassociated. I 
have this currently worked around by setting a validator on 
UserKeyword.user that removes self from UserKeyword.keyword.user_keywords 
if self.user is None, but I am not confident in this workaround.



On Wednesday, August 24, 2016 at 1:57:58 PM UTC-4, Mike Bayer wrote:
>
>
>
> The issue mentioned in 2655 is a major behavior of the ORM, in that an 
> object is considered orphan if any of its relationships are non-present 
> in all cases, including pending and persistent, rather than if all of 
> them are un-present. This is not at all subtle and is covered in a 
> wide range of test cases.   The test case attached to the bug continues 
> to pass, as does the demonstration attached in the migration notes: 
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_08.html#the-consideration-of-a-pending-object-as-an-orphan-has-been-made-more-aggressive.
>  
>
>  
>
> > 
> > 
> > Is anyone aware whether this bug was reintroduced on purpose (i.e. 
> > figured out the rationale for the original behavior) or by accident? 
>
> reintroduction of a bug here would need to be demonstrated. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug with bidirectional association proxy many-to-many relationships reintroduced?

2016-08-24 Thread Mike Bayer



On 08/24/2016 08:59 AM, Gordan Todorovac wrote:

Hi, all -

I am experiencing the issue described here

http://stackoverflow.com/questions/14470688/sqlalchemy-bidirectional-relationship-association-proxy

and reported as fixed here

https://bitbucket.org/zzzeek/sqlalchemy/issues/2655


in the latest release version of SQLAlchemy (1.0.14). Summary: removing
a child object from the proxied (on parent) many-to-many relationship
using an association object does not remove the association object or
the reference to it from the child.


The issue mentioned in 2655 is a major behavior of the ORM, in that an 
object is considered orphan if any of its relationships are non-present 
in all cases, including pending and persistent, rather than if all of 
them are un-present. This is not at all subtle and is covered in a 
wide range of test cases.   The test case attached to the bug continues 
to pass, as does the demonstration attached in the migration notes: 
http://docs.sqlalchemy.org/en/latest/changelog/migration_08.html#the-consideration-of-a-pending-object-as-an-orphan-has-been-made-more-aggressive.






Is anyone aware whether this bug was reintroduced on purpose (i.e.
figured out the rationale for the original behavior) or by accident?


reintroduction of a bug here would need to be demonstrated.





Many thanks,


Gordan


--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bug with bidirectional association proxy many-to-many relationships reintroduced?

2016-08-24 Thread Gordan Todorovac
Hi, all -

I am experiencing the issue described here

http://stackoverflow.com/questions/14470688/sqlalchemy-bidirectional-relationship-association-proxy

and reported as fixed here

https://bitbucket.org/zzzeek/sqlalchemy/issues/2655


in the latest release version of SQLAlchemy (1.0.14). Summary: removing a 
child object from the proxied (on parent) many-to-many relationship using 
an association object does not remove the association object or the 
reference to it from the child.


Is anyone aware whether this bug was reintroduced on purpose (i.e. figured 
out the rationale for the original behavior) or by accident?


Many thanks,


Gordan

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] bug in QueuePool with pool_size = 1?

2016-06-29 Thread Mike Bayer



On 06/29/2016 05:23 PM, Jonathon Nelson wrote:

I noticed some strange behavior with QueuePool when the pool_size is
exactly 1.
(this is with 1.0.13). With an app that basically does this:

e = sa.create_engine(..., pool_size=N)
while True:
with e.begin() as trans:
# perform a simple query

I see this behavior (observed with strace, but turning on connection
logging will do it too):

With pool_size == 1, db connections opened, used, and then closed.


I don't see that (using netstat w/ mysql, I see one TCP connection, 
stays open).  How are you detecting that db connections are "open"? 
Note that the pool also has an "overflow" that will allow it to go over 
"pool_size" if more than one connection is requested simultaneously. 
Set that to zero to get the pool to stay on exactly one connection.





With pool_size == 2, *two* database connections remain open, neither are
closed.

Something seems weird with pool_size==1. :-(

The logs:

DEBUG:sqlalchemy.pool.QueuePool:Created new connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Closing connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Created new connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Closing connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool


--
Jon Nelson
Dyn / Principal Software Engineer
p. +1 (603) 263-8029

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] bug in QueuePool with pool_size = 1?

2016-06-29 Thread Jonathon Nelson
I noticed some strange behavior with QueuePool when the pool_size is
exactly 1.
(this is with 1.0.13). With an app that basically does this:

e = sa.create_engine(..., pool_size=N)
while True:
with e.begin() as trans:
# perform a simple query

I see this behavior (observed with strace, but turning on connection
logging will do it too):

With pool_size == 1, db connections opened, used, and then closed.
With pool_size == 2, *two* database connections remain open, neither are
closed.

Something seems weird with pool_size==1. :-(

The logs:

DEBUG:sqlalchemy.pool.QueuePool:Created new connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Closing connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Created new connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Closing connection 
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool


-- 
Jon Nelson
Dyn / Principal Software Engineer
p. +1 (603) 263-8029

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug with join?

2016-06-14 Thread Drachenfels

>
> > Article has a foreign key to Video and Video has a foreign key to Tag so 
> these can be joined either way. 
>

This is a bit information I missed to be honest. I forgot that while 
Article has direct join to Video, Video can be joined to Article via Tag. I 
checked my relations so many times and I didn't notice it.
 

> If you want joins to work with relationships alone you should consider 
> those using 
> inspect(Class).relationships to find the relationships you'd like to use 
> for each linkage. 
>
 
Lucky for me, my code is not that mad. I only wrap some things to reduce 
amount of repetition, however due to wrapper work semi-autonomously not 
every request has exactly same join order. And because order of joins in 
this case fixed/caused error I was a bit confused.

Thanks a lot of shedding light on the topic. Now I can sleep lightly 
knowing that mix fix is actually fix for my own mistake.

Cheers.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug with join?

2016-06-13 Thread Mike Bayer



On 06/13/2016 01:15 PM, Drachenfels wrote:

Hi guys,

I found very strange behaviour of my script, after a lot of debugging it
appears that SQLAlchemy complains about joins that I cannot see exactly
why it has any issue with.

Full running example is below in both as inline main.py script and link.

In the nutshell, hierarchy of objects is as follows.

Article has Video, Video has Tag Series, Tag has Slug field
Article has as well Tags as m2m, each tag has Slug field.

What is wrong?

When I do

Article.join(ArticleTag).join(Tag).join(SlugTag).join(Slug).join(Video).join(Tag[Series]).join(SlugTag).join(Slug)

I will get error that SQLAlchemy cannot join Video as it's ambiguous join.


that's not totally surprising?  that's an enormous chain of joins, just 
specify the ON clause for those joins rather than making it guess.




When I Do

Article.join(Video).join(Tag[Series]).join(SlugTag).join(Slug).join(ArticleTag).join(Tag).join(SlugTag).join(Slug).

All works like a charm.

Different solution is to specify that when joining Video, onclause
should be Article.video_uid == Video.uid.

In general Article can have only one Video and nothing else can do it.
So I wonder where SQLAlchemy finds this ambiguity. Is it a bug?


It should be telling you where the ambiguity is.

Just ran your join and we get:

sqlalchemy.exc.InvalidRequestError: Could not find a FROM clause to join 
from.  Tried joining to , but got: Can't 
determine join between 'Join object on Join object on Join object on 
Join object on article(140307104294928) and 
article_tag(140307104293200)(140307104296656) and 
tag(140307104695632)(140307104296720) and 
slug_tag(140307105025936)(140307104185104) and slug(140307105024208)' 
and 'video'; tables have more than one foreign key constraint 
relationship between them. Please specify the 'onclause' of this join 
explicitly.


Article has a foriegn key to Video and Video has a foreign key to Tag so 
these can be joined either way.


If perhaps you're expecting this to be unambiguous because Tag has no 
relationship() to Video, relationship() is not used when you use 
query.join() without an ON clause.  It only uses foreign keys in this 
mode.  You would need to specify an ON clause either as a string 
relationship name (e.g. "videos") or as the attribute e.g. 
Article.videos etc.




Keep in mind this code is a small piece of bigger picture, so you might
be wondering why I have this aliasing feature. It's wrapper, that
collects all filters/joins and then compiles them removing for example
duplicated joins). I removed most of it but there might be an artefact
or two.


I haven't considered your code sample with all the right/left join 
manipulation stuff, your simple example above is explained by the 
documented behavior of Query.join.   If you want joins to work with 
relationships alone you should consider those using 
inspect(Class).relationships to find the relationships you'd like to use 
for each linkage.







Link:

http://sprunge.us/KBeM

Inline code:

# encoding: utf8

from __future__ import unicode_literals

import sys

import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Slug(Base):
__tablename__ = 'slug'

id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)

slug = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
expiration_date = sqlalchemy.Column(
sqlalchemy.DateTime(timezone=True), default=None)


class SlugTag(Base):
__tablename__ = 'slug_tag'

slug = sqlalchemy.orm.relationship(
'Slug', primaryjoin='SlugTag.slug_id == Slug.id',
cascade="all, delete-orphan", uselist=False, single_parent=True)
obj = sqlalchemy.orm.relationship(
'Tag', primaryjoin='SlugTag.obj_uid == Tag.uid',
uselist=False)

slug_id = sqlalchemy.Column(
sqlalchemy.Integer,
sqlalchemy.ForeignKey('slug.id'), nullable=False)
obj_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32),
sqlalchemy.ForeignKey('tag.uid'), nullable=False)

__table_args__ = (
sqlalchemy.PrimaryKeyConstraint(
'slug_id', 'obj_uid', name='slug_tag_pk'),
)


class Tag(Base):
__tablename__ = 'tag'

uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), primary_key=True, nullable=False)
title = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
type = sqlalchemy.Column(sqlalchemy.Unicode(255))

slugs_connector = sqlalchemy.orm.relationship(
'SlugTag', primaryjoin='Tag.uid == SlugTag.obj_uid')

__table_args__ = (
sqlalchemy.UniqueConstraint('type', 'title',
name='title_type_unique'),
)


class Video(Base):
__tablename__ = 'video'

SERIES_TYPE = 'series'

uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), primary_key=True, nullable=False)
title = sqlalchemy.Column(sqlalchemy.Unicode(255), 

[sqlalchemy] Bug with join?

2016-06-13 Thread Drachenfels
Hi guys,

I found very strange behaviour of my script, after a lot of debugging it 
appears that SQLAlchemy complains about joins that I cannot see exactly why 
it has any issue with.

Full running example is below in both as inline main.py script and link.

In the nutshell, hierarchy of objects is as follows.

Article has Video, Video has Tag Series, Tag has Slug field
Article has as well Tags as m2m, each tag has Slug field.

What is wrong?

When I do

Article.join(ArticleTag).join(Tag).join(SlugTag).join(Slug).join(Video).join(Tag[Series]).join(SlugTag).join(Slug)

I will get error that SQLAlchemy cannot join Video as it's ambiguous join.

When I Do

Article.join(Video).join(Tag[Series]).join(SlugTag).join(Slug).join(ArticleTag).join(Tag).join(SlugTag).join(Slug).

All works like a charm.

Different solution is to specify that when joining Video, onclause should 
be Article.video_uid == Video.uid.

In general Article can have only one Video and nothing else can do it. So I 
wonder where SQLAlchemy finds this ambiguity. Is it a bug?

Keep in mind this code is a small piece of bigger picture, so you might be 
wondering why I have this aliasing feature. It's wrapper, that collects all 
filters/joins and then compiles them removing for example duplicated 
joins). I removed most of it but there might be an artefact or two.



Link:

http://sprunge.us/KBeM

Inline code:

# encoding: utf8

from __future__ import unicode_literals

import sys

import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Slug(Base):
__tablename__ = 'slug'

id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)

slug = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
expiration_date = sqlalchemy.Column(
sqlalchemy.DateTime(timezone=True), default=None)


class SlugTag(Base):
__tablename__ = 'slug_tag'

slug = sqlalchemy.orm.relationship(
'Slug', primaryjoin='SlugTag.slug_id == Slug.id',
cascade="all, delete-orphan", uselist=False, single_parent=True)
obj = sqlalchemy.orm.relationship(
'Tag', primaryjoin='SlugTag.obj_uid == Tag.uid',
uselist=False)

slug_id = sqlalchemy.Column(
sqlalchemy.Integer,
sqlalchemy.ForeignKey('slug.id'), nullable=False)
obj_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32),
sqlalchemy.ForeignKey('tag.uid'), nullable=False)

__table_args__ = (
sqlalchemy.PrimaryKeyConstraint(
'slug_id', 'obj_uid', name='slug_tag_pk'),
)


class Tag(Base):
__tablename__ = 'tag'

uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), primary_key=True, nullable=False)
title = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
type = sqlalchemy.Column(sqlalchemy.Unicode(255))

slugs_connector = sqlalchemy.orm.relationship(
'SlugTag', primaryjoin='Tag.uid == SlugTag.obj_uid')

__table_args__ = (
sqlalchemy.UniqueConstraint('type', 'title', 
name='title_type_unique'),
)


class Video(Base):
__tablename__ = 'video'

SERIES_TYPE = 'series'

uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), primary_key=True, nullable=False)
title = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
body = sqlalchemy.Column(sqlalchemy.Unicode)
duration = sqlalchemy.Column(sqlalchemy.Time)

series_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), sqlalchemy.ForeignKey('tag.uid'), 
index=True,
nullable=False)

series = sqlalchemy.orm.relationship(
'Tag', primaryjoin='Video.series_uid == Tag.uid', uselist=False)


class ArticleTag(Base):
"""Represent m2m table between Tags and Article.

association obj for mapping m2m articles<->tags
"""
__tablename__ = 'article_tag'

article_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), sqlalchemy.ForeignKey('article.uid'),
primary_key=True)
tag_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), sqlalchemy.ForeignKey('tag.uid'),
primary_key=True)

tag = sqlalchemy.orm.relationship('Tag', backref='articles')


class Article(Base):
__tablename__ = 'article'

uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), primary_key=True, nullable=False)
title = sqlalchemy.Column(sqlalchemy.Unicode(255), nullable=False)
meta_description = sqlalchemy.Column(sqlalchemy.Unicode(255))
meta_title = sqlalchemy.Column(sqlalchemy.Unicode(255))

body = sqlalchemy.Column(sqlalchemy.Unicode)

video_uid = sqlalchemy.Column(
sqlalchemy.Unicode(32), sqlalchemy.ForeignKey('video.uid'),
index=True)

video = sqlalchemy.orm.relationship(
'Video', primaryjoin='Article.video_uid == Video.uid',
uselist=False, backref='articles')
article_tags = sqlalchemy.orm.relationship('ArticleTag', 
backref='article')


def 

Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-28 Thread Piotr Dobrogost
On Wednesday, April 27, 2016 at 6:41:24 PM UTC+2, Mike Bayer wrote:
>
>
>
> On 04/27/2016 11:11 AM, Mike Bayer wrote: 
> > 
> > I'm improving our test suite by ensuring that result processors are 
> > fired off for all result proxy subtypes when caching is used as well and 
> > I will ensure the line of code you mention is exercised.  If I can 
> > reproduce your described issue at that level, then the bug will be 
> > located and fixed.   I'll keep you posted. 
> > 
>
> with query caching enabled the issue is reproduced with new test cases, 
> a fix for the issue is in review via the link at 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3699/buffferedcolumnresultproxy-with.
>  
>
>

I would like to thank you Mike very much for your assistance, perseverance, 
invaluable help and the fix.
Your support is exemplar.

Best regards from users of your excellent SQLAlchemy,
Marcin Raczyński i Piotr Dobrogost from Poland

ps.
The original investigation of the issue and suggested fix was done by 
Marcin Raczyński.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-27 Thread Mike Bayer



On 04/27/2016 11:11 AM, Mike Bayer wrote:


I'm improving our test suite by ensuring that result processors are
fired off for all result proxy subtypes when caching is used as well and
I will ensure the line of code you mention is exercised.  If I can
reproduce your described issue at that level, then the bug will be
located and fixed.   I'll keep you posted.



with query caching enabled the issue is reproduced with new test cases, 
a fix for the issue is in review via the link at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3699/buffferedcolumnresultproxy-with.



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-27 Thread Mike Bayer



On 04/26/2016 12:11 PM, Piotr Dobrogost wrote:

On Tue, Apr 26, 2016 at 12:18 AM, Mike Bayer  wrote:


On 04/25/2016 11:04 AM, Piotr Dobrogost wrote:


Is caching using dogpile what you call "Query cache extension"? If so
we don't use it.


this extension:
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/baked.html?highlight=baked#module-sqlalchemy.ext.baked


Ok. I see this being used in Kotti (the framework I use). For example
here 
https://github.com/Kotti/Kotti/blob/0d162332e369dedb1b4936935e43de89e9665f8e/kotti/resources.py#L812


the stack trace shown here illustrates an INSERT statement in the context of
an ORM flush, which does use the compiled_cache feature; so to that degree,
this part of the puzzle makes sense.


I'm curious what in the log shows that we deal with "INSERT statement
in the context of an ORM flush"?


However, in cx_oracle, an INSERT statement does not return rows there's no
data to be returned; cursor.description should be None and the
BufferredColumnResultProxy will not be used.An INSERT statement in
Oracle that uses "RETURNING", as is usually the case for the ORM INSERT
statement, uses OUT parameters to achieve this and will make use of the
ReturningResultProxy to work around what's needed here.


Ok.


Since we're working without a test case of any kind, are you sure that the
result proxy indicating the problem is not a ReturningResultProxy ?   Are we
dealing with a column that has a server-generated default value that is of
type BLOB, CLOB, or similar ?


There is ReturningResultProxy in the call stack for example here
https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7#file-call-stack-while-executing-line-519-of-result-py-run-for-the-first-time-ever-during-app-startup-L58

and it's the type of "self" which I showed here

https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7#file-call-stack-while-executing-line-519-of-result-py-run-for-the-first-time-ever-during-app-startup-L74

It occurred to me that one can reproduce this rather easily installing
Kotti framework alone (or rather slightly modified version adjusted to
work with Oracle which you can find at
https://github.com/piotr-dobrogost/Kotti/tree/sfx).Kotti has
requirements.txt file where you can replace Kotti==1.3.0-alpha.4 with
git+https://github.com/piotr-dobrogost/Kotti.git@sfx and install
everything with `pip install --no-deps -r requirements.txt`. Running
is simple with `pserve development.ini` command (as Kotti is based on
Pyramid). Before running you have to edit development.ini and change
sqlalchemy.url to point to your Oracle db. When run Kotti creates and
populates db and when you try to access app at http://localhost:5000/
the error happens.


Unfortunately I can't accept bug reports of this form.   At the level 
of, "I can't reproduce it unless I run Kotti", by that logic it is just 
as easily a bug in Kotti itself and I don't have the resources to debug 
issues in other projects.   Kotti developers should be pitching in here 
to do the work of extracting the code paths that Kotti is exercising 
that produces this otherwise not-yet-reproducible behavior into a 
demonstration of unexpected behavior directly against SQLAlchemy.


I'm improving our test suite by ensuring that result processors are 
fired off for all result proxy subtypes when caching is used as well and 
I will ensure the line of code you mention is exercised.  If I can 
reproduce your described issue at that level, then the bug will be 
located and fixed.   I'll keep you posted.





Regards,
Piotr Dobrogost



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-26 Thread Piotr Dobrogost
On Tue, Apr 26, 2016 at 12:18 AM, Mike Bayer  wrote:
>
> On 04/25/2016 11:04 AM, Piotr Dobrogost wrote:
>>
>> Is caching using dogpile what you call "Query cache extension"? If so
>> we don't use it.
>
> this extension:
> http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/baked.html?highlight=baked#module-sqlalchemy.ext.baked

Ok. I see this being used in Kotti (the framework I use). For example
here 
https://github.com/Kotti/Kotti/blob/0d162332e369dedb1b4936935e43de89e9665f8e/kotti/resources.py#L812

> the stack trace shown here illustrates an INSERT statement in the context of
> an ORM flush, which does use the compiled_cache feature; so to that degree,
> this part of the puzzle makes sense.

I'm curious what in the log shows that we deal with "INSERT statement
in the context of an ORM flush"?

> However, in cx_oracle, an INSERT statement does not return rows there's no
> data to be returned; cursor.description should be None and the
> BufferredColumnResultProxy will not be used.An INSERT statement in
> Oracle that uses "RETURNING", as is usually the case for the ORM INSERT
> statement, uses OUT parameters to achieve this and will make use of the
> ReturningResultProxy to work around what's needed here.

Ok.

> Since we're working without a test case of any kind, are you sure that the
> result proxy indicating the problem is not a ReturningResultProxy ?   Are we
> dealing with a column that has a server-generated default value that is of
> type BLOB, CLOB, or similar ?

There is ReturningResultProxy in the call stack for example here
https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7#file-call-stack-while-executing-line-519-of-result-py-run-for-the-first-time-ever-during-app-startup-L58

and it's the type of "self" which I showed here

https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7#file-call-stack-while-executing-line-519-of-result-py-run-for-the-first-time-ever-during-app-startup-L74

It occurred to me that one can reproduce this rather easily installing
Kotti framework alone (or rather slightly modified version adjusted to
work with Oracle which you can find at
https://github.com/piotr-dobrogost/Kotti/tree/sfx). Kotti has
requirements.txt file where you can replace Kotti==1.3.0-alpha.4 with
git+https://github.com/piotr-dobrogost/Kotti.git@sfx and install
everything with `pip install --no-deps -r requirements.txt`. Running
is simple with `pserve development.ini` command (as Kotti is based on
Pyramid). Before running you have to edit development.ini and change
sqlalchemy.url to point to your Oracle db. When run Kotti creates and
populates db and when you try to access app at http://localhost:5000/
the error happens.


Regards,
Piotr Dobrogost

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-25 Thread Mike Bayer



On 04/25/2016 11:04 AM, Piotr Dobrogost wrote:

On Mon, Apr 25, 2016 at 3:23 PM, Mike Bayer  wrote:


OK I can try to work with that but that's a very specific feature, you'd
need to be using the Query cache extension,


I have problem finding information on "Query cache extension".
Googling for  I get
http://docs.sqlalchemy.org/en/latest/orm/examples.html#module-examples.dogpile_caching
as the first result from SA's docs and
http://stackoverflow.com/questions/204918/does-sqlalchemy-support-caching
as the first result from Stackoverflow.

Is caching using dogpile what you call "Query cache extension"? If so
we don't use it.


this extension: 
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/baked.html?highlight=baked#module-sqlalchemy.ext.baked








  or using the "compiled_cache"
option,


There's no "compiled_cache" string in Kotti framework so I guess we
don't use it.


option, *or* the error is being generated within the context of a Session
being flushed within an INSERT or UPDATE RETURNING statement, which itself I
don't think uses the traditional result set model in Oracle anyway.


It's hard to say. Right before printing call stack from line 519 of
result.py there's SELECT logged:


the stack trace shown here illustrates an INSERT statement in the 
context of an ORM flush, which does use the compiled_cache feature; so 
to that degree, this part of the puzzle makes sense.


However, in cx_oracle, an INSERT statement does not return rows there's 
no data to be returned; cursor.description should be None and the 
BufferredColumnResultProxy will not be used.An INSERT statement in 
Oracle that uses "RETURNING", as is usually the case for the ORM INSERT 
statement, uses OUT parameters to achieve this and will make use of the 
ReturningResultProxy to work around what's needed here.


Since we're working without a test case of any kind, are you sure that 
the result proxy indicating the problem is not a ReturningResultProxy ? 
  Are we dealing with a column that has a server-generated default 
value that is of type BLOB, CLOB, or similar ?








2016-04-25 16:36:02,954 INFO
[sqlalchemy.engine.base.Engine][waitress] SELECT nodes.id AS nodes_id,
nodes.type AS nodes_type, nodes.parent_id AS nodes_parent_id,
nodes.position AS nodes_position, nodes."_acl" AS nodes__acl,
nodes.name AS nodes_name, nodes.title AS nodes_title,
nodes.annotations AS nodes_annotations, nodes.path AS nodes_path
FROM nodes LEFT OUTER JOIN contents ON nodes.id = contents.id LEFT
OUTER JOIN documents ON contents.id = documents.id LEFT OUTER JOIN
files ON contents.id = files.id LEFT OUTER JOIN images ON contents.id
= images.id
WHERE nodes.parent_id IS NULL

and right before the traceback is printed there's rollback being logged:
2016-04-25 16:36:02,975 INFO  [sqlalchemy.engine.base.Engine][waitress] ROLLBACK

however I don't know if it answers your last question.


any of these true for your case ?  Looks like you haven't provided a stack
trace for your issue, I'm sure you have that, so send that along.


Please find all information at
https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7
Probably you are most interested in the raw log (file named "Full log
from the start of the application (with empty db)"). Please note that
I modified line 519 in result.py by inserting the following code:
import traceback; traceback.print_stack();
to get call stack at this moment of execution.
Also the nature of the bug is such that there's no information in the
log about original problem (not executing some processors which were
registered and executed earlier). I'm guessing there are scopes during
which one should not modify the list of registered processors. If it
happens that this list (as a consequence of this bug) is being cleared
while one of such scopes that it would be awesome if SA could somehow
notice that the list of processors was changed in some scope where it
should be left intact and log warning or raise hard error. This way
the reason for error would be much easier to find in this case and
probably also in future.


Regards,
Piotr Dobrogost



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-25 Thread Piotr Dobrogost
On Mon, Apr 25, 2016 at 3:23 PM, Mike Bayer  wrote:
>
> OK I can try to work with that but that's a very specific feature, you'd
> need to be using the Query cache extension,

I have problem finding information on "Query cache extension".
Googling for  I get
http://docs.sqlalchemy.org/en/latest/orm/examples.html#module-examples.dogpile_caching
as the first result from SA's docs and
http://stackoverflow.com/questions/204918/does-sqlalchemy-support-caching
as the first result from Stackoverflow.

Is caching using dogpile what you call "Query cache extension"? If so
we don't use it.

>  or using the "compiled_cache"
> option,

There's no "compiled_cache" string in Kotti framework so I guess we
don't use it.

> option, *or* the error is being generated within the context of a Session
> being flushed within an INSERT or UPDATE RETURNING statement, which itself I
> don't think uses the traditional result set model in Oracle anyway.

It's hard to say. Right before printing call stack from line 519 of
result.py there's SELECT logged:
2016-04-25 16:36:02,954 INFO
[sqlalchemy.engine.base.Engine][waitress] SELECT nodes.id AS nodes_id,
nodes.type AS nodes_type, nodes.parent_id AS nodes_parent_id,
nodes.position AS nodes_position, nodes."_acl" AS nodes__acl,
nodes.name AS nodes_name, nodes.title AS nodes_title,
nodes.annotations AS nodes_annotations, nodes.path AS nodes_path
FROM nodes LEFT OUTER JOIN contents ON nodes.id = contents.id LEFT
OUTER JOIN documents ON contents.id = documents.id LEFT OUTER JOIN
files ON contents.id = files.id LEFT OUTER JOIN images ON contents.id
= images.id
WHERE nodes.parent_id IS NULL

and right before the traceback is printed there's rollback being logged:
2016-04-25 16:36:02,975 INFO  [sqlalchemy.engine.base.Engine][waitress] ROLLBACK

however I don't know if it answers your last question.

> any of these true for your case ?  Looks like you haven't provided a stack
> trace for your issue, I'm sure you have that, so send that along.

Please find all information at
https://gist.github.com/piotr-dobrogost/6d57cacb9e77f59748353b2b6c1334d7
Probably you are most interested in the raw log (file named "Full log
from the start of the application (with empty db)"). Please note that
I modified line 519 in result.py by inserting the following code:
import traceback; traceback.print_stack();
to get call stack at this moment of execution.
Also the nature of the bug is such that there's no information in the
log about original problem (not executing some processors which were
registered and executed earlier). I'm guessing there are scopes during
which one should not modify the list of registered processors. If it
happens that this list (as a consequence of this bug) is being cleared
while one of such scopes that it would be awesome if SA could somehow
notice that the list of processors was changed in some scope where it
should be left intact and log warning or raise hard error. This way
the reason for error would be much easier to find in this case and
probably also in future.


Regards,
Piotr Dobrogost

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-25 Thread Mike Bayer



On 04/25/2016 05:49 AM, Piotr Dobrogost wrote:

On Friday, April 22, 2016 at 5:22:38 PM UTC+2, Mike Bayer wrote:


On 04/22/2016 10:40 AM, Piotr Dobrogost wrote:
 > It seems BufferedColumnResultProxy class (used only in Oracle
dialect)
 > has a bug.

I cannot confirm it, would need a test case.  Here's one that is
extremely simple but exercises the features you describe.   I have no
doubt that what you're seeing is a bug however so I'd need to know what
to add to this test to illustrate the issue:


Thanks for quick response and creation of sample code. We have trouble
adding something that would trigger bug as we don't fully understand
source code of SA and in addition we use other framework (Kotti) which
adds its layer of complexity. Having said that we think that what's
needed to reproduce bug is to modify your test case so that line 519 in
result.py would get hit
(https://github.com/zzzeek/sqlalchemy/blob/rel_1_0_12/lib/sqlalchemy/engine/result.py#L519).


OK I can try to work with that but that's a very specific feature, you'd 
need to be using the Query cache extension, or using the 
"compiled_cache" option, *or* the error is being generated within the 
context of a Session being flushed within an INSERT or UPDATE RETURNING 
statement, which itself I don't think uses the traditional result set 
model in Oracle anyway.   Are any of these true for your case ?  Looks 
like you haven't provided a stack trace for your issue, I'm sure you 
have that, so send that along.







There is no need for you or your developers to attempt to debug deep
issues like these; I can identify them within minutes given a
demonstration of the behavior and I am glad to do this work once an
incorrect behavior is demonstrated.


Thank you very much for offering your help.

The work that you need to do when you encounter stack traces like this
is to isolate the behavior into a simple test case, such as the one I
have above.Using the "divide and conquer" approach, where you begin
with your whole application, then slowly remove pieces of it that
continue to exhibit the bad behavior until you have something entirely
minimal.  The guidelines that I often refer to at
http://stackoverflow.com/help/mcve
 have an excellent description
of this.


Sure thing. The problem in practice is that in order to create "simple
test case" it's often needed to get good understanding of used libraries
at the source code level. The good example is your sample test case
which we don't know how to modify in spite of the fact that we suspect
which specific line needs to be hit.


Regards,
Piotr Dobrogost

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-25 Thread Piotr Dobrogost
On Friday, April 22, 2016 at 5:22:38 PM UTC+2, Mike Bayer wrote:
>
>
> On 04/22/2016 10:40 AM, Piotr Dobrogost wrote: 
> > It seems BufferedColumnResultProxy class (used only in Oracle dialect) 
> > has a bug. 
>
 

> I cannot confirm it, would need a test case.  Here's one that is 
> extremely simple but exercises the features you describe.   I have no 
> doubt that what you're seeing is a bug however so I'd need to know what 
> to add to this test to illustrate the issue: 
>

Thanks for quick response and creation of sample code. We have trouble 
adding something that would trigger bug as we don't fully understand source 
code of SA and in addition we use other framework (Kotti) which adds its 
layer of complexity. Having said that we think that what's needed to 
reproduce bug is to modify your test case so that line 519 in result.py 
would get hit 
(https://github.com/zzzeek/sqlalchemy/blob/rel_1_0_12/lib/sqlalchemy/engine/result.py#L519).

There is no need for you or your developers to attempt to debug deep 
> issues like these; I can identify them within minutes given a 
> demonstration of the behavior and I am glad to do this work once an 
> incorrect behavior is demonstrated. 
>

Thank you very much for offering your help. 

The work that you need to do when you encounter stack traces like this 
> is to isolate the behavior into a simple test case, such as the one I 
> have above.Using the "divide and conquer" approach, where you begin 
> with your whole application, then slowly remove pieces of it that 
> continue to exhibit the bad behavior until you have something entirely 
> minimal.  The guidelines that I often refer to at 
> http://stackoverflow.com/help/mcve have an excellent description of this. 
>

Sure thing. The problem in practice is that in order to create "simple test 
case" it's often needed to get good understanding of used libraries at the 
source code level. The good example is your sample test case which we don't 
know how to modify in spite of the fact that we suspect which specific line 
needs to be hit.


Regards,
Piotr Dobrogost

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bug in BufferedColumnResultProxy class?

2016-04-22 Thread Piotr Dobrogost
It seems BufferedColumnResultProxy class (used only in Oracle dialect) has 
a bug.

_init_metadata() method defined at 
https://github.com/zzzeek/sqlalchemy/blob/rel_1_0_12/lib/sqlalchemy/engine/result.py#L1238
is being called even for already cached data (that's probably bug) and 
because this method clears a list of processors (metadata._processors = 
[None for _ in range(len(metadata.keys))]) the list of original processors 
(metadata._orig_processors) gets cleared the second time _init_metadata() 
method is called. The end result is that column's value is not being 
processed to proper Python type from the db original type and one gets 
errors like:

File 
"/home/piotr/.virtualenvs/kotti/lib/python2.7/site-packages/sqlalchemy/ext/mutable.py",
 
line 403, in coerce
raise ValueError(msg % (key, type(value)))
ValueError: Attribute '_acl' does not accept objects of type 

Whole traceback is available at http://pastebin.com/Ssui33XC

Guarding method's body after line L1240 with "if not hasattr(metadata, 
'_orig_processors'):" solves the problem but it's neither pretty nor it's 
the right solution. The right solution is probably to not call 
_init_metadata() method on cached data at all.

I'd appreciate if someone with good understanding of SA internals could 
take a look and confirm (better yet fix) this bug. I have to admit it took 
a top notch developer a couple of hours of debugging to go through much of 
SA code and establish the root cause of this problem.

I'm going to raise issue on tracker as soon as someone confirms this bug.

Best regards,
Piotr Dobrogost

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] bug with func.lower in bindparam?

2016-03-25 Thread Mike Bayer



On 03/25/2016 09:48 AM, Brian Cherinka wrote:


Hi,

Is there perhaps a bug in the sqlalchemy function lower (func.lower)?
  I'm trying to do a simple query that filters on a string keyword (the
label of a header_keyword).  I want to bind the parameter so I can
easily update the label after the fact, and I want to use the func.lower
to ensure no funny stuff, and all lowercase.  Without using the lower
everything works fine, and when I print the query, the correct variables
are inserted into the query.  However, it seems when I use func.lower,
the bindparam no longer works and inserts NULL into the variable.  Any
ideas what I'm doing wrong?  Is there a proper way to write this such
that it works?Thanks.

*Without Lower*

|
In[59]:tmpq=session.query(Cube).join(HeaderValue,HeaderKeyword).filter(HeaderKeyword.label==bindparam('header_keyword.label','PLATE'))

In[55]:printtmpq

SELECT cube.*
FROM cube JOIN header_value ON cube.pk =header_value.cube_pk JOIN
header_keyword ON header_keyword.pk =header_value.header_keyword_pk
WHERE header_keyword.label =%(header_keyword.label)s

In[56]:printtmpq.statement.compile(compile_kwargs={'literal_binds':True})

SELECT cube.*
FROM cube JOIN header_value ON cube.pk =header_value.cube_pk JOIN
header_keyword ON header_keyword.pk =header_value.header_keyword_pk
WHERE header_keyword.label ='PLATE'

|


*With Lower*

|
In[59]:tmpq=session.query(Cube).join(HeaderValue,HeaderKeyword).filter(func.lower(HeaderKeyword.label)==bindparam('header_keyword.label',func.lower('PLATE')))


the above is not valid:

   bindparam(somename, func.lower(something))   # <-- not valid

the argument to bindparam must be a Python literal value, not a SQL 
expression.   Bound parameters are interpreted by the database driver 
and the client API of the database itself and cannot evaluate SQL 
expressions.


In this case you just need to put lower on the outside:

func.lower(bindparam(somename))








In[60]:printtmpq

SELECT cube.*
FROM cube JOIN header_value ON cube.pk =header_value.cube_pk JOIN
header_keyword ON header_keyword.pk =header_value.header_keyword_pk
WHERE lower(header_keyword.label)=%(header_keyword.label)s

In[61]:printtmpq.statement.compile(compile_kwargs={'literal_binds':True})

SELECT cube.*
FROM cube JOIN header_value ON cube.pk =header_value.cube_pk JOIN
header_keyword ON header_keyword.pk =header_value.header_keyword_pk
WHERE lower(header_keyword.label)=NULL

|


--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] [Bug?][v0.9.8][postgres] Table.create(): checkfirst=True does NOT work for temporary tables

2014-12-04 Thread Ladislav Lenart
Hello.

The following code crashes:

# db init...
meta = MetaData()
foo = Table('tmp_foo' meta,
Column('id', Integer, primary_key=True),
prefixes=['TEMPORARY'],
)
conn = session.connection()
foo.create(conn, checkfirst=True)
foo.create(conn, checkfirst=True)


This is because the 'check-first' logic emmits the following SQL:

SELECT relname
FROM
pg_class c
JOIN pg_namespace n ON n.oid=c.relnamespace
WHERE
n.nspname=CURRENT_SCHEMA()
AND relname='tmp_foo'


The culrpit is in a call to CURRENT_SCHEMA() because temporary tables in
postgres reside in a special schema.

Is there a fix/workaround for this?


Thank you in advance,

Ladislav Lenart

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] [Bug?][v0.9.8][postgres] Table.create(): checkfirst=True does NOT work for temporary tables

2014-12-04 Thread Michael Bayer

 On Dec 4, 2014, at 9:36 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.
 
 The following code crashes:
 
 # db init...
 meta = MetaData()
 foo = Table('tmp_foo' meta,
Column('id', Integer, primary_key=True),
prefixes=['TEMPORARY'],
 )
 conn = session.connection()
 foo.create(conn, checkfirst=True)
 foo.create(conn, checkfirst=True)
 
 
 This is because the 'check-first' logic emmits the following SQL:
 
 SELECT relname
 FROM
pg_class c
JOIN pg_namespace n ON n.oid=c.relnamespace
 WHERE
n.nspname=CURRENT_SCHEMA()
AND relname='tmp_foo'
 
 
 The culrpit is in a call to CURRENT_SCHEMA() because temporary tables in
 postgres reside in a special schema.
 
 Is there a fix/workaround for this?

issue 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3264/has_table-in-postgresql-doesnt-work-for
 has been created and fixed for 1.0.   

For now, I’d avoid using “checkfirst” for a temporary table.  As this is only 
local to a transaction it should be straightforward just to make sure the code 
is only calling create() once.  If this is unavoidable, then place the call to 
create() within a conditional that runs the new query: 

select relname from pg_class c join pg_namespace n on 
n.oid=c.relnamespace where 
pg_catalog.pg_table_is_visible(c.oid) 
and relname=:name”,

PG’s behavior unfortunately allows a non-temporary table to silently overwrite 
a temporary one, so this change is a significant behavioral change to the 
checkfirst flag.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] [Bug?][v0.9.8][postgres] Table.create(): checkfirst=True does NOT work for temporary tables

2014-12-04 Thread Ladislav Lenart
Wow! Thank you!

I guess this is a near-light-speed support in practice! :-)

I stumbled upon this issue while I was trying to figure out how to work with
temporary tables in SQLAlchemy. Final version of my code does not use the
checkfirst flag at all, because I know when to create and when to drop the temp
table.

However it is nice to know that it will work correctly in all circumstances.

Please, keep up the excellent work, because SQLAlchemy is probably the best
library I have ever worked with! :-)


Thank you,

Ladislav Lenart


On 4.12.2014 18:03, Michael Bayer wrote:
 
 On Dec 4, 2014, at 9:36 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 The following code crashes:

 # db init...
 meta = MetaData()
 foo = Table('tmp_foo' meta,
Column('id', Integer, primary_key=True),
prefixes=['TEMPORARY'],
 )
 conn = session.connection()
 foo.create(conn, checkfirst=True)
 foo.create(conn, checkfirst=True)


 This is because the 'check-first' logic emmits the following SQL:

 SELECT relname
 FROM
pg_class c
JOIN pg_namespace n ON n.oid=c.relnamespace
 WHERE
n.nspname=CURRENT_SCHEMA()
AND relname='tmp_foo'


 The culrpit is in a call to CURRENT_SCHEMA() because temporary tables in
 postgres reside in a special schema.

 Is there a fix/workaround for this?
 
 issue 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3264/has_table-in-postgresql-doesnt-work-for
  has been created and fixed for 1.0.   
 
 For now, I’d avoid using “checkfirst” for a temporary table.  As this is only 
 local to a transaction it should be straightforward just to make sure the 
 code is only calling create() once.  If this is unavoidable, then place the 
 call to create() within a conditional that runs the new query: 
 
 select relname from pg_class c join pg_namespace n on 
 n.oid=c.relnamespace where 
 pg_catalog.pg_table_is_visible(c.oid) 
 and relname=:name”,
 
 PG’s behavior unfortunately allows a non-temporary table to silently 
 overwrite a temporary one, so this change is a significant behavioral change 
 to the checkfirst flag.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Bug bindparam/MySql

2014-03-07 Thread Jas Per
I get an exception, when I try to use a bindparam variable twice in a query 
in MySql like this:

TestTable.name == bindparam('username', type_ = String),
TestTable.username == bindparam('username', type_ = String)

- ubuntu 13.10 64bit
- python 3.3.2
- sqlalchemy 0.9.3
- mysql 5.5.35
- mysql-connector-python 1.1.4

tested with sqlite and postgres as well, both do not fail. looks like mysql 
needs two separate parameters {'username_1': 'test','username_2': 'test'} 
..? full test case:

from sqlalchemy.engine import create_engine
from sqlalchemy.sql.expression import select, and_, bindparam, insert
from sqlalchemy.types import String,Integer
from sqlalchemy.ext.declarative.api import declarative_base
from sqlalchemy.sql.schema import Column

def testCase(dbType):
engine = connectDB(dbType)
connection = engine.connect()

Base = declarative_base()

class TestTable(Base):
__tablename__ = 'TestTable'

ID = Column(Integer, primary_key=True)
name = Column(String(255), nullable=False)
username = Column(String(255), nullable=False)

Base.metadata.create_all(bind=engine)

insData = [{'name':'test','username':'test'}]
connection.execute(insert(TestTable,insData))

statement = select([TestTable]).where(and_(
TestTable.name == bindparam('username', type_ = String),
TestTable.username == bindparam('username', type_ = 
String)
 ))

dbres = 
connection.execute(statement,username='test',usName='test').fetchall()

assert len(dbres)
print('PASSED: '+dbType)
engine.dispose()
connection.close()

def connectDB(dbType):
if dbType == 'sqlite':
connectstring = 'sqlite://'
engine = create_engine(connectstring, echo=True)
elif dbType == 'postgres':
connectstring = 'postgresql://postgres:test@localhost/'
engine = create_engine(connectstring, echo=False)
con = engine.connect()
checkExists = con.execute(SELECT datname FROM 
pg_catalog.pg_database WHERE datname = 'testcasedb';).fetchall()
if not len(checkExists):
con.execute(commit)
con.execute(CREATE DATABASE testcasedb)
con.execute(commit)
con.close()
engine.dispose()
engine = create_engine(connectstring+'testcasedb', echo=True)
elif dbType == 'mysql':
connectstring = 'mysql+mysqlconnector://root:test@localhost/'
engine = create_engine(connectstring, echo=False)
con = engine.connect()
checkExists = con.execute(SELECT SCHEMA_NAME FROM 
INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'testcasedb';).fetchall()
if not len(checkExists):
con.execute(CREATE DATABASE IF NOT EXISTS testcasedb;)#IF NOT 
EXISTS gives a warning on mysql that throws a DatabaseError in sqla
con.close()
engine.dispose()
engine = create_engine(connectstring+'testcasedb', echo=True)
return engine


testCase('sqlite') #works
testCase('postgres') #works
testCase('mysql') #fails!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug bindparam/MySql

2014-03-07 Thread Michael Bayer
I'll look into what happens there but you probably need to use the same 
bindparam() object for now:

b = bindparam('username', type_=String)

tt.name == b
tt.username == b

On Mar 7, 2014, at 5:47 AM, Jas Per muz...@gmail.com wrote:

 I get an exception, when I try to use a bindparam variable twice in a query 
 in MySql like this:
 
 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = String)
 
 - ubuntu 13.10 64bit
 - python 3.3.2
 - sqlalchemy 0.9.3
 - mysql 5.5.35
 - mysql-connector-python 1.1.4
 
 tested with sqlite and postgres as well, both do not fail. looks like mysql 
 needs two separate parameters {'username_1': 'test','username_2': 'test'} ..? 
 full test case:
 
 from sqlalchemy.engine import create_engine
 from sqlalchemy.sql.expression import select, and_, bindparam, insert
 from sqlalchemy.types import String,Integer
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.sql.schema import Column
 
 def testCase(dbType):
 engine = connectDB(dbType)
 connection = engine.connect()
 
 Base = declarative_base()
 
 class TestTable(Base):
 __tablename__ = 'TestTable'
 
 ID = Column(Integer, primary_key=True)
 name = Column(String(255), nullable=False)
 username = Column(String(255), nullable=False)
 
 Base.metadata.create_all(bind=engine)
 
 insData = [{'name':'test','username':'test'}]
 connection.execute(insert(TestTable,insData))
 
 statement = select([TestTable]).where(and_(
 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = 
 String)
  ))
 
 dbres = 
 connection.execute(statement,username='test',usName='test').fetchall()
 
 assert len(dbres)
 print('PASSED: '+dbType)
 engine.dispose()
 connection.close()
 
 def connectDB(dbType):
 if dbType == 'sqlite':
 connectstring = 'sqlite://'
 engine = create_engine(connectstring, echo=True)
 elif dbType == 'postgres':
 connectstring = 'postgresql://postgres:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT datname FROM pg_catalog.pg_database 
 WHERE datname = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(commit)
 con.execute(CREATE DATABASE testcasedb)
 con.execute(commit)
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 elif dbType == 'mysql':
 connectstring = 'mysql+mysqlconnector://root:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT SCHEMA_NAME FROM 
 INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(CREATE DATABASE IF NOT EXISTS testcasedb;)#IF NOT 
 EXISTS gives a warning on mysql that throws a DatabaseError in sqla
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 return engine
 
 
 testCase('sqlite') #works
 testCase('postgres') #works
 testCase('mysql') #fails!
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug bindparam/MySql

2014-03-07 Thread Jas Per
unfortunately that doesn't help - its MySql server rejecting the statement 
(SQlite and Postgres accept their delivered statements)
thanks for helping!

On Friday, March 7, 2014 3:44:00 PM UTC+1, Michael Bayer wrote:

 I’ll look into what happens there but you probably need to use the same 
 bindparam() object for now:

 b = bindparam(‘username’, type_=String)

 tt.name == b
 tt.username == b

 On Mar 7, 2014, at 5:47 AM, Jas Per muz...@gmail.com javascript: 
 wrote:

 I get an exception, when I try to use a bindparam variable twice in a 
 query in MySql like this:

 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = String)

 - ubuntu 13.10 64bit
 - python 3.3.2
 - sqlalchemy 0.9.3
 - mysql 5.5.35
 - mysql-connector-python 1.1.4

 tested with sqlite and postgres as well, both do not fail. looks like 
 mysql needs two separate parameters {'username_1': 'test','username_2': 
 'test'} ..? full test case:

 from sqlalchemy.engine import create_engine
 from sqlalchemy.sql.expression import select, and_, bindparam, insert
 from sqlalchemy.types import String,Integer
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.sql.schema import Column

 def testCase(dbType):
 engine = connectDB(dbType)
 connection = engine.connect()
 
 Base = declarative_base()
 
 class TestTable(Base):
 __tablename__ = 'TestTable'
 
 ID = Column(Integer, primary_key=True)
 name = Column(String(255), nullable=False)
 username = Column(String(255), nullable=False)
 
 Base.metadata.create_all(bind=engine)
 
 insData = [{'name':'test','username':'test'}]
 connection.execute(insert(TestTable,insData))
 
 statement = select([TestTable]).where(and_(
 TestTable.name == bindparam('username', type_ = 
 String),
 TestTable.username == bindparam('username', type_ = 
 String)
  ))
 
 dbres = 
 connection.execute(statement,username='test',usName='test').fetchall()
 
 assert len(dbres)
 print('PASSED: '+dbType)
 engine.dispose()
 connection.close()

 def connectDB(dbType):
 if dbType == 'sqlite':
 connectstring = 'sqlite://'
 engine = create_engine(connectstring, echo=True)
 elif dbType == 'postgres':
 connectstring = 'postgresql://postgres:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT datname FROM 
 pg_catalog.pg_database WHERE datname = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(commit)
 con.execute(CREATE DATABASE testcasedb)
 con.execute(commit)
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 elif dbType == 'mysql':
 connectstring = 'mysql+mysqlconnector://root:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT SCHEMA_NAME FROM 
 INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(CREATE DATABASE IF NOT EXISTS testcasedb;)#IF 
 NOT EXISTS gives a warning on mysql that throws a DatabaseError in sqla
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 return engine


 testCase('sqlite') #works
 testCase('postgres') #works
 testCase('mysql') #fails!

 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+...@googlegroups.com javascript:.
 To post to this group, send email to sqlal...@googlegroups.comjavascript:
 .
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Bug bindparam/MySql

2014-03-07 Thread Michael Bayer
doesn't fail for me.  MySQL/connector/python is up to 1.1.6 so try that.

otherwise, need full log output + stack trace + exception message.



2014-03-07 11:52:06,805 INFO sqlalchemy.engine.base.Engine INSERT INTO 
`TestTable` (name, username) VALUES (%(name_0)s, %(username_0)s)
2014-03-07 11:52:06,806 INFO sqlalchemy.engine.base.Engine {'name_0': 'test', 
'username_0': 'test'}
2014-03-07 11:52:06,806 INFO sqlalchemy.engine.base.Engine COMMIT
2014-03-07 11:52:06,807 INFO sqlalchemy.engine.base.Engine SELECT 
`TestTable`.`ID`, `TestTable`.name, `TestTable`.username 
FROM `TestTable` 
WHERE `TestTable`.name = %(username)s AND `TestTable`.username = %(username)s
2014-03-07 11:52:06,807 INFO sqlalchemy.engine.base.Engine {'username': 'test'}
PASSED: mysql




On Mar 7, 2014, at 11:36 AM, Jas Per muz...@gmail.com wrote:

 unfortunately that doesn't help - its MySql server rejecting the statement 
 (SQlite and Postgres accept their delivered statements)
 thanks for helping!
 
 On Friday, March 7, 2014 3:44:00 PM UTC+1, Michael Bayer wrote:
 I'll look into what happens there but you probably need to use the same 
 bindparam() object for now:
 
 b = bindparam('username', type_=String)
 
 tt.name == b
 tt.username == b
 
 On Mar 7, 2014, at 5:47 AM, Jas Per muz...@gmail.com wrote:
 
 I get an exception, when I try to use a bindparam variable twice in a query 
 in MySql like this:
 
 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = String)
 
 - ubuntu 13.10 64bit
 - python 3.3.2
 - sqlalchemy 0.9.3
 - mysql 5.5.35
 - mysql-connector-python 1.1.4
 
 tested with sqlite and postgres as well, both do not fail. looks like mysql 
 needs two separate parameters {'username_1': 'test','username_2': 'test'} 
 ..? full test case:
 
 from sqlalchemy.engine import create_engine
 from sqlalchemy.sql.expression import select, and_, bindparam, insert
 from sqlalchemy.types import String,Integer
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.sql.schema import Column
 
 def testCase(dbType):
 engine = connectDB(dbType)
 connection = engine.connect()
 
 Base = declarative_base()
 
 class TestTable(Base):
 __tablename__ = 'TestTable'
 
 ID = Column(Integer, primary_key=True)
 name = Column(String(255), nullable=False)
 username = Column(String(255), nullable=False)
 
 Base.metadata.create_all(bind=engine)
 
 insData = [{'name':'test','username':'test'}]
 connection.execute(insert(TestTable,insData))
 
 statement = select([TestTable]).where(and_(
 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = 
 String)
  ))
 
 dbres = 
 connection.execute(statement,username='test',usName='test').fetchall()
 
 assert len(dbres)
 print('PASSED: '+dbType)
 engine.dispose()
 connection.close()
 
 def connectDB(dbType):
 if dbType == 'sqlite':
 connectstring = 'sqlite://'
 engine = create_engine(connectstring, echo=True)
 elif dbType == 'postgres':
 connectstring = 'postgresql://postgres:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT datname FROM 
 pg_catalog.pg_database WHERE datname = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(commit)
 con.execute(CREATE DATABASE testcasedb)
 con.execute(commit)
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 elif dbType == 'mysql':
 connectstring = 'mysql+mysqlconnector://root:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT SCHEMA_NAME FROM 
 INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(CREATE DATABASE IF NOT EXISTS testcasedb;)#IF NOT 
 EXISTS gives a warning on mysql that throws a DatabaseError in sqla
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 return engine
 
 
 testCase('sqlite') #works
 testCase('postgres') #works
 testCase('mysql') #fails!
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+...@googlegroups.com.
 To post to this group, send email to sqlal...@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.
 
 
 -- 
 You received this message because you are subscribed to the Google 

Re: [sqlalchemy] Bug bindparam/MySql

2014-03-07 Thread Jas Per
upgrading connector to 1.1.6 didn't change anything. tried python 2.7 after 
that et voila: works! after some debugging found this in 
mysql/connector/cursor.py line 498ff:

if isinstance(params, dict):
for key, value in self._process_params_dict(params).items():
stmt = stmt.replace(key, value, 1)

the python2.7 version uses the old string formatting, so this is only in 
python3 - will talk to the connector guys.
anyway thanks for having a look and your work on sqlalchemy, very much 
appreciated!

On Friday, March 7, 2014 5:54:20 PM UTC+1, Michael Bayer wrote:

 doesn’t fail for me.  MySQL/connector/python is up to 1.1.6 so try that.

 otherwise, need full log output + stack trace + exception message.



 2014-03-07 11:52:06,805 INFO sqlalchemy.engine.base.Engine INSERT INTO 
 `TestTable` (name, username) VALUES (%(name_0)s, %(username_0)s)
 2014-03-07 11:52:06,806 INFO sqlalchemy.engine.base.Engine {'name_0': 
 'test', 'username_0': 'test'}
 2014-03-07 11:52:06,806 INFO sqlalchemy.engine.base.Engine COMMIT
 2014-03-07 11:52:06,807 INFO sqlalchemy.engine.base.Engine SELECT 
 `TestTable`.`ID`, `TestTable`.name, `TestTable`.username 
 FROM `TestTable` 
 WHERE `TestTable`.name = %(username)s AND `TestTable`.username = 
 %(username)s
 2014-03-07 11:52:06,807 INFO sqlalchemy.engine.base.Engine {'username': 
 'test'}
 PASSED: mysql




 On Mar 7, 2014, at 11:36 AM, Jas Per muz...@gmail.com javascript: 
 wrote:

 unfortunately that doesn't help - its MySql server rejecting the statement 
 (SQlite and Postgres accept their delivered statements)
 thanks for helping!

 On Friday, March 7, 2014 3:44:00 PM UTC+1, Michael Bayer wrote:

 I’ll look into what happens there but you probably need to use the same 
 bindparam() object for now:

 b = bindparam(‘username’, type_=String)

 tt.name == b
 tt.username == b

 On Mar 7, 2014, at 5:47 AM, Jas Per muz...@gmail.com wrote:

 I get an exception, when I try to use a bindparam variable twice in a 
 query in MySql like this:

 TestTable.name == bindparam('username', type_ = String),
 TestTable.username == bindparam('username', type_ = String)

 - ubuntu 13.10 64bit
 - python 3.3.2
 - sqlalchemy 0.9.3
 - mysql 5.5.35
 - mysql-connector-python 1.1.4

 tested with sqlite and postgres as well, both do not fail. looks like 
 mysql needs two separate parameters {'username_1': 'test','username_2': 
 'test'} ..? full test case:

 from sqlalchemy.engine import create_engine
 from sqlalchemy.sql.expression import select, and_, bindparam, insert
 from sqlalchemy.types import String,Integer
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.sql.schema import Column

 def testCase(dbType):
 engine = connectDB(dbType)
 connection = engine.connect()
 
 Base = declarative_base()
 
 class TestTable(Base):
 __tablename__ = 'TestTable'
 
 ID = Column(Integer, primary_key=True)
 name = Column(String(255), nullable=False)
 username = Column(String(255), nullable=False)
 
 Base.metadata.create_all(bind=engine)
 
 insData = [{'name':'test','username':'test'}]
 connection.execute(insert(TestTable,insData))
 
 statement = select([TestTable]).where(and_(
 TestTable.name == bindparam('username', type_ = 
 String),
 TestTable.username == bindparam('username', type_ = 
 String)
  ))
 
 dbres = 
 connection.execute(statement,username='test',usName='test').fetchall()
 
 assert len(dbres)
 print('PASSED: '+dbType)
 engine.dispose()
 connection.close()

 def connectDB(dbType):
 if dbType == 'sqlite':
 connectstring = 'sqlite://'
 engine = create_engine(connectstring, echo=True)
 elif dbType == 'postgres':
 connectstring = 'postgresql://postgres:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT datname FROM 
 pg_catalog.pg_database WHERE datname = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(commit)
 con.execute(CREATE DATABASE testcasedb)
 con.execute(commit)
 con.close()
 engine.dispose()
 engine = create_engine(connectstring+'testcasedb', echo=True)
 elif dbType == 'mysql':
 connectstring = 'mysql+mysqlconnector://root:test@localhost/'
 engine = create_engine(connectstring, echo=False)
 con = engine.connect()
 checkExists = con.execute(SELECT SCHEMA_NAME FROM 
 INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'testcasedb';).fetchall()
 if not len(checkExists):
 con.execute(CREATE DATABASE IF NOT EXISTS testcasedb;)#IF 
 NOT EXISTS gives a warning on mysql that throws a DatabaseError in sqla
 con.close()
 engine.dispose()
 engine = 

Re: [sqlalchemy] Bug in query with multiple joins when using joined inheritance?

2013-06-15 Thread Michael Bayer
I didn't think it would be, but it is a bug, yes, am applying the patch in 
http://www.sqlalchemy.org/trac/ticket/2759 right now.


On Jun 14, 2013, at 11:52 PM, Seth P spadow...@gmail.com wrote:

 I've encountered what I believe to be a bug in SQLAlchemy (versions 0.8.0 and 
 0.8.1) in a query that joins class/tables that use joined inheritance.
 
 In the code below, I would expect the three queries to produce the same 
 output, namely [u'CCC'], but the first one gives a different (incorrect) 
 result, [u'BBB']. Is this a bug, or is the query malformed?
 In the second query, adding a seemingly gratuitous join with D fixes the 
 problem. And as the third query shows, replacing C by an aliased version also 
 fixes the problem. So whatever is going on seems rather subtle.
 
 Thanks,
 
 Seth
 
 
 
 from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.orm import relationship, sessionmaker, scoped_session
 from sqlalchemy.orm.util import aliased
 
 Base = declarative_base(object)
 metadata = Base.metadata
 
 class A(Base):
 __tablename__ = 'A'
 idx = Column(Integer, primary_key=True)
 name = Column(String(20), nullable=False)
 type_idx = Column(Integer, nullable=False)
 __mapper_args__ = { 'polymorphic_on':type_idx }
 
 class B(A):
 __tablename__ = 'B'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 __mapper_args__ = { 'polymorphic_identity':2 }
 
 class C(A):
 __tablename__ = 'C'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 b_idx = Column(Integer, ForeignKey(str(B.__table__) + .idx), 
 nullable=False)
 b = relationship(B, foreign_keys=[b_idx])
 __mapper_args__ = { 'polymorphic_identity':3 }
 
 class D(A):
 __tablename__ = 'D'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 __mapper_args__ = { 'polymorphic_identity':4 }
 
 class CtoD(Base):
 __tablename__ = 'CtoD'
 idx = Column(Integer, primary_key=True)
 c_idx = Column(Integer, ForeignKey(str(C.__table__) + .idx), 
 nullable=False)
 c = relationship(C, foreign_keys=[c_idx])
 d_idx = Column(Integer, ForeignKey(str(D.__table__) + .idx), 
 nullable=False)
 d = relationship(D, foreign_keys=[d_idx])
 
 if __name__ == '__main__':
 engine = create_engine('sqlite:///:memory:', echo=False)
 metadata.create_all(bind=engine)
 Session = scoped_session(sessionmaker(bind=engine))
 session = Session()
 
 # populate tables with a single entry in each table
 b = B(name='BBB')
 c = C(name='CCC', b=b)
 d = D(name='DDD')
 c_to_d = CtoD(c=c, d=d)
 session.add_all([b, c, d, c_to_d])
 session.commit()
 
 sql_query = session.query(B, C.name).join(C, B.idx == C.b_idx).join(CtoD, 
 C.idx == CtoD.c_idx).join(D, CtoD.d_idx == D.idx)
 print [name for (_, name) in sql_query.all()]  # [u'BBB']
 
 sql_query = session.query(B, C.name).join(C, B.idx == C.b_idx).join(CtoD, 
 C.idx == CtoD.c_idx)
 print [name for (_, name) in sql_query.all()]  # [u'CCC']
 
 aliased_C = aliased(C)
 sql_query = session.query(B, aliased_C.name).join(aliased_C, B.idx == 
 aliased_C.b_idx).join(CtoD, aliased_C.idx == CtoD.c_idx).join(D, CtoD.d_idx 
 == D.idx).join(D, CtoD.d_idx == D.idx)
 print [name for (_, name) in sql_query.all()]  # [u'CCC']
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.
  
  

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Bug in query with multiple joins when using joined inheritance?

2013-06-15 Thread Michael Bayer
fixed in master and rel_0_8.


On Jun 15, 2013, at 3:02 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 I didn't think it would be, but it is a bug, yes, am applying the patch in 
 http://www.sqlalchemy.org/trac/ticket/2759 right now.
 
 
 On Jun 14, 2013, at 11:52 PM, Seth P spadow...@gmail.com wrote:
 
 I've encountered what I believe to be a bug in SQLAlchemy (versions 0.8.0 
 and 0.8.1) in a query that joins class/tables that use joined inheritance.
 
 In the code below, I would expect the three queries to produce the same 
 output, namely [u'CCC'], but the first one gives a different (incorrect) 
 result, [u'BBB']. Is this a bug, or is the query malformed?
 In the second query, adding a seemingly gratuitous join with D fixes the 
 problem. And as the third query shows, replacing C by an aliased version 
 also fixes the problem. So whatever is going on seems rather subtle.
 
 Thanks,
 
 Seth
 
 
 
 from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
 from sqlalchemy.ext.declarative.api import declarative_base
 from sqlalchemy.orm import relationship, sessionmaker, scoped_session
 from sqlalchemy.orm.util import aliased
 
 Base = declarative_base(object)
 metadata = Base.metadata
 
 class A(Base):
 __tablename__ = 'A'
 idx = Column(Integer, primary_key=True)
 name = Column(String(20), nullable=False)
 type_idx = Column(Integer, nullable=False)
 __mapper_args__ = { 'polymorphic_on':type_idx }
 
 class B(A):
 __tablename__ = 'B'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 __mapper_args__ = { 'polymorphic_identity':2 }
 
 class C(A):
 __tablename__ = 'C'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 b_idx = Column(Integer, ForeignKey(str(B.__table__) + .idx), 
 nullable=False)
 b = relationship(B, foreign_keys=[b_idx])
 __mapper_args__ = { 'polymorphic_identity':3 }
 
 class D(A):
 __tablename__ = 'D'
 idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
 primary_key=True)
 __mapper_args__ = { 'polymorphic_identity':4 }
 
 class CtoD(Base):
 __tablename__ = 'CtoD'
 idx = Column(Integer, primary_key=True)
 c_idx = Column(Integer, ForeignKey(str(C.__table__) + .idx), 
 nullable=False)
 c = relationship(C, foreign_keys=[c_idx])
 d_idx = Column(Integer, ForeignKey(str(D.__table__) + .idx), 
 nullable=False)
 d = relationship(D, foreign_keys=[d_idx])
 
 if __name__ == '__main__':
 engine = create_engine('sqlite:///:memory:', echo=False)
 metadata.create_all(bind=engine)
 Session = scoped_session(sessionmaker(bind=engine))
 session = Session()
 
 # populate tables with a single entry in each table
 b = B(name='BBB')
 c = C(name='CCC', b=b)
 d = D(name='DDD')
 c_to_d = CtoD(c=c, d=d)
 session.add_all([b, c, d, c_to_d])
 session.commit()
 
 sql_query = session.query(B, C.name).join(C, B.idx == 
 C.b_idx).join(CtoD, C.idx == CtoD.c_idx).join(D, CtoD.d_idx == D.idx)
 print [name for (_, name) in sql_query.all()]  # [u'BBB']
 
 sql_query = session.query(B, C.name).join(C, B.idx == 
 C.b_idx).join(CtoD, C.idx == CtoD.c_idx)
 print [name for (_, name) in sql_query.all()]  # [u'CCC']
 
 aliased_C = aliased(C)
 sql_query = session.query(B, aliased_C.name).join(aliased_C, B.idx == 
 aliased_C.b_idx).join(CtoD, aliased_C.idx == CtoD.c_idx).join(D, CtoD.d_idx 
 == D.idx).join(D, CtoD.d_idx == D.idx)
 print [name for (_, name) in sql_query.all()]  # [u'CCC']
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.
  
  
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.
  
  

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Bug in query with multiple joins when using joined inheritance?

2013-06-14 Thread Seth P
I've encountered what I believe to be a bug in SQLAlchemy (versions 0.8.0 
and 0.8.1) in a query that joins class/tables that use joined inheritance.

In the code below, I would expect the three queries to produce the same 
output, namely [u'CCC'], but the first one gives a different (incorrect) 
result, [u'BBB']. Is this a bug, or is the query malformed?
In the second query, adding a seemingly gratuitous join with D fixes the 
problem. And as the third query shows, replacing C by an aliased version 
also fixes the problem. So whatever is going on seems rather subtle.

Thanks,

Seth



from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative.api import declarative_base
from sqlalchemy.orm import relationship, sessionmaker, scoped_session
from sqlalchemy.orm.util import aliased

Base = declarative_base(object)
metadata = Base.metadata

class A(Base):
__tablename__ = 'A'
idx = Column(Integer, primary_key=True)
name = Column(String(20), nullable=False)
type_idx = Column(Integer, nullable=False)
__mapper_args__ = { 'polymorphic_on':type_idx }

class B(A):
__tablename__ = 'B'
idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
primary_key=True)
__mapper_args__ = { 'polymorphic_identity':2 }

class C(A):
__tablename__ = 'C'
idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
primary_key=True)
b_idx = Column(Integer, ForeignKey(str(B.__table__) + .idx), 
nullable=False)
b = relationship(B, foreign_keys=[b_idx])
__mapper_args__ = { 'polymorphic_identity':3 }

class D(A):
__tablename__ = 'D'
idx = Column(Integer, ForeignKey(str(A.__table__) + .idx), 
primary_key=True)
__mapper_args__ = { 'polymorphic_identity':4 }

class CtoD(Base):
__tablename__ = 'CtoD'
idx = Column(Integer, primary_key=True)
c_idx = Column(Integer, ForeignKey(str(C.__table__) + .idx), 
nullable=False)
c = relationship(C, foreign_keys=[c_idx])
d_idx = Column(Integer, ForeignKey(str(D.__table__) + .idx), 
nullable=False)
d = relationship(D, foreign_keys=[d_idx])

if __name__ == '__main__':
engine = create_engine('sqlite:///:memory:', echo=False)
metadata.create_all(bind=engine)
Session = scoped_session(sessionmaker(bind=engine))
session = Session()

# populate tables with a single entry in each table
b = B(name='BBB')
c = C(name='CCC', b=b)
d = D(name='DDD')
c_to_d = CtoD(c=c, d=d)
session.add_all([b, c, d, c_to_d])
session.commit()

sql_query = session.query(B, C.name).join(C, B.idx == 
C.b_idx).join(CtoD, C.idx == CtoD.c_idx).join(D, CtoD.d_idx == D.idx)
print [name for (_, name) in sql_query.all()]  # [u'BBB']

sql_query = session.query(B, C.name).join(C, B.idx == 
C.b_idx).join(CtoD, C.idx == CtoD.c_idx)
print [name for (_, name) in sql_query.all()]  # [u'CCC']

aliased_C = aliased(C)
sql_query = session.query(B, aliased_C.name).join(aliased_C, B.idx == 
aliased_C.b_idx).join(CtoD, aliased_C.idx == CtoD.c_idx).join(D, CtoD.d_idx 
== D.idx).join(D, CtoD.d_idx == D.idx)
print [name for (_, name) in sql_query.all()]  # [u'CCC']

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] [BUG][PATCH] Function names not quoted when necessary

2013-06-07 Thread Michael Bayer
logged this as http://www.sqlalchemy.org/trac/attachment/ticket/2749


On Jun 6, 2013, at 10:27 PM, Ryan Kelly rpkell...@gmail.com wrote:

 Function names in SQL can contain pretty much anything, e.g.:
 
 =# create function A Bug?(integer) returns integer as $$ select $1; $$ 
 language sql;
 CREATE FUNCTION
 
 But when attempting to use the function from SQLAlchemy:
 
 from sqlalchemy.sql.expression import func
 bug = getattr(func, A Bug?)(1)
 session.query(bug).all()
 
 ProgrammingError: (ProgrammingError) syntax error at or near ?
 LINE 1: SELECT A Bug?(1) AS A Bug?_1
 'SELECT A Bug?(%(A Bug?_2)s) AS A Bug?_1' {'A Bug?_2': 1}
 
 -Ryan P. Kelly
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
 For more options, visit https://groups.google.com/groups/opt_out.
 
 
 quote_function_names.patch

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] [BUG][PATCH] Function names not quoted when necessary

2013-06-06 Thread Ryan Kelly
Function names in SQL can contain pretty much anything, e.g.:

=# create function A Bug?(integer) returns integer as $$ select $1; $$ 
language sql;
CREATE FUNCTION

But when attempting to use the function from SQLAlchemy:

from sqlalchemy.sql.expression import func
bug = getattr(func, A Bug?)(1)
session.query(bug).all()

ProgrammingError: (ProgrammingError) syntax error at or near ?
LINE 1: SELECT A Bug?(1) AS A Bug?_1
 'SELECT A Bug?(%(A Bug?_2)s) AS A Bug?_1' {'A Bug?_2': 1}

-Ryan P. Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py
index dd2a6e0..ada56c6 100644
--- a/lib/sqlalchemy/sql/compiler.py
+++ b/lib/sqlalchemy/sql/compiler.py
@@ -603,7 +603,10 @@ class SQLCompiler(engine.Compiled):
 if disp:
 return disp(func, **kwargs)
 else:
-name = FUNCTIONS.get(func.__class__, func.name + %(expr)s)
+name = FUNCTIONS.get(
+func.__class__,
+self.preparer.quote(func.name, None) + %(expr)s
+)
 return ..join(list(func.packagenames) + [name]) % \
 {'expr': self.function_argspec(func, **kwargs)}
 
diff --git a/test/sql/test_compiler.py b/test/sql/test_compiler.py
index 473a422..6ea4d2a 100644
--- a/test/sql/test_compiler.py
+++ b/test/sql/test_compiler.py
@@ -2481,6 +2481,49 @@ class SelectTest(fixtures.TestBase, AssertsCompiledSQL):
 and_, (a,), (b,)
 )
 
+def test_func(self):
+f1 = func.somefunc(1)
+self.assert_compile(
+select([f1]),
+SELECT somefunc(:somefunc_2) AS somefunc_1,
+)
+self.assert_compile(
+select([f1.label(f1)]),
+SELECT somefunc(:somefunc_1) AS f1,
+)
+
+f2 = func.somefunc(table1.c.name)
+self.assert_compile(
+select([f2]),
+SELECT somefunc(mytable.name) AS somefunc_1 FROM mytable,
+)
+self.assert_compile(
+select([f2.label(f2)]),
+SELECT somefunc(mytable.name) AS f2 FROM mytable,
+)
+
+f3 = getattr(func, Needs Quotes?)(table1.c.myid)
+self.assert_compile(
+select([f3]),
+'SELECT Needs Quotes?(mytable.myid) AS Needs Quotes?_1 FROM '
+'mytable'
+)
+self.assert_compile(
+select([f3.label(f3)]),
+'SELECT Needs Quotes?(mytable.myid) AS f3 FROM mytable',
+)
+
+f4 = getattr(func, query from pg_stat_activity; --)()
+self.assert_compile(
+select([f4]),
+'SELECT query from pg_stat_activity; --() AS query from '
+'pg_stat_activity; --_1',
+)
+self.assert_compile(
+select([f4.label(f4)]),
+'SELECT query from pg_stat_activity; --(mytable.myid) AS f4'
+)
+
 
 class KwargPropagationTest(fixtures.TestBase):
 


[sqlalchemy] bug in dogpile advanced example?

2013-05-22 Thread avdd
two through twelve actually shows 25 .. 40


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] [BUG] [PATCH] Calling yield_per followed by execution_options results in AttributeError

2013-02-07 Thread Ryan Kelly
Calling yield_per on a query followed by execution_options results in
the following error:

Traceback (most recent call last):
  File /tmp/execution_options.py, line 18, in module
query = query.execution_options(stream_results=True)
  File string, line 1, in lambda
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
50, in generate
fn(self, *args[1:], **kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
1040, in execution_options
self._execution_options = self._execution_options.union(kwargs)
AttributeError: 'dict' object has no attribute 'union'

Attached is a patch with a test case.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


diff -r 53b53ad288ad lib/sqlalchemy/orm/query.py
--- a/lib/sqlalchemy/orm/query.py	Thu Feb 07 20:29:47 2013 -0500
+++ b/lib/sqlalchemy/orm/query.py	Thu Feb 07 23:05:45 2013 -0500
@@ -712,8 +712,7 @@
 
 
 self._yield_per = count
-self._execution_options = self._execution_options.copy()
-self._execution_options['stream_results'] = True
+self.execution_options(stream_results=True)
 
 def get(self, ident):
 Return an instance based on the given primary key identifier,
diff -r 53b53ad288ad test/orm/test_query.py
--- a/test/orm/test_query.py	Thu Feb 07 20:29:47 2013 -0500
+++ b/test/orm/test_query.py	Thu Feb 07 23:05:45 2013 -0500
@@ -1784,6 +1784,13 @@
 except StopIteration:
 pass
 
+def test_yield_per_and_execution_options(self):
+User = self.classes.User
+
+sess = create_session()
+q = sess.query(User).yield_per(1)
+q = q.execution_options(stream_results=True)
+
 class HintsTest(QueryTest, AssertsCompiledSQL):
 def test_hints(self):
 User = self.classes.User


Re: [sqlalchemy] [BUG] [PATCH] Calling yield_per followed by execution_options results in AttributeError

2013-02-07 Thread Michael Bayer
that's an ImmutableDict so we can just call union() on it as we're supposed to 
bethis is http://www.sqlalchemy.org/trac/ticket/2661 and it's committed in 
reec3f59080ac, thanks !



On Feb 7, 2013, at 11:09 PM, Ryan Kelly wrote:

 Calling yield_per on a query followed by execution_options results in
 the following error:
 
 Traceback (most recent call last):
  File /tmp/execution_options.py, line 18, in module
query = query.execution_options(stream_results=True)
  File string, line 1, in lambda
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
 50, in generate
fn(self, *args[1:], **kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
 1040, in execution_options
self._execution_options = self._execution_options.union(kwargs)
 AttributeError: 'dict' object has no attribute 'union'
 
 Attached is a patch with a test case.
 
 -Ryan Kelly
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
 For more options, visit https://groups.google.com/groups/opt_out.
 
 
 yield_per.patch

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-06 Thread Michael Bayer

On Dec 5, 2012, at 6:50 PM, Derek Harland wrote:

 
 I wonder if a solution here is to somehow allow the schema argument to also 
 be given as a tuple.  eg 
   
   schema=x.y  would generate a DDL path as x.y
   schema=[x.y]  would generate a DDL path as [x.y]
   schema=[a.b, x.y]  would generate a DDL path as [a.b].[x.y]
 
 Thus eg MSIdentifierPreparer.quote_schema could be something like:
 
def quote_schema(self, schema, force=True):
Prepare a quoted table and schema name.
if not isinstance(schema, (list, tuple)):
schema = schema.split('.')
result = '.'.join([self.quote(x, force) for x in schema])
return result
 
 then _owner_plus_db could special case on whether schema is a list.

schema is a pretty major argument that I'd rather not dilute its datatype, it 
seems more intuitive anyway that we'd just allow SQL-server style quoting to be 
significant:

C, schema=A.B

C, schema=[A.B]

C, schema=[A.B].[C.D]

that way...it works exactly like SQL Server does.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-05 Thread Michael Bayer

On Dec 4, 2012, at 6:27 PM, Derek Harland wrote:

 
 Yes
 
 In that case, how does SQL server make the distinction?
 
 If things have an embedded . then SQL server would ideally make the 
 distinction based on you quoting the database/schema names.  This could be 
 done in the Transact-SQL manner:
 
   [database].[example.schema.with.dots].[table]
 
 or via ANSI sql quoting.
 
   database.example.schema.with.dots.table
 
 The MSSQL dialect currently quotes identifiers using the T-SQL standard
 
 If the DB has:
 
 database A - schema B - table C
 database DEFAULT - schema A.B - table C
 
 then what does SQL server consider A.B.C to mean ?
 
 By default, unless any identifier is quoted then A.B.C will be read to mean 
 database.schema.table.  ie database A - schema B - table C.
 
 To path to the latter you'd need to use [A.B].C or A.B.C.

So, the quotes are *required* in order for SQL Server to see that A.B is the 
schemaname, right ?   Meaning, database.schema is the default, schema 
with dots is the exception case.   

In that case, a schema name with a dot in it is not supported at all right now 
- unless you embedded quotes into it (doesn't SQL Server use [] for quotes 
also?)   I'm not sure how this is a regression - are you saying that a schema 
name with a dot in it *does* work in 0.7 and is interpreted as just schema name?

The key here is that SQL server I am assuming is *not* doing logic like, check 
for DBname A, if not present then assume schema name is 'A.B'.

 
 Personally, if I call get_view_names(..., schema=A.B) ... I'd expect it to 
 be looking in schema A.B in the current database.  If I wanted to look in 
 another database I'd expect to be calling something like get_view_names(..., 
 schema=B, dbname=A) probably.

The schema parameter has for a while now supported the idea of allowing 
dotted names to be present, and while it is called schema, it really means, 
dotted qualifier for the Table.   For example when you use it with SQLite, 
that's not a schema, its a different database file setup using ATTACH.

So right now, we have users using Table like this:

Table(mytable, metadata, schema=dbname.schemaname)

and that works in 0.7 as well, it's just reflection that doesn't work there.   
And since we know that A.B.C without extra quotes means 
dbname.schema.table, the interpretation of schema as dbname.schemaname is 
not new at all in 0.8.With quotes required, the feature of supporting a 
schema name with a dot in it is not supported at all in 0.7 either.

Support for schema names with dots in them seems like it would entirely be a 
feature add. Let's take it over to 
http://www.sqlalchemy.org/trac/ticket/2626.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-05 Thread Derek Harland

On 6/12/2012, at 5:26 AM, Michael Bayer wrote:

 
 On Dec 4, 2012, at 6:27 PM, Derek Harland wrote:
 
 
 Yes
 
 In that case, how does SQL server make the distinction?
 
 If things have an embedded . then SQL server would ideally make the 
 distinction based on you quoting the database/schema names.  This could be 
 done in the Transact-SQL manner:
 
  [database].[example.schema.with.dots].[table]
 
 or via ANSI sql quoting.
 
  database.example.schema.with.dots.table
 
 The MSSQL dialect currently quotes identifiers using the T-SQL standard
 
 If the DB has:
 
 database A - schema B - table C
 database DEFAULT - schema A.B - table C
 
 then what does SQL server consider A.B.C to mean ?
 
 By default, unless any identifier is quoted then A.B.C will be read to mean 
 database.schema.table.  ie database A - schema B - table C.
 
 To path to the latter you'd need to use [A.B].C or A.B.C.
 
 So, the quotes are *required* in order for SQL Server to see that A.B is 
 the schemaname, right ?   Meaning, database.schema is the default, 
 schema with dots is the exception case.   

That's how it appears from my testing.

 In that case, a schema name with a dot in it is not supported at all right 
 now - unless you embedded quotes into it (doesn't SQL Server use [] for 
 quotes also?)   I'm not sure how this is a regression - are you saying that a 
 schema name with a dot in it *does* work in 0.7 and is interpreted as just 
 schema name?

Hmm ... I imagine it absolutely *doesn't* work in most of the api, as you note 
below.  I struck a problem in the reflection part of the api ... where I was 
generating a model from an existing database to use with alembic and doing 
something like:

# get_schema_names returns the schemas in only the database we are 
connected to in both 0.7 and 0.8
   inspector = reflection.Inspector.from_engine(engine) 
for schema in inspector.get_schema_names():
# this loop will work in 0.7 and fail in 0.8 
# because in 0.8 it will split a schema called x.y into 
database x, schema y
# and look for all views in [x].[y]
# Whereas in 0.7 it looks in [x.y]
for name in inspector.get_view_names(schema)
inspector.get_view_definition(name, schema)

 The key here is that SQL server I am assuming is *not* doing logic like, 
 check for DBname A, if not present then assume schema name is 'A.B'.

I've tested that ... it definitely doesn't.  

 
 
 Personally, if I call get_view_names(..., schema=A.B) ... I'd expect it to 
 be looking in schema A.B in the current database.  If I wanted to look in 
 another database I'd expect to be calling something like get_view_names(..., 
 schema=B, dbname=A) probably.
 
 The schema parameter has for a while now supported the idea of allowing 
 dotted names to be present, and while it is called schema, it really means, 
 dotted qualifier for the Table.   For example when you use it with SQLite, 
 that's not a schema, its a different database file setup using ATTACH.
 
 So right now, we have users using Table like this:
 
 Table(mytable, metadata, schema=dbname.schemaname)
 
 and that works in 0.7 as well, it's just reflection that doesn't work there.  
  And since we know that A.B.C without extra quotes means 
 dbname.schema.table, the interpretation of schema as dbname.schemaname 
 is not new at all in 0.8.With quotes required, the feature of supporting 
 a schema name with a dot in it is not supported at all in 0.7 either.
 
 Support for schema names with dots in them seems like it would entirely be a 
 feature add. Let's take it over to 
 http://www.sqlalchemy.org/trac/ticket/2626.

I wonder if a solution here is to somehow allow the schema argument to also 
be given as a tuple.  eg 

schema=x.y  would generate a DDL path as x.y
schema=[x.y]  would generate a DDL path as [x.y]
schema=[a.b, x.y]  would generate a DDL path as [a.b].[x.y]

Thus eg MSIdentifierPreparer.quote_schema could be something like:

def quote_schema(self, schema, force=True):
Prepare a quoted table and schema name.
if not isinstance(schema, (list, tuple)):
schema = schema.split('.')
result = '.'.join([self.quote(x, force) for x in schema])
return result

then _owner_plus_db could special case on whether schema is a list.

derek. 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-04 Thread Michael Bayer

On Dec 3, 2012, at 10:20 PM, Derek Harland wrote:

 The MSSQL dialect in 0.8.x seems to have had many of the reflection methods 
 changed from something like:
 
@reflection.cache
def get_view_names(self, connection, schema=None, **kw):
 
 to 
 
@reflection.cache
@_db_plus_owner_listing
def get_view_names(self, connection, dbname, owner, schema, **kw):
 
 where the decorator _db_plus_owner_listing is defined as:
 
def _db_plus_owner_listing(fn):
def wrap(dialect, connection, schema=None, **kw):
dbname, owner = _owner_plus_db(dialect, schema)
return _switch_db(dbname, connection, fn, dialect, connection,
dbname, owner, schema, **kw)
return update_wrapper(wrap, fn)
 
 It seems that they are being extended so people can use schema in this 
 fashion: database.schema
 
 This won't work as it stands.  Schemas are themselves allowed to contain a 
 ., thus running any reflection on a schema firstname.lastname crashes (it 
 will try to switch to a database called firstname and inspect the schema 
 lastname)

my understanding is that users are able to use this dbname.schema.table 
syntax directly in SQL server, is that correct?

In that case, how does SQL server make the distinction?

If the DB has:

database A - schema B - table C
database DEFAULT - schema A.B - table C

then what does SQL server consider A.B.C to mean ?










 
 derek.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-04 Thread Derek Harland
On 5/12/2012, at 4:30 AM, Michael Bayer wrote:

 
 On Dec 3, 2012, at 10:20 PM, Derek Harland wrote:
 
 The MSSQL dialect in 0.8.x seems to have had many of the reflection methods 
 changed from something like:
 
   @reflection.cache
   def get_view_names(self, connection, schema=None, **kw):
 
 to 
 
   @reflection.cache
   @_db_plus_owner_listing
   def get_view_names(self, connection, dbname, owner, schema, **kw):
 
 where the decorator _db_plus_owner_listing is defined as:
 
   def _db_plus_owner_listing(fn):
   def wrap(dialect, connection, schema=None, **kw):
   dbname, owner = _owner_plus_db(dialect, schema)
   return _switch_db(dbname, connection, fn, dialect, connection,
   dbname, owner, schema, **kw)
   return update_wrapper(wrap, fn)
 
 It seems that they are being extended so people can use schema in this 
 fashion: database.schema
 
 This won't work as it stands.  Schemas are themselves allowed to contain a 
 ., thus running any reflection on a schema firstname.lastname crashes 
 (it will try to switch to a database called firstname and inspect the 
 schema lastname)
 
 my understanding is that users are able to use this dbname.schema.table 
 syntax directly in SQL server, is that correct?

Yes

 In that case, how does SQL server make the distinction?

If things have an embedded . then SQL server would ideally make the 
distinction based on you quoting the database/schema names.  This could be done 
in the Transact-SQL manner:

   [database].[example.schema.with.dots].[table]

or via ANSI sql quoting.

   database.example.schema.with.dots.table

The MSSQL dialect currently quotes identifiers using the T-SQL standard

 If the DB has:
 
 database A - schema B - table C
 database DEFAULT - schema A.B - table C
 
 then what does SQL server consider A.B.C to mean ?

By default, unless any identifier is quoted then A.B.C will be read to mean 
database.schema.table.  ie database A - schema B - table C.

To path to the latter you'd need to use [A.B].C or A.B.C.

Personally, if I call get_view_names(..., schema=A.B) ... I'd expect it to be 
looking in schema A.B in the current database.  If I wanted to look in another 
database I'd expect to be calling something like get_view_names(..., 
schema=B, dbname=A) probably.

derek.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Bug Report: A regression in the MSSQL Dialect in 0.8.x

2012-12-03 Thread Derek Harland
The MSSQL dialect in 0.8.x seems to have had many of the reflection methods 
changed from something like:

@reflection.cache
def get_view_names(self, connection, schema=None, **kw):

to 

@reflection.cache
@_db_plus_owner_listing
def get_view_names(self, connection, dbname, owner, schema, **kw):

where the decorator _db_plus_owner_listing is defined as:

def _db_plus_owner_listing(fn):
def wrap(dialect, connection, schema=None, **kw):
dbname, owner = _owner_plus_db(dialect, schema)
return _switch_db(dbname, connection, fn, dialect, connection,
dbname, owner, schema, **kw)
return update_wrapper(wrap, fn)

It seems that they are being extended so people can use schema in this fashion: 
database.schema

This won't work as it stands.  Schemas are themselves allowed to contain a ., 
thus running any reflection on a schema firstname.lastname crashes (it will 
try to switch to a database called firstname and inspect the schema 
lastname)

derek.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Bug: Inefficient query being generated by relationship, mixin class combo

2012-06-18 Thread Fayaz Yusuf Khan
The attached script generates an inefficient query at the end:

SELECT `People`.friday_id AS `People_friday_id`, `People`.parent_id AS 
`People_parent_id`, `People`.user_hash AS `People_user_hash` 
FROM `People` 
WHERE `People`.friday_id = %s AND `People`.user_hash = `People`.user_hash

Please note the user_hash = user_hash clause being generated there. This 
query takes a heavy toll on the datastore and has become a huge bottleneck 
in our application.

This is the core ORM configuration:

class User(Base):

hash = Column(String(64), primary_key=True)


class UserMixin(object):

@declared_attr
def user_hash(cls):
return Column(String(64), ForeignKey('User.hash'), primary_key=True)


class People(UserMixin, Base):

friday_id = Column(BigInteger, primary_key=True, nullable=False,
   autoincrement=False)

# This line mysteriously fixes the query
#user_hash = Column(String(64), ForeignKey('User.hash'),
#   primary_key=True)

parent_id = Column(BigInteger)

@declared_attr
def parent(cls):
return relationship(
'People', remote_side=[cls.friday_id, cls.user_hash],
post_update=True)

__table_args__ = (
ForeignKeyConstraint(
['parent_id', 'user_hash'],
['People.friday_id', 'People.user_hash'],
ondelete='CASCADE'),)

As shown in the comments, if I remove the UserMixin, the query becomes sane.

-- 
Fayaz Yusuf Khan
Cloud architect, Dexetra SS, India
fayaz.yusuf.khan_AT_gmail_DOT_com, fayaz_AT_dexetra_DOT_com
+91-9746-830-823

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

from sqlalchemy import *

from sqlalchemy.orm import sessionmaker, backref, relationship
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base, declared_attr


class TableNameMixin(object):

@declared_attr
def __tablename__(cls):
return cls.__name__


Base = declarative_base(cls=TableNameMixin)


class User(Base):

hash = Column(String(64), primary_key=True)


class UserMixin(object):

@declared_attr
def user_hash(cls):
return Column(String(64), ForeignKey('User.hash'), primary_key=True)


class People(UserMixin, Base):

friday_id = Column(BigInteger, primary_key=True, nullable=False,
   autoincrement=False)
# This line mysteriously fixes the query
#user_hash = Column(String(64), ForeignKey('User.hash'), primary_key=True)
parent_id = Column(BigInteger)

@declared_attr
def parent(cls):
return relationship(
'People', remote_side=[cls.friday_id, cls.user_hash],
post_update=True)

__table_args__ = (
ForeignKeyConstraint(
['parent_id', 'user_hash'],
['People.friday_id', 'People.user_hash'],
ondelete='CASCADE'),)


engine = create_engine('mysql://root@localhost', echo=True)
engine.execute('CREATE DATABASE test')
engine.execute('USE test')
Session = sessionmaker()
Session.configure(bind=engine)
Base.metadata.bind = engine
Base.metadata.create_all()
session = Session()

hash_string = '0' * 64
session.add(User(hash=hash_string))
session.flush()
session.add(People(user_hash='0'*64, friday_id=1, parent_id=1))
session.add(People(user_hash='0'*64, friday_id=2, parent_id=1))
people = session.query(People).filter_by(friday_id=2).one()
print After query
print people.parent
engine.execute('DROP DATABASE test')



Re: [sqlalchemy] Bug: Inefficient query being generated by relationship, mixin class combo

2012-06-18 Thread Michael Bayer
When you say this:

@declared_attr
def parent(cls):
...
  remote_side=[cls.friday_id, cls.user_hash],

Delcarative calls parent within the same configuration phase that it calls 
user_hash for the first time to generate a new Column.So when you call 
upon cls.user_hash, you're calling the UserMixin.user_hash classmethod, getting 
at a newly generated Column, which is then stuffed into the remote_side 
argument.   Declarative calls cls.user_hash itself to get a Column which is the 
one it actually applies to the class.  So there are two user_hash columns 
floating around.  If you stick a print statement into the user_hash 
@declared_attr you can see this.

Just place that remote_side into quotes, so that it is evaluated at mapper 
configuration time, long after People.user_hash has been established as a 
Column on People:

@declared_attr
def parent(cls):
return relationship(
'People', 
remote_side=[People.friday_id, People.user_hash],
post_update=True)


Also note that the pattern at play here is the composite self referential key 
with column pointing to itself.  This use case is not fully functional until 
version 0.8, in 0.7 and earlier you'll see operations like join() and 
joinedload() failing to alias the user_hash=user_hash clause correctly.
See 
http://www.sqlalchemy.org/trac/wiki/08Migration#Rewrittenrelationshipmechanics .

On Jun 18, 2012, at 3:40 AM, Fayaz Yusuf Khan wrote:

 The attached script generates an inefficient query at the end:
 
 SELECT `People`.friday_id AS `People_friday_id`, `People`.parent_id AS 
 `People_parent_id`, `People`.user_hash AS `People_user_hash` 
 FROM `People` 
 WHERE `People`.friday_id = %s AND `People`.user_hash = `People`.user_hash
 
 Please note the user_hash = user_hash clause being generated there. This 
 query takes a heavy toll on the datastore and has become a huge bottleneck 
 in our application.
 
 This is the core ORM configuration:
 
 class User(Base):
 
hash = Column(String(64), primary_key=True)
 
 
 class UserMixin(object):
 
@declared_attr
def user_hash(cls):
return Column(String(64), ForeignKey('User.hash'), primary_key=True)
 
 
 class People(UserMixin, Base):
 
friday_id = Column(BigInteger, primary_key=True, nullable=False,
   autoincrement=False)
 
# This line mysteriously fixes the query
#user_hash = Column(String(64), ForeignKey('User.hash'),
#   primary_key=True)
 
parent_id = Column(BigInteger)
 
@declared_attr
def parent(cls):
return relationship(
'People', remote_side=[cls.friday_id, cls.user_hash],
post_update=True)
 
__table_args__ = (
ForeignKeyConstraint(
['parent_id', 'user_hash'],
['People.friday_id', 'People.user_hash'],
ondelete='CASCADE'),)
 
 As shown in the comments, if I remove the UserMixin, the query becomes sane.
 
 -- 
 Fayaz Yusuf Khan
 Cloud architect, Dexetra SS, India
 fayaz.yusuf.khan_AT_gmail_DOT_com, fayaz_AT_dexetra_DOT_com
 +91-9746-830-823
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 test.py

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Bug in joinedload('*') ?!

2012-05-08 Thread alex
Hi all,

I want to use session.query(...).options(joinedload('*')) to load all
related objects eagerly.

It seems not to work if inheritance is involved.
This is a complete working example using SQLAlchemy 0.7.7 and Elixir
0.8.0dev:
-
from elixir import *
from sqlalchemy import create_engine
from sqlalchemy.orm import joinedload

engine = create_engine('sqlite:///:memory:', echo = False)
metadata.bind = engine

class PersistentObject(Entity):
pass

class Movie(PersistentObject):
title = Field(Unicode(30))
director = ManyToOne('Director', inverse = movies)

class Director(PersistentObject):
name = Field(Unicode(60))
movies = OneToMany('Movie', inverse = director)

setup_all(True)

rscott = Director(name=uRidley Scott)
alien = Movie(title=uAlien)
brunner = Movie(title=uBlade Runner)
rscott.movies.append(brunner)
rscott.movies.append(alien)
session.commit()

print without joinedload
session.close_all()
d = session.query(Director).first()
for i in session: print i
print with joinedload
session.close_all()
d = session.query(Director).options(joinedload('*')).first()
for i in session: print i

--

The last line should also print the movies, which does not happen.
When you set Entity as the baseclass of Movie and Director it works.
Is this a bug or is there a reason I don't see?

Thx in advance,
alex

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug in joinedload('*') ?!

2012-05-08 Thread Michael Bayer
When you eager load from a class to itself, this is a self-referential eager 
load.A generic lazy=joined  needs to know the depth at which you'd like 
to go when this occurs, otherwise it will stop as soon as it sees itself in the 
path of classes to be joined.The behavior of joinedload('*') is the same as 
setting lazy=joined on the relationship - it's not the same as saying 
joinedload(Director.movies), which it would take to mean as an explicit, single 
joined load.

In this case, even though this is from a base class to itself, the subclass is 
different, so perhaps the check for self-referential joining needs to be tuned. 
 that is ticket #2481.Here, if you must use joinedload('*') and not regular 
joinedload(Director.movies) (I'd never use '*' with joinedload, add a new 
relationship and your app grinds to a halt), then add a join depth to the 
relationship (sorry, this is relationship(), I don't know how to set this with 
Elixir):

movies = relationship(Movie, foreign_keys=Movie.director_id, join_depth=2)





On May 8, 2012, at 8:58 AM, alex wrote:

 Hi all,
 
 I want to use session.query(...).options(joinedload('*')) to load all
 related objects eagerly.
 
 It seems not to work if inheritance is involved.
 This is a complete working example using SQLAlchemy 0.7.7 and Elixir
 0.8.0dev:
 -
 from elixir import *
 from sqlalchemy import create_engine
 from sqlalchemy.orm import joinedload
 
 engine = create_engine('sqlite:///:memory:', echo = False)
 metadata.bind = engine
 
 class PersistentObject(Entity):
pass
 
 class Movie(PersistentObject):
title = Field(Unicode(30))
director = ManyToOne('Director', inverse = movies)
 
 class Director(PersistentObject):
name = Field(Unicode(60))
movies = OneToMany('Movie', inverse = director)
 
 setup_all(True)
 
 rscott = Director(name=uRidley Scott)
 alien = Movie(title=uAlien)
 brunner = Movie(title=uBlade Runner)
 rscott.movies.append(brunner)
 rscott.movies.append(alien)
 session.commit()
 
 print without joinedload
 session.close_all()
 d = session.query(Director).first()
 for i in session: print i
 print with joinedload
 session.close_all()
 d = session.query(Director).options(joinedload('*')).first()
 for i in session: print i
 
 --
 
 The last line should also print the movies, which does not happen.
 When you set Entity as the baseclass of Movie and Director it works.
 Is this a bug or is there a reason I don't see?
 
 Thx in advance,
 alex
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Bug? Cannot call super().__init__ in class which extends declarative_base

2012-03-08 Thread Adam Parkin
When augmenting the 
basehttp://docs.sqlalchemy.org/en/latest/orm/extensions/declarative.html?highlight=declarative_base#augmenting-the-baseyou
 create a class, and then tell declarative_base to inherit from the 
class you've created.  Ex:

class MyBase(object):
def __init__(self):
# code here

Base = declarative_base(cls=MyBase)

class SomeTable(Base):
__tablename__ = mytable
   # rest of table defn
def __init__(self):
super(SomeTable, self).__init__()  # call parent class init

But now when a SomeTable instance is created, the __init__ of MyBase will 
*not* be called.  If the __init__ does something important, this is a 
problem.   One alternative is to not use super(), instead calling the 
MyBase.__init__ method directly from SomeTable, but this isn't really a 
solution if the hierarchy above MyBase is complex (some of the classes it 
inherits from *could* use super(), in which case those would not work 
correctly).

The reason for all of this is the mechanics of super() require that 
*all*classes in the inheritance hierarchy call super() correctly (
reference1 http://fuhm.org/super-harmful/, 
reference2http://rhettinger.wordpress.com/2011/05/26/super-considered-super), 
and the class created by declarative_base() does not do this.  

Is this a bug, or intentional?  A workaround is to reassign the constructor 
of the class created by declarative_base() to one which calls super(), but 
it seems to me the default constructor created by _declarative_constructor 
could be modified to call super().  The workaround I'm now doing is 
outlined below.

def cooperative_constructor(cls):
initmeth = cls.__init__
def _concrete_init(self, **kwargs):
# enable cooperative inheritance
super(cls, self).__init__() 
# pass through to original constructor
initmeth(self, **kwargs)
return _concrete_init

class MyBase(object):
# body here

Base = declarative_base(cls=MyBase)
Base.__init__ = cooperative_constructor(Base)

class SomeTable(Base):
__tablename__ = sometable
def __init__(self, **kwargs):
super(SomeTable, self).__init__(**kwargs)
# rest of code...

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/67R7seDnH90J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug? Cannot call super().__init__ in class which extends declarative_base

2012-03-08 Thread Michael Bayer

On Mar 8, 2012, at 1:15 PM, Adam Parkin wrote:

 When augmenting the base you create a class, and then tell declarative_base 
 to inherit from the class you've created.  Ex:
 
 class MyBase(object):
 def __init__(self):
 # code here
 
 Base = declarative_base(cls=MyBase)
 
 class SomeTable(Base):
 __tablename__ = mytable
# rest of table defn
 def __init__(self):
 super(SomeTable, self).__init__()  # call parent class init
 
 But now when a SomeTable instance is created, the __init__ of MyBase will 
 *not* be called.  If the __init__ does something important, this is a 
 problem.   One alternative is to not use super(), instead calling the 
 MyBase.__init__ method directly from SomeTable, but this isn't really a 
 solution if the hierarchy above MyBase is complex (some of the classes it 
 inherits from could use super(), in which case those would not work 
 correctly).


for this case you'd build your declarative base like this:

Base = declarative_base(cls=MyBase, constructor=MyBase.__init__)

this overrides the constructor normally applied to the declarative base class.  
 In this case you'd also need to roll the behavior of **kwargs to suit your 
tastes, if that means implementing declarative's default behavior of assigning 
k=v to the object.  Or you could call 
sqlalchemy.ext.declarative._declarative_constructor which is actually the 
implementation of this.


 
 Is this a bug, or intentional?  A workaround is to reassign the constructor 
 of the class created by declarative_base() to one which calls super(), but it 
 seems to me the default constructor created by _declarative_constructor could 
 be modified to call super(). 

The way I see it is that it could go either way.The way it is now is 
simpler and also the super() call is not necessary in the vast majority of 
cases.   For that reason I'm sort of against calling the super() by default, it 
seems a little more defensive than I'd like to be at this level.   The patch 
below implements sort of how this would have to look, it's more complicated - 
we have to get the new class, tack on the __init__ after the fact so that the 
new class we've created is available for the super() call... I'm sort of at the 
level of *shrugs* here on this one.

diff -r 478ec58a7ea4 lib/sqlalchemy/ext/declarative.py
--- a/lib/sqlalchemy/ext/declarative.py Thu Mar 08 00:51:49 2012 -0800
+++ b/lib/sqlalchemy/ext/declarative.py Thu Mar 08 14:17:11 2012 -0800
@@ -1542,27 +1542,30 @@
 def __get__(desc, self, cls):
 return desc.fget(cls)
 
-def _declarative_constructor(self, **kwargs):
-A simple constructor that allows initialization from kwargs.
+def _get_const(meta):
+def _declarative_constructor(self, **kwargs):
+A simple constructor that allows initialization from kwargs.
 
-Sets attributes on the constructed instance using the names and
-values in ``kwargs``.
+Sets attributes on the constructed instance using the names and
+values in ``kwargs``.
 
-Only keys that are present as
-attributes of the instance's class are allowed. These could be,
-for example, any mapped columns or relationships.
-
-cls_ = type(self)
-for k in kwargs:
-if not hasattr(cls_, k):
-raise TypeError(
-%r is an invalid keyword argument for %s %
-(k, cls_.__name__))
-setattr(self, k, kwargs[k])
-_declarative_constructor.__name__ = '__init__'
+Only keys that are present as
+attributes of the instance's class are allowed. These could be,
+for example, any mapped columns or relationships.
+
+cls_ = type(self)
+for k in kwargs:
+if not hasattr(cls_, k):
+raise TypeError(
+%r is an invalid keyword argument for %s %
+(k, cls_.__name__))
+setattr(self, k, kwargs[k])
+super(meta, self).__init__()
+_declarative_constructor.__name__ = '__init__'
+return _declarative_constructor
 
 def declarative_base(bind=None, metadata=None, mapper=None, cls=object,
- name='Base', constructor=_declarative_constructor,
+ name='Base', 
  class_registry=None,
  metaclass=DeclarativeMeta):
 Construct a base class for declarative class definitions.
@@ -1631,12 +1634,12 @@
 class_dict = dict(_decl_class_registry=class_registry,
   metadata=lcl_metadata)
 
-if constructor:
-class_dict['__init__'] = constructor
 if mapper:
 class_dict['__mapper_cls__'] = mapper
 
-return metaclass(name, bases, class_dict)
+m = metaclass(name, bases, class_dict)
+m.__init__ = _get_const(m)
+return m
 
 def _undefer_column_name(key, column):
 if column.key is None:




 The workaround I'm now doing is outlined below.
 
 def cooperative_constructor(cls):
 initmeth = cls.__init__
   

Re: [sqlalchemy] bug in DDL event with %

2012-01-18 Thread Michael Bayer
you have to escape those out with %%.


On Jan 18, 2012, at 3:06 AM, lestat wrote:

 Hi!
 
 
 If I create DDL like this with %:
 
 event.listen(db.metadata, 'after_create', DDL(

CREATE OR REPLACE FUNCTION test_func() RETURNS void AS $$
DECLARE
max_user_id INTEGER;
BEGIN
SELECT INTO max_user_id MAX(id) FROM user;
RAISE INFO 'max_user_id: (%)', max_user_id;
END;
$$ LANGUAGE plpgsql;
 

 ))
 
 
 and in python shell call db.create_all() for create tables I got
 exception:
 
 
 
  1 db.create_all()
 
 /usr/lib/python2.7/site-packages/flaskext/sqlalchemy.pyc in
 create_all(self, bind, app)
793Parameters were added
794 
 -- 795 self._execute_for_all_tables(app, bind, 'create_all')
796
797 def drop_all(self, bind='__all__', app=None):
 
 /usr/lib/python2.7/site-packages/flaskext/sqlalchemy.pyc in
 _execute_for_all_tables(self, app, bind, operation)
785 tables = self.get_tables_for_bind(bind)
786 op = getattr(self.Model.metadata, operation)
 -- 787 op(bind=self.get_engine(app, bind), tables=tables)
788
789 def create_all(self, bind='__all__', app=None):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/schema.pyc in
 create_all(self, bind, tables, checkfirst)
   2513 self,
   2514 checkfirst=checkfirst,
 - 2515 tables=tables)
   2516
   2517 def drop_all(self, bind=None, tables=None,
 checkfirst=True):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.pyc in
 _run_visitor(self, visitorcallable, element, connection, **kwargs)
   2232 conn = connection
   2233 try:
 - 2234 conn._run_visitor(visitorcallable, element,
 **kwargs)
   2235 finally:
   2236 if connection is None:
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.pyc in
 _run_visitor(self, visitorcallable, element, **kwargs)
   1902 def _run_visitor(self, visitorcallable, element,
 **kwargs):
   1903 visitorcallable(self.dialect, self,
 - 1904 **kwargs).traverse_single(element)
   1905
   1906
 
 /usr/lib/python2.7/site-packages/sqlalchemy/sql/visitors.pyc in
 traverse_single(self, obj, **kw)
 84 meth = getattr(v, visit_%s % obj.__visit_name__,
 None)
 85 if meth:
 --- 86 return meth(obj, **kw)
 87
 88 def iterate(self, obj):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/ddl.pyc in
 visit_metadata(self, metadata)
 70 tables=collection,
 71
 checkfirst=self.checkfirst,
 --- 72 _ddl_runner=self)
 73
 74 def visit_table(self, table, create_ok=False):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/event.pyc in
 __call__(self, *args, **kw)
272 fn(*args, **kw)
273 for fn in self.listeners:
 -- 274 fn(*args, **kw)
275
276 # I'm not entirely thrilled about the overhead here,
 
 
 /usr/lib/python2.7/site-packages/sqlalchemy/schema.pyc in
 __call__(self, target, bind, **kw)
   2823
   2824 if self._should_execute(target, bind, **kw):
 - 2825 return bind.execute(self.against(target))
   2826
   2827 def _check_ddl_on(self, on):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.pyc in
 execute(self, object, *multiparams, **params)
   1403 object,
   1404 multiparams,
 - 1405 params)
   1406 else:
   1407 raise exc.InvalidRequestError(
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.pyc in
 _execute_ddl(self, ddl, multiparams, params)
   1488 dialect = self.dialect
   1489
 - 1490 compiled = ddl.compile(dialect=dialect)
   1491 ret = self._execute_context(
   1492 dialect,
 
 /usr/lib/python2.7/site-packages/sqlalchemy/sql/expression.pyc in
 compile(self, bind, dialect, **kw)
   1720 else:
   1721 dialect = default.DefaultDialect()
 - 1722 return self._compiler(dialect, bind=bind, **kw)
   1723
   1724 def _compiler(self, dialect, **kw):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/schema.pyc in
 _compiler(self, dialect, **kw)
   2850 Dialect.
   2851
 - 2852 return dialect.ddl_compiler(dialect, self, **kw)
   2853
   2854 class DDL(DDLElement):
 
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.pyc in
 __init__(self, dialect, statement, bind)
697 self.statement = statement
698 self.can_execute = statement.supports_execution
 -- 699 self.string = self.process(self.statement)
700
701 @util.deprecated(0.7, :class:`.Compiled` objects now
 compile 
 
 

Re: [sqlalchemy] bug in reflection.py sqla 0.7.3

2011-12-01 Thread Michael Bayer
I'm not able to reproduce that, and also this code should likely be replaced by 
our existing topological sort code.  Can you provide a succinct reproducing 
example ?


On Dec 1, 2011, at 2:22 AM, Robert Forkel wrote:

 Hi,
 trying to use Inspector.get_table_names with order_by='foreign_key'
 causes the following exception:
 
 Traceback (most recent call last):
  File db_inspector.py, line 20, in module
for table in insp.get_table_names(schema=schema,
 order_by='foreign_key'):
  File lib/python2.6/site-packages/sqlalchemy/engine/reflection.py,
 line 173, in get_table_names
ordered_tnames.index(ref_pos, tname)
 TypeError: slice indices must be integers or None or have an __index__
 method
 
 which can be remdied by the following patch:
 
 (gulpenv)$ diff -crB lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py.orig*** lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py 2011-12-01 08:15:01.600838080 +0100
 --- lib/python2.6/site-packages/sqlalchemy/engine/reflection.py.orig
 2011-12-01 08:14:40.980828074 +0100
 ***
 *** 169,175 
  if table_pos  ref_pos:
  ordered_tnames.pop(table_pos) # rtable
 moves up 1
  # insert just below rtable
 ! ordered_tnames.insert(ref_pos, tname)
  tnames = ordered_tnames
  return tnames
 
 --- 169,175 
  if table_pos  ref_pos:
  ordered_tnames.pop(table_pos) # rtable
 moves up 1
  # insert just below rtable
 ! ordered_tnames.index(ref_pos, tname)
  tnames = ordered_tnames
  return tnames
 
 best regards
 robert
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in reflection.py sqla 0.7.3

2011-12-01 Thread Michael Bayer
nevermind, I need to randomize that list in order to trigger it


On Dec 1, 2011, at 1:42 PM, Michael Bayer wrote:

 I'm not able to reproduce that, and also this code should likely be replaced 
 by our existing topological sort code.  Can you provide a succinct 
 reproducing example ?
 
 
 On Dec 1, 2011, at 2:22 AM, Robert Forkel wrote:
 
 Hi,
 trying to use Inspector.get_table_names with order_by='foreign_key'
 causes the following exception:
 
 Traceback (most recent call last):
 File db_inspector.py, line 20, in module
   for table in insp.get_table_names(schema=schema,
 order_by='foreign_key'):
 File lib/python2.6/site-packages/sqlalchemy/engine/reflection.py,
 line 173, in get_table_names
   ordered_tnames.index(ref_pos, tname)
 TypeError: slice indices must be integers or None or have an __index__
 method
 
 which can be remdied by the following patch:
 
 (gulpenv)$ diff -crB lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py.orig*** lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py2011-12-01 08:15:01.600838080 +0100
 --- lib/python2.6/site-packages/sqlalchemy/engine/reflection.py.orig
 2011-12-01 08:14:40.980828074 +0100
 ***
 *** 169,175 
 if table_pos  ref_pos:
 ordered_tnames.pop(table_pos) # rtable
 moves up 1
 # insert just below rtable
 ! ordered_tnames.insert(ref_pos, tname)
 tnames = ordered_tnames
 return tnames
 
 --- 169,175 
 if table_pos  ref_pos:
 ordered_tnames.pop(table_pos) # rtable
 moves up 1
 # insert just below rtable
 ! ordered_tnames.index(ref_pos, tname)
 tnames = ordered_tnames
 return tnames
 
 best regards
 robert
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in reflection.py sqla 0.7.3

2011-12-01 Thread Michael Bayer
thats fixed in r2b66b5abf755,  will be in 0.7.4 or you can get the tip off the 
download page.




On Dec 1, 2011, at 1:47 PM, Michael Bayer wrote:

 nevermind, I need to randomize that list in order to trigger it
 
 
 On Dec 1, 2011, at 1:42 PM, Michael Bayer wrote:
 
 I'm not able to reproduce that, and also this code should likely be replaced 
 by our existing topological sort code.  Can you provide a succinct 
 reproducing example ?
 
 
 On Dec 1, 2011, at 2:22 AM, Robert Forkel wrote:
 
 Hi,
 trying to use Inspector.get_table_names with order_by='foreign_key'
 causes the following exception:
 
 Traceback (most recent call last):
 File db_inspector.py, line 20, in module
  for table in insp.get_table_names(schema=schema,
 order_by='foreign_key'):
 File lib/python2.6/site-packages/sqlalchemy/engine/reflection.py,
 line 173, in get_table_names
  ordered_tnames.index(ref_pos, tname)
 TypeError: slice indices must be integers or None or have an __index__
 method
 
 which can be remdied by the following patch:
 
 (gulpenv)$ diff -crB lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py.orig*** lib/python2.6/site-packages/sqlalchemy/engine/
 reflection.py   2011-12-01 08:15:01.600838080 +0100
 --- lib/python2.6/site-packages/sqlalchemy/engine/reflection.py.orig
 2011-12-01 08:14:40.980828074 +0100
 ***
 *** 169,175 
if table_pos  ref_pos:
ordered_tnames.pop(table_pos) # rtable
 moves up 1
# insert just below rtable
 ! ordered_tnames.insert(ref_pos, tname)
tnames = ordered_tnames
return tnames
 
 --- 169,175 
if table_pos  ref_pos:
ordered_tnames.pop(table_pos) # rtable
 moves up 1
# insert just below rtable
 ! ordered_tnames.index(ref_pos, tname)
tnames = ordered_tnames
return tnames
 
 best regards
 robert
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] bug in reflection.py sqla 0.7.3

2011-11-30 Thread Robert Forkel
Hi,
trying to use Inspector.get_table_names with order_by='foreign_key'
causes the following exception:

Traceback (most recent call last):
  File db_inspector.py, line 20, in module
for table in insp.get_table_names(schema=schema,
order_by='foreign_key'):
  File lib/python2.6/site-packages/sqlalchemy/engine/reflection.py,
line 173, in get_table_names
ordered_tnames.index(ref_pos, tname)
TypeError: slice indices must be integers or None or have an __index__
method

which can be remdied by the following patch:

(gulpenv)$ diff -crB lib/python2.6/site-packages/sqlalchemy/engine/
reflection.py lib/python2.6/site-packages/sqlalchemy/engine/
reflection.py.orig*** lib/python2.6/site-packages/sqlalchemy/engine/
reflection.py   2011-12-01 08:15:01.600838080 +0100
--- lib/python2.6/site-packages/sqlalchemy/engine/reflection.py.orig
2011-12-01 08:14:40.980828074 +0100
***
*** 169,175 
  if table_pos  ref_pos:
  ordered_tnames.pop(table_pos) # rtable
moves up 1
  # insert just below rtable
! ordered_tnames.insert(ref_pos, tname)
  tnames = ordered_tnames
  return tnames

--- 169,175 
  if table_pos  ref_pos:
  ordered_tnames.pop(table_pos) # rtable
moves up 1
  # insert just below rtable
! ordered_tnames.index(ref_pos, tname)
  tnames = ordered_tnames
  return tnames

best regards
robert

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Bug? Declarative syntax for double self-referential constructs with association object

2011-08-26 Thread Thomas Jacob
Hello List

I'd like to define a many-to-many relationship using the declarative
syntax between a table to itself, but employing an association
object (since I require the association to have attributes
also).

Here's what I tried:

#=

from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relation, backref


DB_engine = create_engine('sqlite:///:memory:', echo=True)
DB_Base = declarative_base(bind=DB_engine)
DB_Session = sessionmaker()
DB_Session.configure(bind=DB_engine)
DB_session =  DB_Session()



class Object(DB_Base):
__tablename__ = 'object'

object_id = Column(Integer, primary_key=True)
label = Column(Text)


class ObjectAssociation(DB_Base):
__tablename__ = 'object_association'
__table_args__ = (
PrimaryKeyConstraint('left_object_id', 'right_object_id'),
{}
)

left_object_id = Column(Integer, ForeignKey('object.object_id'))
right_object_id = Column(Integer, ForeignKey('object.object_id'))
label = Column(Text)

left_object = relation(Object,
primaryjoin=(left_object_id==Object.object_id),
backref=backref(relate_down)
)
right_object = relation(Object,
primaryjoin=(right_object_id==Object.object_id),
backref=backref(relate_up)
)


DB_Base.metadata.create_all()

o = Object()
o.label = 'Object1'

o2 = Object()
o2.label = 'Object2'

o2.relate_down.append(o)

DB_session.commit()

#=


But this raises an exception:

#=

2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
table_info(object)
2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
table_info(object_association)
2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine 
CREATE TABLE object (
object_id INTEGER NOT NULL, 
label TEXT, 
PRIMARY KEY (object_id)
)


2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine COMMIT
2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine 
CREATE TABLE object_association (
left_object_id INTEGER, 
right_object_id INTEGER, 
label TEXT, 
PRIMARY KEY (left_object_id, right_object_id), 
FOREIGN KEY(left_object_id) REFERENCES object (object_id), 
FOREIGN KEY(right_object_id) REFERENCES object (object_id)
)


2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
2011-08-26 20:48:28,424 INFO sqlalchemy.engine.base.Engine COMMIT
Traceback (most recent call last):
  File bug.py, line 53, in module
o2.relate_down.append(o)
  File
/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
line 952, in append
item = __set(self, item, _sa_initiator)
  File
/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
line 927, in __set
item = getattr(executor, 'fire_append_event')(item, _sa_initiator)
  File
/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
line 618, in fire_append_event
item, initiator)
  File
/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
line 745, in fire_append_event
value = fn(state, value, initiator or self)
  File
/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
line 943, in emit_backref_from_collection_append_event
child_state.manager[key].impl.append(
KeyError: 'left_object'

#===

Is this a bug? Am I doing something wrong?

Any clue as to what is going on here would be greatly appreciated.

The platform I ran this on was  Ubuntu 10.04's Python 2.6.5 + SQLAlchemy
0.7.2.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug? Declarative syntax for double self-referential constructs with association object

2011-08-26 Thread Thomas Jacob

On Aug 26, 2011, at 9:36 PM, Michael Bayer wrote:

 the error message here isn't fantastic but I think you mean to say:
 
 oa = ObjectAssociation()
 o2.relate_down.append(oa)
 o.relate_up.append(oa)
 
 as relate_up/relate_down accept ObjectAssociation objects, not Object
 objects, as members.

Oops, that sounds plausible. I guess I was tacitly assuming that
SQLalchemy would somehow abstract away from the association
object and let me create relations between the Objects directly (but then how 
would I define
the extra attributes of the association?).

Thanks!
 
 
 
 
 On Aug 26, 2:53 pm, Thomas Jacob ja...@internet24.de wrote:
 Hello List
 
 I'd like to define a many-to-many relationship using the declarative
 syntax between a table to itself, but employing an association
 object (since I require the association to have attributes
 also).
 
 Here's what I tried:
 
 #=
 
 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker, relation, backref
 
 DB_engine = create_engine('sqlite:///:memory:', echo=True)
 DB_Base = declarative_base(bind=DB_engine)
 DB_Session = sessionmaker()
 DB_Session.configure(bind=DB_engine)
 DB_session =  DB_Session()
 
 class Object(DB_Base):
 __tablename__ = 'object'
 
 object_id = Column(Integer, primary_key=True)
 label = Column(Text)
 
 class ObjectAssociation(DB_Base):
 __tablename__ = 'object_association'
 __table_args__ = (
 PrimaryKeyConstraint('left_object_id', 'right_object_id'),
 {}
 )
 
 left_object_id = Column(Integer, ForeignKey('object.object_id'))
 right_object_id = Column(Integer, ForeignKey('object.object_id'))
 label = Column(Text)
 
 left_object = relation(Object,
 primaryjoin=(left_object_id==Object.object_id),
 backref=backref(relate_down)
 )
 right_object = relation(Object,
 primaryjoin=(right_object_id==Object.object_id),
 backref=backref(relate_up)
 )
 
 DB_Base.metadata.create_all()
 
 o = Object()
 o.label = 'Object1'
 
 o2 = Object()
 o2.label = 'Object2'
 
 o2.relate_down.append(o)
 
 DB_session.commit()
 
 #=
 
 But this raises an exception:
 
 #=
 
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
 table_info(object)
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
 table_info(object_association)
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine
 CREATE TABLE object (
 object_id INTEGER NOT NULL,
 label TEXT,
 PRIMARY KEY (object_id)
 )
 
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine COMMIT
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine
 CREATE TABLE object_association (
 left_object_id INTEGER,
 right_object_id INTEGER,
 label TEXT,
 PRIMARY KEY (left_object_id, right_object_id),
 FOREIGN KEY(left_object_id) REFERENCES object (object_id),
 FOREIGN KEY(right_object_id) REFERENCES object (object_id)
 )
 
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,424 INFO sqlalchemy.engine.base.Engine COMMIT
 Traceback (most recent call last):
   File bug.py, line 53, in module
 o2.relate_down.append(o)
   File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 952, in append
 item = __set(self, item, _sa_initiator)
   File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 927, in __set
 item = getattr(executor, 'fire_append_event')(item, _sa_initiator)
   File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 618, in fire_append_event
 item, initiator)
   File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
 line 745, in fire_append_event
 value = fn(state, value, initiator or self)
   File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
 line 943, in emit_backref_from_collection_append_event
 child_state.manager[key].impl.append(
 KeyError: 'left_object'
 
 #===
 
 Is this a bug? Am I doing something wrong?
 
 Any clue as to what is going on here would be greatly appreciated.
 
 The platform I ran this on was  Ubuntu 10.04's Python 2.6.5 + SQLAlchemy
 0.7.2.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more 

Re: [sqlalchemy] Bug? Declarative syntax for double self-referential constructs with association object

2011-08-26 Thread Michael Bayer

On Aug 26, 2011, at 4:06 PM, Thomas Jacob wrote:

 
 On Aug 26, 2011, at 9:36 PM, Michael Bayer wrote:
 
 the error message here isn't fantastic but I think you mean to say:
 
 oa = ObjectAssociation()
 o2.relate_down.append(oa)
 o.relate_up.append(oa)
 
 as relate_up/relate_down accept ObjectAssociation objects, not Object
 objects, as members.
 
 Oops, that sounds plausible. I guess I was tacitly assuming that
 SQLalchemy would somehow abstract away from the association
 object and let me create relations between the Objects directly (but then how 
 would I define
 the extra attributes of the association?).

So the next step with association objects is to use a proxy that will in most 
cases hide the middle object. I use these to a huge degree and you can 
really do some amazing stuff with them.  You can read up on that at 
http://www.sqlalchemy.org/docs/orm/extensions/associationproxy.html .




 
Thanks!
 
 
 
 
 On Aug 26, 2:53 pm, Thomas Jacob ja...@internet24.de wrote:
 Hello List
 
 I'd like to define a many-to-many relationship using the declarative
 syntax between a table to itself, but employing an association
 object (since I require the association to have attributes
 also).
 
 Here's what I tried:
 
 #=
 
 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker, relation, backref
 
 DB_engine = create_engine('sqlite:///:memory:', echo=True)
 DB_Base = declarative_base(bind=DB_engine)
 DB_Session = sessionmaker()
 DB_Session.configure(bind=DB_engine)
 DB_session =  DB_Session()
 
 class Object(DB_Base):
__tablename__ = 'object'
 
object_id = Column(Integer, primary_key=True)
label = Column(Text)
 
 class ObjectAssociation(DB_Base):
__tablename__ = 'object_association'
__table_args__ = (
PrimaryKeyConstraint('left_object_id', 'right_object_id'),
{}
)
 
left_object_id = Column(Integer, ForeignKey('object.object_id'))
right_object_id = Column(Integer, ForeignKey('object.object_id'))
label = Column(Text)
 
left_object = relation(Object,
primaryjoin=(left_object_id==Object.object_id),
backref=backref(relate_down)
)
right_object = relation(Object,
primaryjoin=(right_object_id==Object.object_id),
backref=backref(relate_up)
)
 
 DB_Base.metadata.create_all()
 
 o = Object()
 o.label = 'Object1'
 
 o2 = Object()
 o2.label = 'Object2'
 
 o2.relate_down.append(o)
 
 DB_session.commit()
 
 #=
 
 But this raises an exception:
 
 #=
 
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
 table_info(object)
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine PRAGMA
 table_info(object_association)
 2011-08-26 20:48:28,422 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine
 CREATE TABLE object (
object_id INTEGER NOT NULL,
label TEXT,
PRIMARY KEY (object_id)
 )
 
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine COMMIT
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine
 CREATE TABLE object_association (
left_object_id INTEGER,
right_object_id INTEGER,
label TEXT,
PRIMARY KEY (left_object_id, right_object_id),
FOREIGN KEY(left_object_id) REFERENCES object (object_id),
FOREIGN KEY(right_object_id) REFERENCES object (object_id)
 )
 
 2011-08-26 20:48:28,423 INFO sqlalchemy.engine.base.Engine ()
 2011-08-26 20:48:28,424 INFO sqlalchemy.engine.base.Engine COMMIT
 Traceback (most recent call last):
  File bug.py, line 53, in module
o2.relate_down.append(o)
  File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 952, in append
item = __set(self, item, _sa_initiator)
  File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 927, in __set
item = getattr(executor, 'fire_append_event')(item, _sa_initiator)
  File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/collections.py,
 line 618, in fire_append_event
item, initiator)
  File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
 line 745, in fire_append_event
value = fn(state, value, initiator or self)
  File
 /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
 line 943, in emit_backref_from_collection_append_event
child_state.manager[key].impl.append(
 KeyError: 'left_object'
 
 #===
 
 Is this a bug? Am I doing something wrong?
 
 Any clue as to what is going on here would be greatly appreciated.
 
 The platform I ran this on was  

[sqlalchemy] bug: distinct ON using subquery with un-named alias fails.

2011-04-21 Thread Jon Nelson
I have a test case. If this formats badly, I'll attach it as a file.
I'm not sure if this is a distinct on problem or not (probably not,
but here is a test case anyway).

If I don't specify a name for the alias, the compile fails. :-(


diff -r 070e47edcfad test/dialect/test_postgresql.py
--- a/test/dialect/test_postgresql.py   Fri Apr 15 00:43:01 2011 -0400
+++ b/test/dialect/test_postgresql.py   Thu Apr 21 12:27:42 2011 -0500
@@ -1214,6 +1214,32 @@
 'SELECT DISTINCT ON (mytable.id, mytable.a) mytable.id, '
 'mytable.a \nFROM mytable')

+def test_distinct_on_subquery(self):
+t1 = Table('mytable1', MetaData(testing.db), Column('id',
+  Integer, primary_key=True), Column('a', String(8)))
+t2 = Table('mytable2', MetaData(testing.db), Column('id',
+  Integer, primary_key=True), Column('a', String(8)))
+
+sq = select([t1]).alias()
+q = select([t2.c.id,sq.c.id], distinct=sq.c.id).where(t2.c.id==sq.c.id)
+self.assert_compile(
+q,
+SELECT DISTINCT ON (anon_1.id) mytable2.id, anon_1.id 
+FROM mytable2, (SELECT mytable1.id AS id, mytable1.a AS a 
+FROM mytable1) AS anon_1 
+WHERE mytable2.id = anon_1.id
+)
+
+sq = select([t1]).alias('sq')
+q = select([t2.c.id,sq.c.id], distinct=sq.c.id).where(t2.c.id==sq.c.id)
+self.assert_compile(
+q,
+SELECT DISTINCT ON (sq.id) mytable2.id, sq.id 
+FROM mytable2, (SELECT mytable1.id AS id, mytable1.a AS a 
+FROM mytable1) AS sq 
+WHERE mytable2.id = sq.id
+)
+
 def test_schema_reflection(self):
 note: this test requires that the 'test_schema' schema be
 separate and accessible by the test user


-- 
Jon

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug: distinct ON using subquery with un-named alias fails.

2011-04-21 Thread Michael Bayer
yeah lets start using trac for these, these are bugs with tests.

this is http://www.sqlalchemy.org/trac/ticket/2142 and your previous one is 2141



On Apr 21, 2011, at 1:31 PM, Jon Nelson wrote:

 diff -r 070e47edcfad test/dialect/test_postgresql.py
 --- a/test/dialect/test_postgresql.py   Fri Apr 15 00:43:01 2011 -0400
 +++ b/test/dialect/test_postgresql.py   Thu Apr 21 12:27:42 2011 -0500
 @@ -1214,6 +1214,32 @@
 'SELECT DISTINCT ON (mytable.id, mytable.a) mytable.id, '
 'mytable.a \nFROM mytable')
 
 +def test_distinct_on_subquery(self):
 +t1 = Table('mytable1', MetaData(testing.db), Column('id',
 +  Integer, primary_key=True), Column('a', String(8)))
 +t2 = Table('mytable2', MetaData(testing.db), Column('id',
 +  Integer, primary_key=True), Column('a', String(8)))
 +
 +sq = select([t1]).alias()
 +q = select([t2.c.id,sq.c.id], 
 distinct=sq.c.id).where(t2.c.id==sq.c.id)
 +self.assert_compile(
 +q,
 +SELECT DISTINCT ON (anon_1.id) mytable2.id, anon_1.id 
 +FROM mytable2, (SELECT mytable1.id AS id, mytable1.a AS a 
 +FROM mytable1) AS anon_1 
 +WHERE mytable2.id = anon_1.id
 +)
 +
 +sq = select([t1]).alias('sq')
 +q = select([t2.c.id,sq.c.id], 
 distinct=sq.c.id).where(t2.c.id==sq.c.id)
 +self.assert_compile(
 +q,
 +SELECT DISTINCT ON (sq.id) mytable2.id, sq.id 
 +FROM mytable2, (SELECT mytable1.id AS id, mytable1.a AS a 
 +FROM mytable1) AS sq 
 +WHERE mytable2.id = sq.id
 +)
 +
 def test_schema_reflection(self):
 note: this test requires that the 'test_schema' schema be
 separate and accessible by the test user

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Bug in mssql dialect?

2011-03-04 Thread Michael Naber
You are right. Thanks once again.

On Mar 3, 2011, at 11:11, Michael Bayer mike...@zzzcomputing.com wrote:

 That looks certainly like a misconfigured relationship().   Not sure why 
 SQLite would let it pass through (sqlite is in general extremely liberal), 
 but that's clearly a literal string passed to a join expression sent to 
 relationship as in relationship(... primaryjoin=and_(some expression, 
 Edge._discriminator=='use')) or something like that. When using strings 
 for relationship(), the full expression must be a string, not the components.
 
 
 
 On Mar 3, 2011, at 10:55 AM, Michael Naber wrote:
 
 Last line is invalid t-sql. The ‘==’ should just be ‘=’. I get the problem 
 when using mssql but not with sqlite. Saw the bug on sqa version 0.6.6, 
 upgraded to 0.7b2 and still having issue. Please let me know if you need 
 more info and I'll be happy to provide.
  
 Thanks,
 Michael
  
  
 Traceback (most recent call last):
   File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev 
 1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line 
 1133, in module
 debugger.run(setup['file'], None, None)
   File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev 
 1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line 
 918, in run
 execfile(file, globals, locals) #execute the script
   File C:\OpsPylonDev\TransformationBA-Trunk\pydev-setup-app.py, line 5, 
 in module
 SetupCommand('setup-app').run(['development.ini'])
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 68, in run
 return super(AbstractInstallCommand, self).run(new_args)
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\command.py,
  line 218, in run
 result = self.command()
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 456, in command
 self, config_file, section, self.sysconfig_install_vars(installer))
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 598, in setup_config
 mod.setup_app, command, filename, section, vars)
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 612, in _call_setup_app
 func(command, conf, vars)
   File C:\OpsPylonDev\TransformationBA-Trunk\transformationba\websetup.py, 
 line 34, in setup_app
 insert_data()
   File 
 C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\__init__.py,
  line 42, in insert_data
 inserter.insert()
   File transformationba\model\data\inserters\400technology.py, line 23, in 
 insert
 csharp_app.create_artifacts()
   File 
 C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\technology_data\csharp_app.py,
  line 65, in create_artifacts
 parent.uses.append(component)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 189, in __get__
 proxy = self._new(_lazy_collection(obj, self.target_collection))
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 233, in _new
 self.collection_class = util.duck_type_collection(lazy_collection())
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 335, in __call__
 return getattr(obj, self.target)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
  line 162, in __get__
 return self.impl.get(instance_state(instance),dict_)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
  line 414, in get
 value = self.callable_(state, passive)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\strategies.py,
  line 542, in _load_for_state
 result = q.all()
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1636, in all
 return list(self)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1746, in __iter__
 return self._execute_and_instances(context)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1752, in _execute_and_instances
 close_with_result=True).execute(querycontext.statement, self._params)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1259, in execute
 params)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1392, in _execute_clauseelement
 compiled_sql, distilled_params
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1500, in _execute_context
 context)
   File 
 

[sqlalchemy] Bug in mssql dialect?

2011-03-03 Thread Michael Naber
Last line is invalid t-sql. The ‘==’ should just be ‘=’. I get the problem
when using mssql but not with sqlite. Saw the bug on sqa version 0.6.6,
upgraded to 0.7b2 and still having issue. Please let me know if you need
more info and I'll be happy to provide.

Thanks,
Michael


Traceback (most recent call last):
  File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev
1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line
1133, in module
debugger.run(setup['file'], None, None)
  File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev
1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line
918, in run
execfile(file, globals, locals) #execute the script
  File C:\OpsPylonDev\TransformationBA-Trunk\pydev-setup-app.py, line 5,
in module
SetupCommand('setup-app').run(['development.ini'])
  File
C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
line 68, in run
return super(AbstractInstallCommand, self).run(new_args)
  File
C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\command.py,
line 218, in run
result = self.command()
  File
C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
line 456, in command
self, config_file, section, self.sysconfig_install_vars(installer))
  File
C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
line 598, in setup_config
mod.setup_app, command, filename, section, vars)
  File
C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
line 612, in _call_setup_app
func(command, conf, vars)
  File C:\OpsPylonDev\TransformationBA-Trunk\transformationba\websetup.py,
line 34, in setup_app
insert_data()
  File
C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\__init__.py,
line 42, in insert_data
inserter.insert()
  File transformationba\model\data\inserters\400technology.py, line 23, in
insert
csharp_app.create_artifacts()
  File
C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\technology_data\csharp_app.py,
line 65, in create_artifacts
parent.uses.append(component)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
line 189, in __get__
proxy = self._new(_lazy_collection(obj, self.target_collection))
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
line 233, in _new
self.collection_class = util.duck_type_collection(lazy_collection())
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
line 335, in __call__
return getattr(obj, self.target)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
line 162, in __get__
return self.impl.get(instance_state(instance),dict_)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
line 414, in get
value = self.callable_(state, passive)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\strategies.py,
line 542, in _load_for_state
result = q.all()
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
line 1636, in all
return list(self)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
line 1746, in __iter__
return self._execute_and_instances(context)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
line 1752, in _execute_and_instances
close_with_result=True).execute(querycontext.statement, self._params)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
line 1259, in execute
params)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
line 1392, in _execute_clauseelement
compiled_sql, distilled_params
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
line 1500, in _execute_context
context)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
line 1493, in _execute_context
context)
  File
C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\default.py,
line 325, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', [42000]
[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near '='.
(102) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL
Server]Statement(s) could not be prepared. (8180)) uSELECT edge.id AS
edge_id, edge.tail_node_id AS edge_tail_node_id, edge.head_node_id AS
edge_head_node_id, edge._discriminator AS edge__discriminator \nFROM edge
\nWHERE edge.tail_node_id = ? AND Edge._discriminator=='use' (69,)

-- 
You received this message because you are subscribed to the 

Re: [sqlalchemy] Bug in mssql dialect?

2011-03-03 Thread Michael Bayer
That looks certainly like a misconfigured relationship().   Not sure why SQLite 
would let it pass through (sqlite is in general extremely liberal), but that's 
clearly a literal string passed to a join expression sent to relationship as in 
relationship(... primaryjoin=and_(some expression, 
Edge._discriminator=='use')) or something like that. When using strings 
for relationship(), the full expression must be a string, not the components.



On Mar 3, 2011, at 10:55 AM, Michael Naber wrote:

 Last line is invalid t-sql. The ‘==’ should just be ‘=’. I get the problem 
 when using mssql but not with sqlite. Saw the bug on sqa version 0.6.6, 
 upgraded to 0.7b2 and still having issue. Please let me know if you need more 
 info and I'll be happy to provide.
  
 Thanks,
 Michael
  
  
 Traceback (most recent call last):
   File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev 
 1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line 
 1133, in module
 debugger.run(setup['file'], None, None)
   File C:\Program Files\eclipse-SDK-3.6.2-win32\eclipse\dropins\Pydev 
 1.6.5\plugins\org.python.pydev.debug_1.6.5.2011020317\pysrc\pydevd.py, line 
 918, in run
 execfile(file, globals, locals) #execute the script
   File C:\OpsPylonDev\TransformationBA-Trunk\pydev-setup-app.py, line 5, in 
 module
 SetupCommand('setup-app').run(['development.ini'])
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 68, in run
 return super(AbstractInstallCommand, self).run(new_args)
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\command.py,
  line 218, in run
 result = self.command()
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 456, in command
 self, config_file, section, self.sysconfig_install_vars(installer))
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 598, in setup_config
 mod.setup_app, command, filename, section, vars)
   File 
 C:\Python27\lib\site-packages\pastescript-1.7.3-py2.7.egg\paste\script\appinstall.py,
  line 612, in _call_setup_app
 func(command, conf, vars)
   File C:\OpsPylonDev\TransformationBA-Trunk\transformationba\websetup.py, 
 line 34, in setup_app
 insert_data()
   File 
 C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\__init__.py,
  line 42, in insert_data
 inserter.insert()
   File transformationba\model\data\inserters\400technology.py, line 23, in 
 insert
 csharp_app.create_artifacts()
   File 
 C:\OpsPylonDev\TransformationBA-Trunk\transformationba\model\data\technology_data\csharp_app.py,
  line 65, in create_artifacts
 parent.uses.append(component)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 189, in __get__
 proxy = self._new(_lazy_collection(obj, self.target_collection))
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 233, in _new
 self.collection_class = util.duck_type_collection(lazy_collection())
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\ext\associationproxy.py,
  line 335, in __call__
 return getattr(obj, self.target)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
  line 162, in __get__
 return self.impl.get(instance_state(instance),dict_)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\attributes.py,
  line 414, in get
 value = self.callable_(state, passive)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\strategies.py,
  line 542, in _load_for_state
 result = q.all()
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1636, in all
 return list(self)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1746, in __iter__
 return self._execute_and_instances(context)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\orm\query.py,
  line 1752, in _execute_and_instances
 close_with_result=True).execute(querycontext.statement, self._params)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1259, in execute
 params)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1392, in _execute_clauseelement
 compiled_sql, distilled_params
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1500, in _execute_context
 context)
   File 
 C:\Python27\lib\site-packages\sqlalchemy-0.7b2-py2.7.egg\sqlalchemy\engine\base.py,
  line 1493, in _execute_context
 context)
   File 
 

Re: [sqlalchemy] bug with declarative's _decl_class_registry

2010-10-06 Thread Chris Withers

On 05/10/2010 18:03, Michael Bayer wrote:



Also, is there a 'trunk' or 'tip' anywhere now to put 0.7-targeted code?


there's not.  You'd make a bitbucket repo and link to it on the trac ticket.


Done:

http://www.sqlalchemy.org/trac/changeset/6868:d8580d6765d4

http://www.sqlalchemy.org/trac/ticket/1937

http://www.sqlalchemy.org/trac/ticket/1938

Chris

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] bug with declarative's _decl_class_registry

2010-10-05 Thread Chris Withers

Hi All,

Start off with a base.py module:

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

Now, say we have a module, a.py:

from sqlalchemy import *
from base import Base

class Something(Base):
__tablename__ = 'foo'
id = Column('id', Integer, primary_key=True)

...and another module, b.py:

from sqlalchemy import *
from base import Base

class Something(Base):
__tablename__ = 'bar'
id = Column('id', Integer, primary_key=True)

...and finally, a third module, c.py:

from sqlalchemy import *
from sqlalchemy.orm import relationship
from base import Base

import a,b

class AnotherThing(Base):
__tablename__ = 'baz'
id = Column('id', Integer, primary_key=True)
addresses = relationship(Something)

...what table will that relationship be to?

 import c
 c.AnotherThing._decl_class_registry['Something']
class 'b.Something'

I think an exception should be raised if a class name already exists in 
_decl_class_registry when the assignment is made in _as_declarative.
Are there any cases where it would be legit to have one class override 
another in _decl_class_registry in this way?

If the answer is no, I'll commit a test and patch asap...

cheers,

Chris

PS: Also, in the above, how come no foreign keys are created?

 c.AnotherThing.__table__.foreign_keys
OrderedSet([])
 import b
 b.Something.__table__.foreign_keys
OrderedSet([])
 import a
 a.Something.__table__.foreign_keys
OrderedSet([])

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug with declarative's _decl_class_registry

2010-10-05 Thread Michael Bayer

On Oct 5, 2010, at 11:04 AM, Chris Withers wrote:

 Hi All,
 
 Start off with a base.py module:
 
 from sqlalchemy.ext.declarative import declarative_base
 Base = declarative_base()
 
 Now, say we have a module, a.py:
 
 from sqlalchemy import *
 from base import Base
 
 class Something(Base):
__tablename__ = 'foo'
id = Column('id', Integer, primary_key=True)
 
 ...and another module, b.py:
 
 from sqlalchemy import *
 from base import Base
 
 class Something(Base):
__tablename__ = 'bar'
id = Column('id', Integer, primary_key=True)
 
 ...and finally, a third module, c.py:
 
 from sqlalchemy import *
 from sqlalchemy.orm import relationship
 from base import Base
 
 import a,b
 
 class AnotherThing(Base):
__tablename__ = 'baz'
id = Column('id', Integer, primary_key=True)
addresses = relationship(Something)
 
 ...what table will that relationship be to?
 
  import c
  c.AnotherThing._decl_class_registry['Something']
 class 'b.Something'
 
 I think an exception should be raised if a class name already exists in 
 _decl_class_registry when the assignment is made in _as_declarative.

yeah definitely, though in 0.6 it needs to be a warning to start since some 
folks might be doing this semi-intentionally.


 Are there any cases where it would be legit to have one class override 
 another in _decl_class_registry in this way?
 If the answer is no, I'll commit a test and patch asap...
 
 cheers,
 
 Chris
 
 PS: Also, in the above, how come no foreign keys are created?

theres no usage of ForeignKey() or ForeignKeyConstraint().


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug with declarative's _decl_class_registry

2010-10-05 Thread Chris Withers

On 05/10/2010 16:10, Michael Bayer wrote:

I think an exception should be raised if a class name already exists in 
_decl_class_registry when the assignment is made in _as_declarative.


yeah definitely, though in 0.6 it needs to be a warning to start since some 
folks might be doing this semi-intentionally.


What could the intention possibly be?

Also, is there a 'trunk' or 'tip' anywhere now to put 0.7-targeted code?

Chris

--
Simplistix - Content Management, Batch Processing  Python Consulting
   - http://www.simplistix.co.uk

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug with declarative's _decl_class_registry

2010-10-05 Thread Michael Bayer

On Oct 5, 2010, at 12:37 PM, Chris Withers wrote:

 On 05/10/2010 16:10, Michael Bayer wrote:
 I think an exception should be raised if a class name already exists in 
 _decl_class_registry when the assignment is made in _as_declarative.
 
 yeah definitely, though in 0.6 it needs to be a warning to start since some 
 folks might be doing this semi-intentionally.
 
 What could the intention possibly be?

they named two classes the same thing, they happened to have configured 
relationship() using the class object and not the string registry, and their 
app works.  If 0.6 turns that into an exception they get sudden application 
failure upgrading on a minor point release.

 
 Also, is there a 'trunk' or 'tip' anywhere now to put 0.7-targeted code?

there's not.  You'd make a bitbucket repo and link to it on the trac ticket.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-04-20 Thread Michael Bayer
Chris Withers wrote:
 Michael Bayer wrote:
 Chris Withers wrote:
 Michael Bayer wrote:
 Has anyone (hi, list, talking to you too!) already done a custom type
 for this specific problem?
 people do custom types for all sorts of things.  In the case of the
 Decimal here I'd likely subclass sqlalchemy.databases.sqlite.SLNumeric
 which should ensure that your own bind_processor() and
 result_processor()
 methods can be called.
 Okay, but how do I make sure this is only used when sqlite this the
 engine?

 You can use a TypeDecorator to switch between implementations.  I had a
 plan to add a pre-fab type for this to core that allows easy switching
 of
 implementations.

 Okay, this is what I've ended up with for this one:

 from decimal import Decimal
 from sqlalchemy import types
 from sqlalchemy.databases.sqlite import SQLiteDialect

 class Numeric(types.TypeDecorator):
  A numeric type that respects precision with SQLite

  - always returns Decimals
  - always rounds as necessary as a result
  

  impl = types.Numeric

  def bind_processor(self, dialect):
  if isinstance(dialect,SQLiteDialect):
  def process(value):
  if value is None:
  return None
  else:
  return float(value)
  return process
  else:
  return super(Numeric,self).bind_processor(dialect)

  def result_processor(self, dialect):
  if isinstance(dialect,SQLiteDialect):
  fstring = %%.%df % self.impl.scale
  def process(value):
  if value is None:
  return None
  else:
  return Decimal(fstring % value)
  return process
  else:
  return super(Numeric,self).result_processor(dialect)

 What should I have done differently and why?

 cheers,

 Chris

 PS: While looking into how to do this for 0.5.8, I saw this dire warning
 in 0.6:

 util.warn(Dialect %s+%s does *not* support Decimal objects natively, 
and SQLAlchemy must convert from floating point - 
rounding errors and other issues may occur. 
Please consider storing Decimal numbers as strings or 
integers on this platform for lossless storage. %
  (dialect.name, dialect.driver))

 ...which I also saw applies to SQLite.

 What are the rounding errors and others issues that you allude to?
 What is the integer/string recommendation saying? Use a String column?
 Use an Integer column? Is this something I could work into a
 TypeDecorator? Should I?

the whole ugly discussion is at http://www.sqlalchemy.org/trac/ticket/1759


 cheers,

 Chris

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-04-20 Thread Chris Withers

Michael Bayer wrote:

the whole ugly discussion is at http://www.sqlalchemy.org/trac/ticket/1759


Speshul...

*sigh*

I'm glad I don't have your responsibilities ;-)

Chris

--
Simplistix - Content Management, Batch Processing  Python Consulting
   - http://www.simplistix.co.uk

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-03-05 Thread Chris Withers

Michael Bayer wrote:

Has anyone (hi, list, talking to you too!) already done a custom type
for this specific problem?


people do custom types for all sorts of things.  In the case of the
Decimal here I'd likely subclass sqlalchemy.databases.sqlite.SLNumeric
which should ensure that your own bind_processor() and result_processor()
methods can be called. 


Okay, but how do I make sure this is only used when sqlite this the engine?

Chris

--
Simplistix - Content Management, Batch Processing  Python Consulting
- http://www.simplistix.co.uk

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-03-05 Thread Michael Bayer
Chris Withers wrote:
 Michael Bayer wrote:
 Has anyone (hi, list, talking to you too!) already done a custom type
 for this specific problem?

 people do custom types for all sorts of things.  In the case of the
 Decimal here I'd likely subclass sqlalchemy.databases.sqlite.SLNumeric
 which should ensure that your own bind_processor() and
 result_processor()
 methods can be called.

 Okay, but how do I make sure this is only used when sqlite this the
 engine?

You can use a TypeDecorator to switch between implementations.  I had a
plan to add a pre-fab type for this to core that allows easy switching of
implementations.



 Chris

 --
 Simplistix - Content Management, Batch Processing  Python Consulting
  - http://www.simplistix.co.uk

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-03-04 Thread Chris Withers

Hi Michael,

Thanks for this, I thought I asked this separately but I can't find the 
mail now...


How would you recommend I work this now in 0.5.8 until I can move to 
0.6.0? (which will take some months :-S)


I seem to remember you suggesting a custom type. Where can I find 
examples of those to work against?


Has anyone (hi, list, talking to you too!) already done a custom type 
for this specific problem?


cheers,

Chris

Michael Bayer wrote:

fixed in r6859.  please don't use those crappy pysqlite converters.


On Feb 26, 2010, at 8:50 AM, Chris Withers wrote:


from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.session import Session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import Column
from sqlalchemy.types import String, Numeric, Integer

import unittest
from decimal import Decimal

class Test(unittest.TestCase):

   def test_truncate(self):
   # setup
   engine = create_engine(sqlite://)
   self.Session = sessionmaker(
   bind=engine,
   autoflush=True,
   autocommit=False
   )
   Base = declarative_base(bind=engine)
   class MyModel(Base):
   __tablename__ = 'test'
   id = Column(Integer, primary_key=True)
   value = Column(Numeric(precision=36,scale=12))
   Base.metadata.create_all()
   session = self.Session()

   # precision=36 scale=12 should mean this can handle 12 decimal places
   # and this has 12 decimal places.
   session.add(MyModel(value=152.737826714556))
   session.commit()

   obj = session.query(MyModel).one()

   # this will fail with the output, it shouldn't
   # Decimal(152.737826715) != Decimal(152.737826714556)
   self.assertEqual(obj.value, Decimal(152.737826714556))




--
Simplistix - Content Management, Batch Processing  Python Consulting
- http://www.simplistix.co.uk

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-03-04 Thread Michael Bayer
Chris Withers wrote:
 Hi Michael,

 Thanks for this, I thought I asked this separately but I can't find the
 mail now...

 How would you recommend I work this now in 0.5.8 until I can move to
 0.6.0? (which will take some months :-S)

 I seem to remember you suggesting a custom type. Where can I find
 examples of those to work against?

 Has anyone (hi, list, talking to you too!) already done a custom type
 for this specific problem?

people do custom types for all sorts of things.  In the case of the
Decimal here I'd likely subclass sqlalchemy.databases.sqlite.SLNumeric
which should ensure that your own bind_processor() and result_processor()
methods can be called.  Or as in the doc below you can subclass TypeEngine
directly.

http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/types.html#custom-types



 cheers,

 Chris

 Michael Bayer wrote:
 fixed in r6859.  please don't use those crappy pysqlite converters.


 On Feb 26, 2010, at 8:50 AM, Chris Withers wrote:

 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.orm.session import Session
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import Column
 from sqlalchemy.types import String, Numeric, Integer

 import unittest
 from decimal import Decimal

 class Test(unittest.TestCase):

def test_truncate(self):
# setup
engine = create_engine(sqlite://)
self.Session = sessionmaker(
bind=engine,
autoflush=True,
autocommit=False
)
Base = declarative_base(bind=engine)
class MyModel(Base):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
value = Column(Numeric(precision=36,scale=12))
Base.metadata.create_all()
session = self.Session()

# precision=36 scale=12 should mean this can handle 12 decimal
 places
# and this has 12 decimal places.
session.add(MyModel(value=152.737826714556))
session.commit()

obj = session.query(MyModel).one()

# this will fail with the output, it shouldn't
# Decimal(152.737826715) != Decimal(152.737826714556)
self.assertEqual(obj.value, Decimal(152.737826714556))


 --
 Simplistix - Content Management, Batch Processing  Python Consulting
  - http://www.simplistix.co.uk

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] bug in DROP TABLE CASCADE

2010-02-28 Thread Manlio Perillo
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

When reading the code in the `sql.compiler` module, I noted this:

def visit_drop_table(self, drop):
ret = \nDROP TABLE  + self.preparer.format_table(drop.element)
if drop.cascade:
ret +=  CASCADE CONSTRAINTS
return ret

The problem is that this syntax is not supported by all backends!
I have checked the SQL 2003 standard (a draft), and it seems that
CASCADE CONSTRAINTS is *not* standard.


Example:


from sqlalchemy import schema, types, sql, create_engine


metadata = schema.MetaData()
test = schema.Table(
'test', metadata,
schema.Column('x', types.Integer, primary_key=True)
)


engine = create_engine('sqlite://',  echo=True)

engine.execute(schema.CreateTable(test))
engine.execute(schema.DropTable(test, cascade=True))



This fails on both PostgreSQL and SQLite.


My suggestion:

1) Remove CONSTRAINTS from the DROP statetemt in `sql.compiler`
2) override visit_drop_table method for dialects that don't support
   CASCADE syntax (like sqlite)
3) override visit_drop_table for dialects with extended syntax
   (Oracle and MSSQL?)

   By the way: does Oracle support the SQL standard
   DROP TABLE CASCADE ?



Regards  Manlio
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkuK0EgACgkQscQJ24LbaURzvQCgiYGEMKsiOr4lmXnYBtidq5vp
FL0AnAqsZCnsbidkJcKW1s8EGeWErsnu
=XNiZ
-END PGP SIGNATURE-

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in DROP TABLE CASCADE

2010-02-28 Thread Michael Bayer

On Feb 28, 2010, at 8:09 PM, Manlio Perillo wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Michael Bayer ha scritto:
 On Feb 28, 2010, at 3:21 PM, Manlio Perillo wrote:
 
 When reading the code in the `sql.compiler` module, I noted this:
 
   def visit_drop_table(self, drop):
   ret = \nDROP TABLE  + self.preparer.format_table(drop.element)
   if drop.cascade:
   ret +=  CASCADE CONSTRAINTS
   return ret
 
 The problem is that this syntax is not supported by all backends!
 I have checked the SQL 2003 standard (a draft), and it seems that
 CASCADE CONSTRAINTS is *not* standard.
 
 
 that was proof of concept only.  removed in r6884.
 
 
 Why?
 
 I think it is fine.

it wasnt documented or tested anywhere so if someone wants to implement it 
correctly across backends that would be fine.no point leaving broken code 
in trunk until then.



 
 
 Manlio
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.10 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
 
 iEYEARECAAYFAkuLE70ACgkQscQJ24LbaUSszwCgglzVSnb12atzAwi2UDdtVoPn
 jt4AniOVDjwwBNDmGFFBw+a/mJynX7dY
 =Wjhp
 -END PGP SIGNATURE-
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-02-26 Thread Chris Withers

Michael Bayer wrote:

not sure if this is obvious, its the sqlite3.PARSE_DECLTYPES.  The
SQLite date types don't expect this to be turned on.  That is a
handy feature which I'm not sure was available in such a simple form
when I first wrote against the pysqlite dialect in Python 2.3.


Indeed, the dates are the problem here. As we talked about at PyCon, the 
dates thing is a side effect of the thing my colleagues were trying to 
solve. I've attached a test case which demonstrates the problem.


The test can be made to parse by adding the following:

import sqlite3
sqlite3.register_converter('NUMERIC',Decimal)

...and creating the engine as follows:

engine = create_engine(
sqlite://,
connect_args = {'detect_types':sqlite3.PARSE_DECLTYPES}
)

..but then we have the problem that my original mail was about. Any 
other solutions or explanations on the truncating Decimals front?



A workaround is to use a dummy Date type that returns None for
bind_processor() and result_processor().


Not gonna fly here, there's too many projects and developers this would 
touch :'(



I don't see any accessor on the SQLite connection that could tell us
if this flag is enabled.  We don't want to do an isinstance()
because those are quite expensive.




So what we can do here is utilize 0.6's test the connection trick,
to issue a select current_timestamp() from the SQLite connection,
and if it comes back as datetime we'd assume PARSE_DECLTYPES is on,
or at least some kind of date-based processor has been added.  then
the SQLite date types would consult this flag.  I added #1685 for
this which is tentatively targeted at 0.6.0 just so I dont lose
track of it.


It sounds a bit icky, but I guess if there's no other way?


We might want to look into having 0.6 set a default handler for date
types in any case, would need to ensure its completely compatible
with what we're doing now.


I dunno what this means...


 Also not sure if you're aware, pool_recycle is not advisable with
a :memory: database.  it would zap out your DB.  sqlite also doesnt
require any encoding since it only accepts unicode strings - the
param is unused by SQLalchemy with sqlite.


Yeah, both of these are there 'cos we swap out testing engine between 
MySQL and SQLite, I'll make sure they're only passed when we're really 
using MySQL...


cheers,

Chris

--
Simplistix - Content Management, Batch Processing  Python Consulting
- http://www.simplistix.co.uk
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.session import Session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import Column
from sqlalchemy.types import String, Numeric, Integer
 
import unittest
from decimal import Decimal

class Test(unittest.TestCase):

def test_truncate(self):
# setup
engine = create_engine(sqlite://)
self.Session = sessionmaker(
bind=engine,
autoflush=True,
autocommit=False
)
Base = declarative_base(bind=engine)
class MyModel(Base):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
value = Column(Numeric(precision=36,scale=12))
Base.metadata.create_all()
session = self.Session()

# precision=36 scale=12 should mean this can handle 12 decimal places
# and this has 12 decimal places.
session.add(MyModel(value=152.737826714556))
session.commit()

obj = session.query(MyModel).one()

# this will fail with the output, it shouldn't
# Decimal(152.737826715) != Decimal(152.737826714556)
self.assertEqual(obj.value, Decimal(152.737826714556))
-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-02-26 Thread Michael Bayer

On Feb 26, 2010, at 8:50 AM, Chris Withers wrote:

 Michael Bayer wrote:
 not sure if this is obvious, its the sqlite3.PARSE_DECLTYPES.  The
 SQLite date types don't expect this to be turned on.  That is a
 handy feature which I'm not sure was available in such a simple form
 when I first wrote against the pysqlite dialect in Python 2.3.
 
 Indeed, the dates are the problem here. As we talked about at PyCon, the 
 dates thing is a side effect of the thing my colleagues were trying to solve. 
 I've attached a test case which demonstrates the problem.
 
 The test can be made to parse by adding the following:
 
 import sqlite3
 sqlite3.register_converter('NUMERIC',Decimal)
 
 ...and creating the engine as follows:
 
 engine = create_engine(
sqlite://,
connect_args = {'detect_types':sqlite3.PARSE_DECLTYPES}
)
 
 ..but then we have the problem that my original mail was about. Any other 
 solutions or explanations on the truncating Decimals front?

The short answer is that Pysqlite's functionality is not fine grained enough 
(cant do it just for decimals, it forces itself in for dates) and it is simply 
not compatible with SQLAlchemy's system - it is seriously flawed in that it 
doesn't even provide its typing information in cursor.description so its 
impossible for us to smoothly work around it and detect when it has kicked in 
and when it has not (see the doc below for details).

If you'd like to use it, add the native_datetime flag as described here:  
http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#compatibility-with-sqlite3-native-date-and-datetime-types
 .   

If you'd like to stay on planet earth with us and not try to use Pysqlite's not 
very useful behavior, I still don't have any confirming test of what the issue 
with Decimals is. I'd like a test case that uses no special SQLite flags 
whatsoever.


 Not gonna fly here, there's too many projects and developers this would touch 
 :'(

its just an import.   Python is pretty handy like that.





 
 A workaround is to use a dummy Date type that returns None for
 bind_processor() and result_processor().
 
 
 I don't see any accessor on the SQLite connection that could tell us
 if this flag is enabled.  We don't want to do an isinstance()
 because those are quite expensive.
 
 
 So what we can do here is utilize 0.6's test the connection trick,
 to issue a select current_timestamp() from the SQLite connection,
 and if it comes back as datetime we'd assume PARSE_DECLTYPES is on,
 or at least some kind of date-based processor has been added.  then
 the SQLite date types would consult this flag.  I added #1685 for
 this which is tentatively targeted at 0.6.0 just so I dont lose
 track of it.
 
 It sounds a bit icky, but I guess if there's no other way?
 
 We might want to look into having 0.6 set a default handler for date
 types in any case, would need to ensure its completely compatible
 with what we're doing now.
 
 I dunno what this means...
 
 Also not sure if you're aware, pool_recycle is not advisable with
 a :memory: database.  it would zap out your DB.  sqlite also doesnt
 require any encoding since it only accepts unicode strings - the
 param is unused by SQLalchemy with sqlite.
 
 Yeah, both of these are there 'cos we swap out testing engine between MySQL 
 and SQLite, I'll make sure they're only passed when we're really using 
 MySQL...
 
 cheers,
 
 Chris
 
 -- 
 Simplistix - Content Management, Batch Processing  Python Consulting
- http://www.simplistix.co.uk
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.orm.session import Session
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import Column
 from sqlalchemy.types import String, Numeric, Integer
 
 import unittest
 from decimal import Decimal
 
 class Test(unittest.TestCase):
 
def test_truncate(self):
# setup
engine = create_engine(sqlite://)
self.Session = sessionmaker(
bind=engine,
autoflush=True,
autocommit=False
)
Base = declarative_base(bind=engine)
class MyModel(Base):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
value = Column(Numeric(precision=36,scale=12))
Base.metadata.create_all()
session = self.Session()
 
# precision=36 scale=12 should mean this can handle 12 decimal places
# and this has 12 decimal places.
session.add(MyModel(value=152.737826714556))
session.commit()
 
obj = session.query(MyModel).one()
 
# this will fail with the output, it shouldn't
# Decimal(152.737826715) != Decimal(152.737826714556)
self.assertEqual(obj.value, Decimal(152.737826714556))
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe 

Re: [sqlalchemy] bug in sqllite dialect?

2010-02-26 Thread Michael Bayer

On Feb 26, 2010, at 8:50 AM, Chris Withers wrote:

 Michael Bayer wrote:
 not sure if this is obvious, its the sqlite3.PARSE_DECLTYPES.  The
 SQLite date types don't expect this to be turned on.  That is a
 handy feature which I'm not sure was available in such a simple form
 when I first wrote against the pysqlite dialect in Python 2.3.
 
 Indeed, the dates are the problem here. As we talked about at PyCon, the 
 dates thing is a side effect of the thing my colleagues were trying to solve. 
 I've attached a test case which demonstrates the problem.

ah sorry, didnt see the test case.you don't need SQLite converters for 
this, you need a custom type for now until we fix that issue.  ticket is 
forthcoming.





 
 The test can be made to parse by adding the following:
 
 import sqlite3
 sqlite3.register_converter('NUMERIC',Decimal)
 
 ...and creating the engine as follows:
 
 engine = create_engine(
sqlite://,
connect_args = {'detect_types':sqlite3.PARSE_DECLTYPES}
)
 
 ..but then we have the problem that my original mail was about. Any other 
 solutions or explanations on the truncating Decimals front?
 
 A workaround is to use a dummy Date type that returns None for
 bind_processor() and result_processor().
 
 Not gonna fly here, there's too many projects and developers this would touch 
 :'(
 
 I don't see any accessor on the SQLite connection that could tell us
 if this flag is enabled.  We don't want to do an isinstance()
 because those are quite expensive.
 
 
 So what we can do here is utilize 0.6's test the connection trick,
 to issue a select current_timestamp() from the SQLite connection,
 and if it comes back as datetime we'd assume PARSE_DECLTYPES is on,
 or at least some kind of date-based processor has been added.  then
 the SQLite date types would consult this flag.  I added #1685 for
 this which is tentatively targeted at 0.6.0 just so I dont lose
 track of it.
 
 It sounds a bit icky, but I guess if there's no other way?
 
 We might want to look into having 0.6 set a default handler for date
 types in any case, would need to ensure its completely compatible
 with what we're doing now.
 
 I dunno what this means...
 
 Also not sure if you're aware, pool_recycle is not advisable with
 a :memory: database.  it would zap out your DB.  sqlite also doesnt
 require any encoding since it only accepts unicode strings - the
 param is unused by SQLalchemy with sqlite.
 
 Yeah, both of these are there 'cos we swap out testing engine between MySQL 
 and SQLite, I'll make sure they're only passed when we're really using 
 MySQL...
 
 cheers,
 
 Chris
 
 -- 
 Simplistix - Content Management, Batch Processing  Python Consulting
- http://www.simplistix.co.uk
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.orm.session import Session
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import Column
 from sqlalchemy.types import String, Numeric, Integer
 
 import unittest
 from decimal import Decimal
 
 class Test(unittest.TestCase):
 
def test_truncate(self):
# setup
engine = create_engine(sqlite://)
self.Session = sessionmaker(
bind=engine,
autoflush=True,
autocommit=False
)
Base = declarative_base(bind=engine)
class MyModel(Base):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
value = Column(Numeric(precision=36,scale=12))
Base.metadata.create_all()
session = self.Session()
 
# precision=36 scale=12 should mean this can handle 12 decimal places
# and this has 12 decimal places.
session.add(MyModel(value=152.737826714556))
session.commit()
 
obj = session.query(MyModel).one()
 
# this will fail with the output, it shouldn't
# Decimal(152.737826715) != Decimal(152.737826714556)
self.assertEqual(obj.value, Decimal(152.737826714556))
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] bug in sqllite dialect?

2010-02-26 Thread Michael Bayer
ticket #1717

On Feb 26, 2010, at 10:28 AM, Michael Bayer wrote:

 
 On Feb 26, 2010, at 8:50 AM, Chris Withers wrote:
 
 Michael Bayer wrote:
 not sure if this is obvious, its the sqlite3.PARSE_DECLTYPES.  The
 SQLite date types don't expect this to be turned on.  That is a
 handy feature which I'm not sure was available in such a simple form
 when I first wrote against the pysqlite dialect in Python 2.3.
 
 Indeed, the dates are the problem here. As we talked about at PyCon, the 
 dates thing is a side effect of the thing my colleagues were trying to 
 solve. I've attached a test case which demonstrates the problem.
 
 The test can be made to parse by adding the following:
 
 import sqlite3
 sqlite3.register_converter('NUMERIC',Decimal)
 
 ...and creating the engine as follows:
 
 engine = create_engine(
   sqlite://,
   connect_args = {'detect_types':sqlite3.PARSE_DECLTYPES}
   )
 
 ..but then we have the problem that my original mail was about. Any other 
 solutions or explanations on the truncating Decimals front?
 
 The short answer is that Pysqlite's functionality is not fine grained enough 
 (cant do it just for decimals, it forces itself in for dates) and it is 
 simply not compatible with SQLAlchemy's system - it is seriously flawed in 
 that it doesn't even provide its typing information in cursor.description so 
 its impossible for us to smoothly work around it and detect when it has 
 kicked in and when it has not (see the doc below for details).
 
 If you'd like to use it, add the native_datetime flag as described here:  
 http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#compatibility-with-sqlite3-native-date-and-datetime-types
  .   
 
 If you'd like to stay on planet earth with us and not try to use Pysqlite's 
 not very useful behavior, I still don't have any confirming test of what the 
 issue with Decimals is. I'd like a test case that uses no special SQLite 
 flags whatsoever.
 
 
 Not gonna fly here, there's too many projects and developers this would 
 touch :'(
 
 its just an import.   Python is pretty handy like that.
 
 
 
 
 
 
 A workaround is to use a dummy Date type that returns None for
 bind_processor() and result_processor().
 
 
 I don't see any accessor on the SQLite connection that could tell us
 if this flag is enabled.  We don't want to do an isinstance()
 because those are quite expensive.
 
 
 So what we can do here is utilize 0.6's test the connection trick,
 to issue a select current_timestamp() from the SQLite connection,
 and if it comes back as datetime we'd assume PARSE_DECLTYPES is on,
 or at least some kind of date-based processor has been added.  then
 the SQLite date types would consult this flag.  I added #1685 for
 this which is tentatively targeted at 0.6.0 just so I dont lose
 track of it.
 
 It sounds a bit icky, but I guess if there's no other way?
 
 We might want to look into having 0.6 set a default handler for date
 types in any case, would need to ensure its completely compatible
 with what we're doing now.
 
 I dunno what this means...
 
 Also not sure if you're aware, pool_recycle is not advisable with
 a :memory: database.  it would zap out your DB.  sqlite also doesnt
 require any encoding since it only accepts unicode strings - the
 param is unused by SQLalchemy with sqlite.
 
 Yeah, both of these are there 'cos we swap out testing engine between MySQL 
 and SQLite, I'll make sure they're only passed when we're really using 
 MySQL...
 
 cheers,
 
 Chris
 
 -- 
 Simplistix - Content Management, Batch Processing  Python Consulting
   - http://www.simplistix.co.uk
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.orm.session import Session
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import Column
 from sqlalchemy.types import String, Numeric, Integer
 
 import unittest
 from decimal import Decimal
 
 class Test(unittest.TestCase):
 
   def test_truncate(self):
   # setup
   engine = create_engine(sqlite://)
   self.Session = sessionmaker(
   bind=engine,
   autoflush=True,
   autocommit=False
   )
   Base = declarative_base(bind=engine)
   class MyModel(Base):
   __tablename__ = 'test'
   id = Column(Integer, primary_key=True)
   value = Column(Numeric(precision=36,scale=12))
   Base.metadata.create_all()
   session = self.Session()
 
   # precision=36 scale=12 should mean this can handle 12 decimal places
   # and this has 12 decimal places.
   session.add(MyModel(value=152.737826714556))
   session.commit()
 
   obj = session.query(MyModel).one()
 
   # this will fail with the output, it shouldn't
   # Decimal(152.737826715) != Decimal(152.737826714556)
   self.assertEqual(obj.value, Decimal(152.737826714556))
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this 

  1   2   >