[sqlalchemy] Re: Asymmetry between engine.begin() and connection.begin()

2016-02-23 Thread Jonathan Beluch
Actually, my example isn't fully correct for the case of passing in a 
connection that is not in a transaction, but the question remains.

On Tuesday, February 23, 2016 at 9:05:03 PM UTC-7, Jonathan Beluch wrote:
>
> Is there a better way of doing this? Basically I have a function that 
> takes a connectable (engine or connection) and I want to start a 
> transaction. However I can't just call .begin() because it could return a 
> Transaction or a Connection depending on what was passed in and I need a 
> connection.
>
> @contextlib.contextmanager
> def _transaction(connectable):
>     if hasattr(connectable, 'in_transaction') and connectable.in_transaction():
>         yield connectable
>     else:
>         with connectable.begin() as conn:
>             yield conn
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Asymmetry between engine.begin() and connection.begin()

2016-02-23 Thread Jonathan Beluch
Is there a better way of doing this? Basically, I have a function that takes 
a connectable (engine or connection) and I want to start a transaction. 
However, I can't just call .begin(), because it could return a Transaction or 
a Connection depending on what was passed in, and I need a connection.

@contextlib.contextmanager
def _transaction(connectable):
    if hasattr(connectable, 'in_transaction') and connectable.in_transaction():
        yield connectable
    else:
        with connectable.begin() as conn:
            yield conn



Re: [sqlalchemy] Dependency rule tried to blank-out primary key column when trying to update using association proxy

2016-02-23 Thread Mike Bayer



On 02/23/2016 06:11 PM, Piotr Dobrogost wrote:

On Tuesday, February 23, 2016 at 5:44:45 PM UTC+1, Mike Bayer wrote:



That the old TextValue objects need to be deleted I understand. What I
don't see is what makes SA specifically blank out the primary key column
'text_value.text_id'?


the "blanking out" is what occurs if you do *not* delete the old 
TextValue.  Being de-associated from the parent Text object necessarily 
means setting TextValue.text_id to None, which in this 
mapping happens to also be part of the primary key.   If the TextValue 
is instead to be deleted, then this step does not occur.




Asking differently: I would understand if I got integrity errors while SA
was trying to insert new rows with the same keys without first deleting
the old rows, but I have no idea how SA "came up with the idea" to blank
out the primary key?

In this specific example, marking the TextValue as deleted ends up
giving
you an UPDATE of TextValue, because the ORM converts an
INSERT/DELETE of
the same primary key into a single UPDATE.  But if your dictionary


Is this an implementation detail or official semantics?


this is how the ORM deals with the case of a DELETE of a row, then an 
INSERT of a row with that same identity.  The unit of work currently has 
the limitation that within a single flush, all INSERT statements happen 
before all DELETE statements, so in this case it takes the presence of 
both commands and turns them into an UPDATE.  however, it's still 
considered to be a delete + insert - if you're using the versioning 
extensions, the operation will still show up as these two at the session 
level.



It's very important for me to keep identity of existing TextValue
objects and only update values in their 'value' column.


so this is what your example currently does, minus association proxy and 
the dict mapping:


# values = {'pl': u'org', 'en': u'org'}
session.add(
    Text(_values=[
        TextValue(lang_id='pl', value='org'),
        TextValue(lang_id='en', value='org')]))

text = session.query(Text).one()

# text.values = {'pl': u'modified', 'en': u'modified'}
text._values.clear()
text._values.append(TextValue(lang_id='pl', value='modified'))
text._values.append(TextValue(lang_id='en', value='modified'))


now if it were instead written like this, we wouldn't get redundant 
TextValue objects:


values = {'pl': u'org', 'en': u'org'}
session.add(Text(values=values))
text = session.query(Text).one()
text.values['pl'] = 'modified'
text.values['en'] = 'modified'


it would even keep the same TextValue if you did it like this:

text.values.update({'pl': u'modified', 'en': u'modified'})

so currently, the assignment you're doing treats this as "remove the 
old, add the new".  This is called a "bulk assignment" inside the 
collection mechanics.   The associationproxy has a few hooks to modify 
this but none of them currently provide a direct link to not first do 
the "clear" of the current value, if the incoming value is not the same 
object.  However, we can just override __set__ and make assignment do 
anything we want:


from sqlalchemy.ext.associationproxy import AssociationProxy


class DictUpdateProxy(AssociationProxy):
    def __set__(self, obj, values):
        proxy = self.__get__(obj, None)

        keys_to_remove = set(proxy).difference(values)
        for key in keys_to_remove:
            del proxy[key]
        proxy.update(values)


class Text(Base):
    __tablename__ = 'text'
    id = Column(Integer, primary_key=True)
    _values = relationship(
        'TextValue',
        collection_class=attribute_mapped_collection('lang_id'))

    values = DictUpdateProxy(
        '_values', 'value',
        creator=lambda k, v: TextValue(lang_id=k, value=v))


in that way you can control exactly what __set__() does and it will 
never remove a TextValue object where the same identity is coming in on 
assignment.

(The reason it's important is that I'm using the sqlalchemy-continuum
extension for automatic versioning.) Is there some preferred way to make
sure identity is kept intact and modifications are done through UPDATE
and not DELETE/INSERT?
Also, you say that in this specific example there is no DELETE needed,
yet there's an error. Does that mean SA tries to make sure DELETEs would
be possible if they were needed, even if in this specific example they
are not?


Regards,
Piotr


Re: [sqlalchemy] Custom collection: nested dicts

2016-02-23 Thread Mike Bayer



On 02/23/2016 04:50 PM, Sergey Mozgovoy wrote:

Recently I ran into two cases that required some custom collection
behavior, and it turned out that what I needed did not exist in
SQLAlchemy, so I ended up implementing custom collection classes:

1) Nested mapped collections
(http://stackoverflow.com/questions/23354380/how-do-i-create-a-multi-level-dictionary-backed-by-a-single-table-in-sqlalchemy).
2) A usual (non-nested) mapped collection that groups objects with equal
keys into sets. So it maps objects' attribute to sets of objects (rather
than to single objects, as ordinary "attribute_mapped_collection").

Clearly, the above cases can be regarded as just special cases of a
general collection class with extended capabilities. The same goes for
association_proxy: sometimes we don't want the objects themselves, just
one of their properties or attributes. So this collection class should
be able to act like association_proxy, too.

What I came up with is a kind of ad-hoc and half-baked implementation of
this extended collection-mapping functionality. But I'd like to improve
it, make things more general, and share it with the community.

Is this functionality used often enough to care? Am I missing some
extension or library for SQLAlchemy that does exactly this?

If not, I'd be glad to contribute to SQLAlchemy.



it sounds pretty useful, though a lot of these extension things are 
published as standalone projects, such as sqlalchemy-utils.  that way 
they can be developed and released at a higher velocity until they are 
industrial strength.  it also serves to ensure the maintenance burden is 
on the folks who created it; once it goes into SQLA direct, now *I* have 
to support it directly and that's a larger and larger burden for me.





-
Best regards,
Sergey



Re: [sqlalchemy] alembic del sys.modules[module_id] leading to orm mapper error

2016-02-23 Thread Mike Bayer



On 02/23/2016 03:38 PM, Will Angenent wrote:

Hi,

I think I’ve finally got to the bottom of it.

This triggers the problem:
- Load a module with some model classes using imp.load_source
- Delete the module from sys.modules
- Add a reference to the model classes using _mapper_registry.keys() so
it doesn't get garbage collected


so that would be where your application is making a reference to objects 
in the module file.   SQLAlchemy itself has no such reference.



- Trigger garbage collection with gc.collect()
- Trigger execution of configure_mappers() with a query

This causes the module to be garbage collected, but the classes to
remain in memory.

Perhaps the original problem was that garbage collection was happening
sometime after the list(_mapper_registry) call in
sqlalchemy.orm.mapper.configure_mappers but before the call to the
secondary function. I can demonstrate the problem quite easily with the
code below. The first bit of test.py demonstrates that GC can remove the
module contents without removing classes inside it. The second part
triggers the sqlalchemy mapper error like I had before.


Right, so no bug in SQLA or Alembic is demonstrated here because this 
relies upon an artificial access of a private ORM variable.
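The mechanics here are easy to reproduce without SQLAlchemy: the private registry is a weak mapping, so copying its keys creates exactly the kind of strong reference Will describes. A minimal stdlib sketch (the `registry` name and `WeakValueDictionary` shape are illustrative stand-ins, not SQLAlchemy's actual structure):

```python
import gc
import weakref

# stand-in for a weak registry such as the ORM's private mapper registry
registry = weakref.WeakValueDictionary()


def make_class():
    # define a class inside a function so nothing else holds it alive
    class Model(object):
        pass
    registry['Model'] = Model
    return Model


strong_ref = make_class()
assert 'Model' in registry       # alive while a strong reference exists

strong_ref = None
gc.collect()                     # classes sit in reference cycles; collect them
assert 'Model' not in registry   # entry vanishes once the strong ref is gone
```

Holding `registry.keys()` in a list is what pins the entries alive past module teardown, which is the "artificial access" being pointed out above.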






file: test.py
—
#!/usr/bin/env python

from sqlalchemy import Column, Integer
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.mapper import _mapper_registry
import gc
import imp
import sys


# Adapted from alembic.util.pyfiles, using python 2k versions
def load_module_py(module_id, path):
    with open(path, 'rb') as fp:
        return imp.load_source(module_id, path, fp)

mod = load_module_py('a_module', 'a_module.py')
Test = mod.Test  # To demonstrate __name__ becoming None after GC
mod = None  # So mod can be garbage collected

print 'Before garbage collection:',
Test.print_name()

# Create a copy of the model classes so they don't get garbage collected
non_weak_copy_of_mapper_registry_classes = _mapper_registry.keys()

# Garbage collect the module class
del sys.modules['a_module']
gc.collect()

print 'After garbage collection:',
Test.print_name()

# Do a query to trigger the ORM to process the mappers
Base = declarative_base()


class Table3(Base):
    __tablename__ = 'table3'

    id = Column(Integer, primary_key=True)


engine = create_engine('sqlite:///:memory:', echo=False)
Session = sessionmaker(bind=engine)
Session.configure(bind=engine)
session = Session()
Base.metadata.create_all(session.bind)
session.query(Table3).all()

---
file: a_module.py
from sqlalchemy import Table, Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


a_pivot_table = Table(
    'table1_to_table2', Base.metadata,
    Column('table1_id', ForeignKey('table1.id'), primary_key=True),
    Column('table2_id', ForeignKey('table2.id'), primary_key=True)
)


def get_a_pivot_table():
    print 'get_a_pivot_table __name__:', __name__
    print 'get_a_pivot_table a_pivot_table:', a_pivot_table
    return a_pivot_table


class Table1(Base):
    __tablename__ = 'table1'
    id = Column(Integer, primary_key=True)


class Table2(Base):
    __tablename__ = 'table2'
    id = Column(Integer, primary_key=True)
    table1_stuff = relationship("Table1", secondary=get_a_pivot_table)


class Test(object):
    @staticmethod
    def print_name():
        print '__name__:', __name__

---
The output:

$ ./test.py
Before garbage collection: __name__: a_module
After garbage collection: __name__: None
get_a_pivot_table __name__: None
get_a_pivot_table a_pivot_table: None
Traceback (most recent call last):
   File "./test.py", line 50, in 
 session.query(Table3).all()
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
line 1260, in query
 return self._query_cls(entities, self, **kwargs)
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 110, in __init__
 self._set_entities(entities)
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 120, in _set_entities
 self._set_entity_selectables(self._entities)
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 150, in _set_entity_selectables
 ent.setup_entity(*d[entity])
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 3421, in setup_entity
 self._with_polymorphic = ext_info.with_polymorphic_mappers
   File
"/Users/wangenent/code/sqlalchemy/weak-refs/venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py",
line 754, in __get__
 obj.__dict__[self.__name__] = result = self.fget(obj)
   File

Re: [sqlalchemy] "cache lookup failed" on reflection

2016-02-23 Thread Mike Bayer



On 02/23/2016 11:58 AM, Matt Smith wrote:

Hi sqlalchemy!

When using sqlalchemy (core) 1.0.8 with redshift-sqlalchemy 0.4.1, I
encountered the following exception:

InternalError: (psycopg2.InternalError) cache lookup failed for relation
3262644

This is the query it failed on:

SELECT c.oid
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE (pg_catalog.pg_table_is_visible(c.oid))
AND c.relname = %(table_name)s AND c.relkind in ('r', 'v', 'm', 'f')



this error is being raised by the driver and I'd guess is some 
redshift-specific error, I've certainly never seen anything like that 
from Postgresql.




Our hypothesis is that another process operating on the same Redshift
cluster (which runs a table shuffling process and deletes tables), is
causing our code to error during schema reflection. Does this sound
correct to you, and if so, is there a way for our code to mitigate this
issue?


you'd have to make sure your code doesn't attempt to reflect the 
database while other processes are modifying it.   I wouldn't expect 
that to work on any kind of system; database schema mutation is not 
something that's intended to tolerate concurrency.
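There is no SQLAlchemy switch for this; when reflection has to run against a cluster where tables come and go, the usual mitigation is application-level: retry the reflection when a presumably transient error occurs and treat the result as a snapshot. A hedged stdlib sketch (the helper name and the broad error filter are made up for illustration; in real code you would narrow `retriable` to the driver's transient errors):

```python
import time


def retry_reflection(reflect, attempts=3, delay=1.0, retriable=(Exception,)):
    """Call reflect() and retry when a (presumed transient) error occurs."""
    for attempt in range(attempts):
        try:
            return reflect()
        except retriable:
            if attempt == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(delay)


# toy flaky callable standing in for meta.reflect(bind=engine)
calls = {'n': 0}

def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('cache lookup failed')
    return 'reflected'

assert retry_reflection(flaky, attempts=5, delay=0) == 'reflected'
assert calls['n'] == 3
```

In the original scenario the `reflect` argument would be something like `lambda: meta.reflect(bind=engine)`, with the understanding that the reflected schema may already be stale by the time it returns.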







Thanks, Matt Smith

Here's the relevant portion of the stack trace:

... snip ...
engine, meta = ENGINES.get('redshift'), ENGINES.get_meta('redshift')
  File "ourcode.py", line 38, in get_meta
meta.reflect(bind=self.get(name))
  File "sqlalchemy/sql/schema.py", line 3647, in reflect
Table(name, self, **reflect_opts)
  File "sqlalchemy/sql/schema.py", line 416, in __new__
metadata._remove_table(name, schema)
  File "sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "sqlalchemy/sql/schema.py", line 411, in __new__
table._init(name, metadata, *args, **kw)
  File "sqlalchemy/sql/schema.py", line 484, in _init
self._autoload(metadata, autoload_with, include_columns)
  File "sqlalchemy/sql/schema.py", line 496, in _autoload
self, include_columns, exclude_columns
  File "sqlalchemy/engine/base.py", line 1477, in run_callable
return callable_(self, *args, **kwargs)
  File "sqlalchemy/engine/default.py", line 364, in reflecttable
return insp.reflecttable(table, include_columns, exclude_columns)
  File "sqlalchemy/engine/reflection.py", line 563, in reflecttable
table_name, schema, **table.dialect_kwargs):
  File "sqlalchemy/engine/reflection.py", line 369, in get_columns
**kw)
  File "", line 2, in get_columns
  File "sqlalchemy/engine/reflection.py", line 54, in cache
ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/postgresql/base.py", line 2325, in get_columns
info_cache=kw.get('info_cache'))
  File "", line 2, in get_table_oid
  File "sqlalchemy/engine/reflection.py", line 54, in cache
ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/postgresql/base.py", line 2220, in get_table_oid
c = connection.execute(s, table_name=table_name, schema=schema)
  File "sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
  File "sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File "sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File "sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
  File "sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
exc_info
  File "sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
  File "sqlalchemy/engine/default.py", line 450, in do_execute
cursor.execute(statement, parameters)



Re: [sqlalchemy] NoneType not callable exception

2016-02-23 Thread bill . adams
Thanks for your help on this. We'll stick with 0.9 for the time being and 
keep an eye on the issue.

Thanks,
Bill

On Saturday, February 20, 2016 at 12:28:42 AM UTC-5, Mike Bayer wrote:
>
>
>
> On 02/19/2016 11:52 PM, bill@level12.io  wrote: 
> > I got a test case working. It seems that the limit/offset operations 
> > have an effect. The script fails less than half the time and so far only 
> > fails when I run python with the -R flag 
>
>
> this reproduces and is a critical issue captured in 
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3657/positional-result-column-logic-failing.
>  
>
>   the full nature of the failure is not yet understood. 
>
>
>
> > 
> > 
> > | 
> > import sqlalchemy as sa 
> > import sqlalchemy.orm as saorm 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > engine = 
> > sa.create_engine('mssql+pymssql://
> badams:password@192.168.56.101:1443/testdb', 
> > echo=True) 
> > session = saorm.sessionmaker(bind=engine)() 
> > 
> > Base = declarative_base() 
> > 
> > class Person(Base): 
> >  __tablename__ = 'people' 
> >  id = sa.Column(sa.Integer, primary_key=True) 
> >  name = sa.Column(sa.String) 
> > 
> > Base.metadata.create_all(engine) 
> > 
> > session.query(Person).delete() 
> > 
> > session.add(Person(name='foo')) 
> > session.add(Person(name='bar')) 
> > 
> > session.commit() 
> > 
> > results = session.query( 
> >  Person.name.label('person'), 
> > ).add_entity( 
> >  Person 
> > ).order_by( 
> >  Person.name 
> > ) 
> > 
> > print results.count() 
> > print results.limit(1).offset(1).all() 
> > 
> > 
> > 
> > | 
> > 
> > Thanks, 
> > Bill 
> > 
> > 
> > On Friday, February 19, 2016 at 11:20:44 PM UTC-5, Bill Adams wrote: 
> > 
> > Yes, that is what I was trying to describe. I've been trying to 
> > create a simple test case but have as of yet been unable to 
> > reproduce the problem in a simpler environment. I was hoping someone 
> > had encountered something similar before. I'll keep trying to get 
> > that MCVE "working".. 
> > 
> > Thanks, 
> > Bill 
> > 
> > On Fri, Feb 19, 2016 at 11:13 PM, Mike Bayer 
> > > wrote: 
> > 
> > 
> > 
> > On Fri, Feb 19, 2016 at 9:30 PM,   
> > > wrote: 
> > 
> > 
> > The issue seems to be occurring for queries where we use the 
> > add_entity() method to select a declarative model entity 
> > when a column from the same table is already in the query 
> > constructor and labeled. 
> > 
> > 
> > just to make sure, here is that: 
> > 
> > from sqlalchemy import * 
> > from sqlalchemy.orm import * 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > Base = declarative_base() 
> > 
> > 
> > class A(Base): 
> >  __tablename__ = 'a' 
> >  id = Column(Integer, primary_key=True) 
> >  x = Column(Integer) 
> > 
> > e = create_engine("postgresql://scott:tiger@localhost/test", 
> > echo=True) 
> > Base.metadata.drop_all(e) 
> > Base.metadata.create_all(e) 
> > 
> > 
> > s = Session(e) 
> > 
> > s.add(A(x=5)) 
> > s.commit() 
> > 
> > print s.query(A.x.label("foo")).add_entity(A).all() 
> > 
> > 
> > query output at the end: 
> > 
> > BEGIN (implicit) 
> > 2016-02-19 23:12:33,455 INFO sqlalchemy.engine.base.Engine 
> > SELECT a.x AS foo, a.id  AS a_id, a.x AS a_x 
> > FROM a 
> > 2016-02-19 23:12:33,455 INFO sqlalchemy.engine.base.Engine {} 
> > [(5, <__main__.A object at 0x7f0c2fd94990>)] 
> > 
> > 
> > 
> > 
> > 
> > 
> > -- 
> > 
> > *Bill Adams* 
> > Developer 
> > Direct: 502.276.1006 
> > Office: 812.285.8766 
> > 

[sqlalchemy] Custom collection: nested dicts

2016-02-23 Thread Sergey Mozgovoy
Recently I ran into two cases that required some custom collection 
behavior, and it turned out that what I needed did not exist in SQLAlchemy, 
so I ended up implementing custom collection classes:

1) Nested mapped collections 
(http://stackoverflow.com/questions/23354380/how-do-i-create-a-multi-level-dictionary-backed-by-a-single-table-in-sqlalchemy).
2) A usual (non-nested) mapped collection that groups objects with equal 
keys into sets. So it maps objects' attribute to sets of objects (rather 
than to single objects, as ordinary "attribute_mapped_collection").
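The grouping behavior described in case 2 is easiest to see in plain Python, detached from SQLAlchemy's collection machinery. This sketch only illustrates the desired mapping semantics (names like `group_by_attr` are invented for illustration); a real SQLAlchemy custom collection would additionally need the `sqlalchemy.orm.collections` instrumentation hooks:

```python
from collections import defaultdict


class Obj(object):
    def __init__(self, key, name):
        self.key = key
        self.name = name


def group_by_attr(objects, attr):
    # desired semantics: attribute value -> set of objects sharing it,
    # instead of attribute value -> single object as with
    # attribute_mapped_collection
    grouped = defaultdict(set)
    for o in objects:
        grouped[getattr(o, attr)].add(o)
    return dict(grouped)


objs = [Obj('a', 1), Obj('a', 2), Obj('b', 3)]
g = group_by_attr(objs, 'key')
assert set(g) == {'a', 'b'}
assert len(g['a']) == 2 and len(g['b']) == 1
```

The hard part in SQLAlchemy is not this grouping but keeping the dict-of-sets consistent as the ORM appends and removes members, which is what the custom collection class has to handle.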

Clearly, the above cases can be regarded as just special cases of a general 
collection class with extended capabilities. The same goes for 
association_proxy: sometimes we don't want the objects themselves, just 
one of their properties or attributes. So this collection class should be 
able to act like association_proxy, too.

What I came up with is a kind of ad-hoc and half-baked implementation of 
this extended collection-mapping functionality. But I'd like to improve it, 
make things more general, and share it with the community.

Is this functionality used often enough to care? Am I missing some 
extension or library for SQLAlchemy that does exactly this?

If not, I'd be glad to contribute to SQLAlchemy.

-
Best regards,
Sergey



[sqlalchemy] Re: Relationship between two databases

2016-02-23 Thread Jonathan Vanasco
I'm not sure it's doable either... but I wanted to point out something:

class WeatherStation(MainBase):
class EvaporationData(Base):

They are inheriting from different bases, and `Base` is not associated with 
the same metadata as `MainBase`.



[sqlalchemy] "cache lookup failed" on reflection

2016-02-23 Thread Matt Smith
Hi sqlalchemy!

When using sqlalchemy (core) 1.0.8 with redshift-sqlalchemy 0.4.1, I 
encountered the following exception:

InternalError: (psycopg2.InternalError) cache lookup failed for relation 
3262644

This is the query it failed on:

SELECT c.oid
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE (pg_catalog.pg_table_is_visible(c.oid))
AND c.relname = %(table_name)s AND c.relkind in ('r', 'v', 'm', 'f')

Our hypothesis is that another process operating on the same Redshift 
cluster (which runs a table shuffling process and deletes tables), is 
causing our code to error during schema reflection. Does this sound correct 
to you, and if so, is there a way for our code to mitigate this issue?

Thanks, Matt Smith

Here's the relevant portion of the stack trace:

... snip ...
engine, meta = ENGINES.get('redshift'), ENGINES.get_meta('redshift')
  File "ourcode.py", line 38, in get_meta
meta.reflect(bind=self.get(name))
  File "sqlalchemy/sql/schema.py", line 3647, in reflect
Table(name, self, **reflect_opts)
  File "sqlalchemy/sql/schema.py", line 416, in __new__
metadata._remove_table(name, schema)
  File "sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "sqlalchemy/sql/schema.py", line 411, in __new__
table._init(name, metadata, *args, **kw)
  File "sqlalchemy/sql/schema.py", line 484, in _init
self._autoload(metadata, autoload_with, include_columns)
  File "sqlalchemy/sql/schema.py", line 496, in _autoload
self, include_columns, exclude_columns
  File "sqlalchemy/engine/base.py", line 1477, in run_callable
return callable_(self, *args, **kwargs)
  File "sqlalchemy/engine/default.py", line 364, in reflecttable
return insp.reflecttable(table, include_columns, exclude_columns)
  File "sqlalchemy/engine/reflection.py", line 563, in reflecttable
table_name, schema, **table.dialect_kwargs):
  File "sqlalchemy/engine/reflection.py", line 369, in get_columns
**kw)
  File "", line 2, in get_columns
  File "sqlalchemy/engine/reflection.py", line 54, in cache
ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/postgresql/base.py", line 2325, in get_columns
info_cache=kw.get('info_cache'))
  File "", line 2, in get_table_oid
  File "sqlalchemy/engine/reflection.py", line 54, in cache
ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/postgresql/base.py", line 2220, in get_table_oid
c = connection.execute(s, table_name=table_name, schema=schema)
  File "sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
  File "sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File "sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File "sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
  File "sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
exc_info
  File "sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
  File "sqlalchemy/engine/default.py", line 450, in do_execute
cursor.execute(statement, parameters)



Re: [sqlalchemy] Dependency rule tried to blank-out primary key column when trying to update using association proxy

2016-02-23 Thread Mike Bayer



On 02/23/2016 10:47 AM, Piotr Dobrogost wrote:

Hi!

I'm getting AssertionError: Dependency rule tried to blank-out primary
key column 'text_value.text_id' on instance '' error while trying to update row using association
proxy ('values') like this:

session.add(Text(values={'pl': u'org', 'en': u'org'}))
text = session.query(Text).one()
text.values = {'pl': u'modified', 'en': u'modified'}
session.commit()

Working example is at
https://gist.github.com/piotr-dobrogost/74073cf11006fb68e555
What am I doing wrong?



well the first thing is you definitely don't need 
UniqueConstraint('lang_id', 'text_id') because these columns are already 
in the primary key.


the next issue in this test at least is that you're referring to the 
primary key of a row in Lang but there is no Lang row being created 
here.  SQLite will let you put the invalid primary key into lang_id but 
only because foreign keys aren't enforced by default.


then the reason for the error is that TextValue objects are potentially 
being replaced but the mapping is not emitting a delete for the old 
TextValue, so you'd need to put cascade="all, delete-orphan" on the 
Text._values relationship so that when you assign a new dictionary 
value, the old TextValue objects are deleted.
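A compressed, runnable version of that advice, with model names following the gist (this is a sketch assuming SQLAlchemy 1.4+ and an in-memory SQLite database, with the association proxy and Lang table omitted to isolate the cascade behavior):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Text(Base):
    __tablename__ = 'text'
    id = Column(Integer, primary_key=True)
    # delete-orphan: de-associated TextValues are deleted,
    # not "blanked out" by setting text_id to None
    _values = relationship('TextValue', cascade='all, delete-orphan')


class TextValue(Base):
    __tablename__ = 'text_value'
    text_id = Column(ForeignKey('text.id'), primary_key=True)
    lang_id = Column(String, primary_key=True)
    value = Column(String)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Text(_values=[TextValue(lang_id='pl', value='org')]))
session.commit()

text = session.query(Text).one()
# without the cascade this reassignment raises the "blank-out" AssertionError;
# with it, the same-identity delete + insert is flushed as a single UPDATE
text._values = [TextValue(lang_id='pl', value='modified')]
session.commit()

assert session.query(TextValue).count() == 1
assert session.query(TextValue).one().value == 'modified'
```

The row count staying at one after the reassignment is the INSERT/DELETE-to-UPDATE conversion described above.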


In this specific example, marking the TextValue as deleted ends up giving 
you an UPDATE of TextValue, because the ORM converts an INSERT/DELETE of 
the same primary key into a single UPDATE.  But if your dictionary 
removed some of those keys you'd need a DELETE so the cascade rule 
should be applied here.








Best regards,
Piotr Dobrogost

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.




[sqlalchemy] Dependency rule tried to blank-out primary key column when trying to update using association proxy

2016-02-23 Thread Piotr Dobrogost
Hi!

I'm getting an "AssertionError: Dependency rule tried to blank-out primary key
column 'text_value.text_id' on instance ''" error while trying to update a row
using the association proxy ('values') like this:

session.add(Text(values={'pl': u'org', 'en': u'org'}))
text = session.query(Text).one()
text.values = {'pl': u'modified', 'en': u'modified'}
session.commit()


Working example is at 
https://gist.github.com/piotr-dobrogost/74073cf11006fb68e555
What am I doing wrong?


Best regards,
Piotr Dobrogost



Re: [sqlalchemy] prefix_with for queries that eager load relationships

2016-02-23 Thread Mike Bayer



On 02/23/2016 04:00 AM, Daniel Kraus wrote:

Hi,

I want to use MySQL's `SQL_CALC_FOUND_ROWS`, but when I use
`query.prefix_with(...)` it fails when the query eager loads a relationship,
because SQLAlchemy doesn't put the prefix at the beginning of the statement.

I'm not sure if I should file a bug report or if it's intended behaviour.
If I'm doing something wrong, how can I prefix the query only once, at the
beginning?

Here is a simple script to demonstrate the error:

https://gist.github.com/dakra/0424086f5837d722bc58


the joinedload() case "works", as long as you don't use LIMIT or OFFSET, 
as there's no subquery:


SELECT SQL_CALC_FOUND_ROWS users.id AS users_id, users.name AS 
users_name, addresses_1.id AS addresses_1_id, addresses_1.email_address 
AS addresses_1_email_address, addresses_1.user_id AS addresses_1_user_id
FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = 
addresses_1.user_id


but looking at the docs for the purpose of found_rows, it would only be 
used with a LIMIT.   Therefore it's not really valid to use this 
function with joined eager loading of a collection because it only works 
at the top level of the query and a joined eager load is going to return 
more rows than there are actual entities.   If OTOH you are only 
retrieving a many-to-one via joined eager load, this should all work 
totally fine, and even in the case of LIMIT I don't think a subquery is 
applied for simple many-to-one relationships.


So subqueryload is the only practical option when you need collection 
eager loading plus the found rows feature with limit.   In this case you 
definitely don't want this emitted in the subquery because even if it 
were accepted it would mess up your found_rows().   Longer term solution 
here would be to provide flags to the query.prefix_with() method to 
indicate prefixes that should always move to the outside of the query as 
well as prefixes that should not be passed along to subqueryloaders and 
other transformations.


Here's a found_rows modifier that anticipates a subqueryload and erases any
nested _prefixes:


from sqlalchemy.orm.interfaces import MapperOption
from sqlalchemy.sql import visitors


class FoundRows(MapperOption):
    def process_query(self, query):
        query._prefixes = ("SQL_CALC_FOUND_ROWS",)

    def process_query_conditionally(self, query):
        # when subqueryload calls upon loader options, it is passing
        # the fully constructed query w/ the original query already
        # embedded as a core select() object.  So we will modify the
        # select() after the fact.

        def visit_select(select):
            select._prefixes = ()

        # this can be more hardcoded, but here we're searching throughout
        # all select() objects and erasing their _prefixes
        for from_ in query._from_obj:
            visitors.traverse(
                from_, {}, {"select": visit_select})


users = s.query(User).options(FoundRows(), 
subqueryload(User.addresses)).limit(3).all()

row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)




--- cut ---
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import Session, relationship, subqueryload, joinedload
from sqlalchemy.ext.declarative import declarative_base


Base = declarative_base()

e = create_engine("mysql+mysqlconnector://scott:tiger@localhost/test",
echo=True)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String(64))
    user_id = Column(Integer, ForeignKey('users.id'))


class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    addresses = relationship(Address, backref="user")


Base.metadata.drop_all(e)
Base.metadata.create_all(e)

s = Session(e)


u = User(name='test')
s.add_all([u, Address(email_address='email1', user=u),
Address(email_address='email2', user=u)])
s.commit()

# this works like expected
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)

# with eager loading (subqueryload or joinedload) it fails
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').options(subqueryload(User.addresses)).all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)
--- cut ---


If I execute, the relevant error message is:
sqlalchemy.exc.ProgrammingError:
(mysql.connector.errors.ProgrammingError) 1234 (42000): Incorrect
usage/placement of 'SQL_CALC_FOUND_ROWS' [SQL: 'SELECT addresses.id AS
addresses_id, addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id, anon_1.users_id AS
anon_1_users_id \nFROM (SELECT SQL_CALC_FOUND_ROWS users.id AS users_id
\nFROM users) AS anon_1 INNER JOIN addresses ON anon_1.users_id =
addresses.user_id ORDER BY anon_1.users_id']

Thanks,
   Daniel


Re: [sqlalchemy] SQLAlchemy dynamic & customized Collection Class

2016-02-23 Thread Mike Bayer



On 02/23/2016 01:41 AM, Edouard BERTHE wrote:

Hello everybody !

This is a question I have already posted on Stack Overflow, but as no one
has answered I decided to post it here too :) However, I invite you to look
at the Stack Overflow page, where it is perhaps explained more clearly.

I have a One-To-Many relation between an Article entity and a Post
entity (which are the comments on the article).
Each Post has a date attribute.
What I would like to do is get the posts related to an article between two
dates, directly from that article, using the following syntax:

article.posts[start_date:end_date]

With the "lazy" argument of "relationship", I can do:

 posts = relationship('Post', back_populates='article', lazy='dynamic')

After that, the "posts" attribute isn't a list any longer, but a Query
object, which allows us to do:

 article.posts.filter(Post.date >= start_date, Post.date <
end_date).all()

Which is still not exactly what I'm looking for.

I think I have to use the "collection_class" attribute of the
relationship, with a custom class in which I would override the
"__getitem__" method, but I don't know what to write in that function,
because we don't have access to the query produced by the `relationship`
object!

Does someone have an idea which could help me ?



if you're using "dynamic", collection_class is not supported.  However, you 
can send query_class to relationship() when using dynamic loading.   Build a 
subclass of Query, and override its __getitem__() method to look for dates 
within the slice object; when these are detected, convert to the filter() 
version above and emit that.


With this new Query class, pass it along to your relationship() using 
the query_class parameter:


http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_api.html?highlight=query_class#sqlalchemy.orm.relationship.params.query_class
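A self-contained sketch of that approach — the Article/Post models are assumed from the question, and the date-slice detection in __getitem__ is one possible implementation, not the only one:

```python
import datetime

from sqlalchemy import create_engine, Column, Date, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, relationship

Base = declarative_base()


class DateSliceQuery(Query):
    """Dynamic-relationship query that understands date slices."""

    def __getitem__(self, item):
        if (isinstance(item, slice)
                and isinstance(item.start, datetime.date)
                and isinstance(item.stop, datetime.date)):
            # article.posts[start_date:end_date] -> filtered result
            return self.filter(Post.date >= item.start,
                               Post.date < item.stop).all()
        # plain integer indexing/slicing keeps its normal behaviour
        return super().__getitem__(item)


class Article(Base):
    __tablename__ = 'article'
    id = Column(Integer, primary_key=True)
    posts = relationship('Post', back_populates='article',
                         lazy='dynamic', query_class=DateSliceQuery)


class Post(Base):
    __tablename__ = 'post'
    id = Column(Integer, primary_key=True)
    article_id = Column(Integer, ForeignKey('article.id'))
    date = Column(Date)
    article = relationship(Article, back_populates='posts')


e = create_engine('sqlite://')
Base.metadata.create_all(e)
s = Session(e)

a = Article()
s.add(a)
s.add_all([Post(article=a, date=datetime.date(2016, m, 1))
           for m in (1, 2, 3)])
s.commit()

selected = a.posts[datetime.date(2016, 1, 15):datetime.date(2016, 2, 15)]
print([p.date for p in selected])
```

The dynamic loader wraps the given query_class in its appender machinery, so the slice still reaches our __getitem__ with the relationship criterion already applied.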





Thank you !
Edouard





Re: [sqlalchemy] Re: Association Proxy query reference ?

2016-02-23 Thread Mike Bayer



On 02/23/2016 06:47 AM, gio wrote:

Hi  thanks for your answer

Without a filter I did not expect any big difference between
query(Keyword) and a hypothetical query(User.keywords),
but I was thinking it would be nice to do something like
session.query(User.keywords).filter(User.id == 4)
without joins.

My goal was to have a simple query that returns a structure like the
following, with user and user.keywords:

from sqlalchemy import func
myquery = session.query(UserKeyword.user_id,
func.array_agg(UserKeyword.keyword_id)).group_by(UserKeyword.user_id).all()

myquery result [(4, [3, 4]), (3, [1, 2])]

I was thinking associationproxy could help to simplify the ORM
statement, but it seems I have to join.


OK well the concept of query(Class.somerelationship) as loading that 
related entity is a long standing issue we have in bitbucket.  It's 
strictly a "nice to have" so has not been a priority for some time.


In this case, we're talking about the associationproxy, which means that 
we'd also extend that behavior to associationproxy, OK.   However, if 
someone queried for query(Class.someproxy), again anything built into 
SQLAlchemy would at best be query(RelatedClass).   What you have there 
is a specific series of columns and transformations, e.g. the user_id + 
array_agg, which also is database specific too (only works on PG).   You 
could in theory subclass associationproxy to provide a 
__clause_element__() which delivers this part, but it wouldn't 
intrinsically do the group_by() part.


The most succinct thing that's easy to do would be if you built a 
with_transformation() transformer, so that you could get all the bits 
you need done in one step.  see 
http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=with_transformation#sqlalchemy.orm.query.Query.with_transformation 
for details on that.
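A runnable sketch of such a transformer — the UserKeyword model is assumed from the thread, and SQLite's group_concat stands in for the Postgres-only array_agg so the example is portable:

```python
from sqlalchemy import create_engine, Column, Integer, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, primary_key=True)
    keyword_id = Column(Integer, primary_key=True)


def keywords_per_user(q):
    # the transformation: collapse a UserKeyword query into
    # (user_id, aggregated keyword ids) rows in one step
    return (q.with_entities(UserKeyword.user_id,
                            func.group_concat(UserKeyword.keyword_id))
             .group_by(UserKeyword.user_id)
             .order_by(UserKeyword.user_id))


e = create_engine('sqlite://')
Base.metadata.create_all(e)
session = Session(e)
session.add_all([UserKeyword(user_id=3, keyword_id=1),
                 UserKeyword(user_id=3, keyword_id=2),
                 UserKeyword(user_id=4, keyword_id=3),
                 UserKeyword(user_id=4, keyword_id=4)])
session.commit()

rows = session.query(UserKeyword).with_transformation(keywords_per_user).all()
print(rows)
```

On Postgres you'd swap func.group_concat for func.array_agg to get real arrays back, matching the [(4, [3, 4]), (3, [1, 2])] shape from the question.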






Regards
g


Am Montag, 22. Februar 2016 12:51:55 UTC+1 schrieb gio:

Hi
I was trying to use the  associationproxy feature like explained
in  the basic example
http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html

Now my question is:
Is it possible to do a query like and have as result an array of
array of keywords?

session.query(User.keywords)

Regards
g








Re: [sqlalchemy] Relationship between two databases

2016-02-23 Thread Thierry Florac
Hi,

I don't think that the solution can come from SQLAlchemy.
You may have to create a database link between your two databases to be
able to query both of them with a single instruction...

Best regards,
Thierry

2016-02-23 11:43 GMT+01:00 Mehdi :

> Hi
> Is it possible to have a one-to-many or many-to-many relationship between
> two models which exist in two different databases?
> I have two oracle dbs on two different machines on lan. so i've created
> two engines like:
> main_engine = create_engine("oracle://user:pass@ip1/sid")
> entry_engine = create_engine("oracle://user:pass@ip2/sid")
>
> Then i've created two different bases for my models and a session:
> meta_data = MetaData()
> DBSession =
> scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
> MainBase = declarative_base(cls=ModelBase, metadata=meta_data)
> EntryBase = declarative_base(cls=ModelBase, metadata=meta_data)
>
> MainBase.metadata.bind = main_engine
> EntryBase.metadata.bind = entry_engine
>
> And let's say I have two simple models like:
> class WeatherStation(MainBase):
>     __tablename__ = "weather_stations"
>     master_code = Column(NUMBER(10), primary_key=True)
>     name = Column(String(50), index=True)
>     evaporation_data = relationship("EvaporationData", backref="station")
> and the other one:
> class EvaporationData(Base):
>     __tablename__ = "evaporations_data"
>     id = Column(NUMBER(30), Sequence("evaporation_data_eid_seq"),
>                 primary_key=True)
>     station_code = Column(NUMBER(10, 0),
>                           ForeignKey("weather_stations.master_code"))
> * each class is defined in a separate file in the models folder.
>
> Now if I comment out the relationships, queries work fine. But with
> relationships between my models, I get errors like:
> failed to locate a name ("name 'EvaporationData' is not defined"). If this
> is a class name, consider adding this relationship() to the <class
> 'measurement.models.weather_station.WeatherStation'> class after both
> dependent classes have been defined.
>
> So is there any solution?
> Thank.
>



-- 
http://www.imagesdusport.com -- http://www.ztfy.org



[sqlalchemy] Re: Association Proxy query reference ?

2016-02-23 Thread gio
Hi  thanks for your answer

Without a filter I did not expect any big difference between query(Keyword)
and a hypothetical query(User.keywords),
but I was thinking it would be nice to do something like
session.query(User.keywords).filter(User.id == 4)
without joins.

My goal was to have a simple query that returns a structure like the
following, with user and user.keywords:

from sqlalchemy import func
myquery = session.query(UserKeyword.user_id, 
func.array_agg(UserKeyword.keyword_id)).group_by(UserKeyword.user_id).all()

myquery result [(4, [3, 4]), (3, [1, 2])] 

I was thinking associationproxy could help to simplify the ORM statement,
but it seems I have to join.

Regards
g


Am Montag, 22. Februar 2016 12:51:55 UTC+1 schrieb gio:
>
> Hi  
> I was trying to use the  associationproxy feature like explained 
> in  the basic example  
> http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html
> Now my question is:
> Is it possible to do a query like and have as result an array of array of 
> keywords?
>
> session.query(User.keywords) 
>
> Regards
> g
>
>
>
>
>



[sqlalchemy] Relationship between two databases

2016-02-23 Thread Mehdi
Hi
Is it possible to have a one-to-many or many-to-many relationship between
two models which exist in two different databases?
I have two Oracle DBs on two different machines on a LAN, so I've created two
engines like:
main_engine = create_engine("oracle://user:pass@ip1/sid")
entry_engine = create_engine("oracle://user:pass@ip2/sid")

Then i've created two different bases for my models and a session:
meta_data = MetaData()
DBSession = 
scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
MainBase = declarative_base(cls=ModelBase, metadata=meta_data)
EntryBase = declarative_base(cls=ModelBase, metadata=meta_data)

MainBase.metadata.bind = main_engine
EntryBase.metadata.bind = entry_engine

And let's say I have two simple models like:
class WeatherStation(MainBase):
    __tablename__ = "weather_stations"
    master_code = Column(NUMBER(10), primary_key=True)
    name = Column(String(50), index=True)
    evaporation_data = relationship("EvaporationData", backref="station")
and the other one:
class EvaporationData(Base):
    __tablename__ = "evaporations_data"
    id = Column(NUMBER(30), Sequence("evaporation_data_eid_seq"),
                primary_key=True)
    station_code = Column(NUMBER(10, 0),
                          ForeignKey("weather_stations.master_code"))
* each class is defined in a separate file in the models folder.

Now if I comment out the relationships, queries work fine. But with
relationships between my models, I get errors like:
failed to locate a name ("name 'EvaporationData' is not defined"). If this
is a class name, consider adding this relationship() to the <class
'measurement.models.weather_station.WeatherStation'> class after both
dependent classes have been defined.

So is there any solution?
Thanks.



[sqlalchemy] prefix_with for queries that eager load relationships

2016-02-23 Thread Daniel Kraus
Hi,

I want to use MySQL's `SQL_CALC_FOUND_ROWS`, but when I use 
`query.prefix_with(...)` it fails when the query eager loads a relationship,
because SQLAlchemy doesn't put the prefix at the beginning of the statement.

I'm not sure if I should file a bug report or if it's intended behaviour.
If I'm doing something wrong, how can I prefix the query only once, at the
beginning?

Here is a simple script to demonstrate the error:

https://gist.github.com/dakra/0424086f5837d722bc58

--- cut ---
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import Session, relationship, subqueryload, joinedload
from sqlalchemy.ext.declarative import declarative_base


Base = declarative_base()

e = create_engine("mysql+mysqlconnector://scott:tiger@localhost/test", 
echo=True)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String(64))
    user_id = Column(Integer, ForeignKey('users.id'))


class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    addresses = relationship(Address, backref="user")


Base.metadata.drop_all(e)
Base.metadata.create_all(e)

s = Session(e)


u = User(name='test')
s.add_all([u, Address(email_address='email1', user=u), 
Address(email_address='email2', user=u)])
s.commit()

# this works like expected
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)

# with eager loading (subqueryload or joinedload) it fails
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').options(subqueryload(User.addresses)).all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)
--- cut ---


If I execute, the relevant error message is:
sqlalchemy.exc.ProgrammingError: (mysql.connector.errors.ProgrammingError) 
1234 (42000): Incorrect usage/placement of 'SQL_CALC_FOUND_ROWS' [SQL: 
'SELECT addresses.id AS addresses_id, addresses.email_address AS 
addresses_email_address, addresses.user_id AS addresses_user_id, 
anon_1.users_id AS anon_1_users_id \nFROM (SELECT SQL_CALC_FOUND_ROWS 
users.id AS users_id \nFROM users) AS anon_1 INNER JOIN addresses ON 
anon_1.users_id = addresses.user_id ORDER BY anon_1.users_id']

Thanks,
  Daniel
