Re: [sqlalchemy] Re: Order by in select doesn't seem to work

2012-01-09 Thread Gaëtan de Menten
On Tue, Dec 27, 2011 at 16:42, Michael Bayer mike...@zzzcomputing.com wrote:

 On Dec 27, 2011, at 10:37 AM, VDK wrote:

 Michael,

 I simplified my code just for test purposes. I'm now working with only
 two columns, without order_by clause, commented a few other lines with
 order by. I'm sure there is no order_by left in my code. I run the
 script with echo set to debug. The result:

 INFO:sqlalchemy.engine.base.Engine.0x...a7d0:{}
 DEBUG:sqlalchemy.engine.base.Engine.0x...a7d0:Col
 ('contributie_alle_leden_id', 'contributie_alle_leden_achternaam')
 2011-12-27 16:23:05,839 INFO sqlalchemy.engine.base.Engine.0x...a7d0
 SELECT contributie_alle_leden.id AS contributie_alle_leden_id,
 contributie_alle_leden.achternaam AS
 contributie_alle_leden_achternaam
 FROM (SELECT Leden.id AS id, Leden.achternaam AS achternaam
 FROM Leden) AS contributie_alle_leden ORDER BY
 contributie_alle_leden.id

 The order by is still added. As sqlalchemy doesn't add things, the
 only suspect now is elixir. This is part of the Camelot framework and
 acts as a layer on top of SQLAlchemy.

 *elixir* - I just searched your other emails and it appears they fail to 
 mention this extremely critical detail.   Elixir adds default order_bys.   
 You need to use elixir's configuration flags to disable that.

Hey! Elixir is clearly not perfect, but that one was uncalled for,
because Elixir *does not* add default order_bys (unless you tell it
to, of course). So the culprit is either Camelot or the user's code.

In [1]: from elixir import *

In [2]: class A(Entity):
   ...: name = Field(String(20))
   ...:

In [3]: metadata.bind = 'sqlite://'

In [4]: metadata.bind.echo = True

In [5]: setup_all(True)
2012-01-09 09:53:08,694 INFO sqlalchemy.engine.base.Engine PRAGMA table_info(__main___a)
2012-01-09 09:53:08,696 INFO sqlalchemy.engine.base.Engine ()
2012-01-09 09:53:08,698 INFO sqlalchemy.engine.base.Engine
CREATE TABLE __main___a (
id INTEGER NOT NULL,
name VARCHAR(20),
PRIMARY KEY (id)
)


2012-01-09 09:53:08,700 INFO sqlalchemy.engine.base.Engine ()
2012-01-09 09:53:08,704 INFO sqlalchemy.engine.base.Engine COMMIT

In [6]: A.query.all()
2012-01-09 09:53:15,782 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2012-01-09 09:53:15,783 INFO sqlalchemy.engine.base.Engine SELECT __main___a.id AS __main___a_id, __main___a.name AS __main___a_name
FROM __main___a
2012-01-09 09:53:15,786 INFO sqlalchemy.engine.base.Engine ()
Out[6]: []
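
(For what it's worth, the only way I know of to make Elixir itself add an
ORDER BY is to ask for one explicitly via using_options. A minimal sketch,
assuming the same setup as the demo above and that I remember the option
name correctly:

    from elixir import *

    class B(Entity):
        # only with an explicit order_by does Elixir append ORDER BY
        using_options(order_by='name')
        name = Field(String(20))

Without that, queries come out exactly as in the echo output above, with no
ORDER BY clause.)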

 I think this email thread is going to become the textbook example of why we 
 took out default order by, the next time someone asks.

-- 
Gaëtan de Menten




[sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Yuen Ho Wong
Here's a list of pathological test cases that confuse the hell out of SA
while trying to eager load more than one collection mapped to the same
table using single table inheritance. In short, of all the eager loading
methods, only joinedload*() appears to work. This pretty much means that
supplying eager load options to a Query object doesn't mean you will
always get back the same result.

Ideally, I'd like subqueryload*() and contains_eager() to work just like 
joinedload*() to prevent a situation where I have to waste bandwidth 
loading the same data over and over again or doing MxN queries.

Is there any way I can do what I want without rewriting my complicated
query as a full SQL expression?

Thanks!


#! /usr/bin/env python

from sqlalchemy import Column, Integer, Unicode, ForeignKey, Enum, Table, \
create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref, sessionmaker, joinedload, \
joinedload_all, contains_eager, subqueryload, subqueryload_all

Base = declarative_base()
metadata = Base.metadata


class SearchOption(Base):
    __tablename__ = "searchoption"

    id = Column(Integer, autoincrement=True, primary_key=True)
    parent_id = Column(Integer, ForeignKey(id,
                                           onupdate="CASCADE",
                                           ondelete="CASCADE"))
    parent = relationship("SearchOption", uselist=False, remote_side=[id],
                          backref=backref("children"))
    discriminator = Column("type", Enum("origin", "food",
                                        name="searchoptiontype"))
    __mapper_args__ = {"polymorphic_on": discriminator}
    displayname = Column(Unicode(64), nullable=False)


class OriginOption(SearchOption):
    __mapper_args__ = {"polymorphic_identity": "origin"}


class FoodOption(SearchOption):
    __mapper_args__ = {"polymorphic_identity": "food"}


product_searchoption_table = Table("product_searchoption", metadata,
                                   Column("product_id",
                                          Integer,
                                          ForeignKey("product.id",
                                                     onupdate="CASCADE",
                                                     ondelete="CASCADE"),
                                          primary_key=True),
                                   Column("searchoption_id",
                                          Integer,
                                          ForeignKey("searchoption.id",
                                                     onupdate="CASCADE",
                                                     ondelete="CASCADE"),
                                          primary_key=True))


class Product(Base):

    __tablename__ = "product"

    id = Column(Integer, autoincrement=True, primary_key=True)

    origin = relationship(OriginOption, uselist=False,
                          secondary=product_searchoption_table)
    foods = relationship(FoodOption,
                         secondary=product_searchoption_table)


if __name__ == "__main__":
    engine = create_engine("postgresql+psycopg2://tester:tester@localhost/test_eagerload",
                           echo=True)
    Session = sessionmaker(engine)
    session = Session()

    metadata.create_all(engine)

    usa = OriginOption(displayname=u"usa")
    canada = OriginOption(displayname=u"canada")

    apple = FoodOption(displayname=u"apple")
    orange = FoodOption(displayname=u"orange")
    banana = FoodOption(displayname=u"banana")

    product1 = Product(origin=usa, foods=[apple])
    product2 = Product(origin=canada, foods=[orange, banana])
    product3 = Product()

    session.add(product1)
    session.add(product2)
    session.add(product3)

    session.commit()

    session.expunge_all()

    # If all the collections to eager load belong to a single table inheritance
    # mapping, there's no way to let SA know to optimize this, it seems.
    p = session.query(Product)\
        .options(subqueryload_all(Product.origin, Product.foods))\
        .filter(Product.id == 2).one()

    assert p.id == 2
    assert p.origin.displayname == u"canada"  # This is only eager loaded by the previous query
    assert [f.displayname for f in p.foods] == [u'orange', u'banana']  # This is still lazy loaded

    session.expunge_all()

    # Now all the collections are eagerly loaded, but extremely inefficient
    # because of all the MxN queries
    p = session.query(Product)\

Re: [sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 7:34 AM, Yuen Ho Wong wrote:

 Here's a list of pathological test cases that confuses the hell out of SA 
 while trying to eager load more than 1 collections which are mapped to the 
 same table using single table inheritance. In short, only joinedload*() 
 appears to work out of all the eager loading methods. This pretty much means 
 that supplying eager load options to a Query object doesn't mean you will 
 always get back the same result.

A common theme in SQLAlchemy is that, despite all the flak we get for being 
complicated, SQLAlchemy is actually very simple.   It has a handful of 
constructs which seek to do exactly the same thing in exactly the same way, as 
consistently as possible.   So we sometimes get requests for "SQLAlchemy should 
figure out XYZ" and it's like, "well, not really, that would be really 
complicated."

Here we have a mixture of that, as well as some missing features that are 
planned, as well as a few API mis-usages, as well as "I wouldn't really do 
things the way you're doing them, at least for now."

The first thing I note here is, if I were doing a model like this, I'd either 
use two different association tables between Product-Origin and Product-Food, 
or I'd make one relationship(), and handle the filtering in Python (which is 
essentially what you're wishing SQLAlchemy did here).   The ORM wants to know 
about your table relationships, which here is just A->assoc->B.   All of the 
issues here, some of which I consider to be SQLA bugs anyway, would go away if 
you did it in that manner, subqueryloading would be efficient, etc.

Another nitpick: joinedload_all() and subqueryload_all() are meant to load along 
chains (A->B->C->D), not siblings (A->B, A->C), i.e.:

p = session.query(Product)\
    .options(joinedload_all(Product.origin, Product.foods))\   <--- incorrect
    .filter(Product.id == 2).one()

you'd use joinedload() twice for that. So all of the examples where 
you're distinguishing xyz_all() from xyz(): all the same thing.
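
For instance, the sibling form would presumably be spelled with two separate
options (a sketch, reusing the classes from the test script):

    p = session.query(Product)\
        .options(joinedload(Product.origin), joinedload(Product.foods))\
        .filter(Product.id == 2).one()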

It's silently ignored now, so http://www.sqlalchemy.org/trac/ticket/2370 has been 
added.  If I can get to it in 0.7 it will emit a warning; it will raise an error 
in 0.8.

In the subqueryload_all()/subqueryload() example, you'll note the second one, 
with two separate subqueryload() calls, does in fact correctly do the 
subqueryload twice.   There's no way SQLA would ever figure out 
automatically that they're against the same table and then join the two queries 
together; decisionmaking like that would be enormously complicated, as would the 
mechanics of how to get a single loader to coordinate among two relationships.  
 If you use just one relationship() to SearchOption and then provide filtered 
accessors, you get exactly the optimization you're looking for.
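
A rough sketch of that single-relationship-plus-filtered-accessors approach
(the property names here are only illustrative, not anything SQLA provides):

    class Product(Base):
        __tablename__ = "product"

        id = Column(Integer, autoincrement=True, primary_key=True)
        options = relationship(SearchOption,
                               secondary=product_searchoption_table)

        @property
        def origin(self):
            # the single OriginOption, if any, filtered in Python
            for o in self.options:
                if o.discriminator == "origin":
                    return o
            return None

        @property
        def foods(self):
            return [o for o in self.options if o.discriminator == "food"]

A single subqueryload(Product.options) then brings everything in with two
queries total.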

In the joinedload() example, that's sort of a bug or sort of a missing feature. 
   I can't fix that immediately, because we still have a policy whereby 
eagerloading doesn't parenthesize the joins - it always flattens them out.   A 
discussion of this issue is at http://www.sqlalchemy.org/trac/ticket/2120 and 
at this point it's basically SQLite preventing us from doing it, as it chokes 
on a statement like: select * from a left outer join (b join c on b.id=c.bid) 
on a.id=b.aid.   The example is added in 
http://www.sqlalchemy.org/trac/ticket/2369 .

Ticket 2120 calls for at least an option nested_joins=True, specifying that 
the more correct/efficient system of nesting joins should be used.   This is 
all 0.8 stuff, as the joined eager loading code would be destabilized by this - 
if it turns out to be an isolated option it could move to 0.7.   Your set of 
tests here makes me more interested in the issue though so perhaps we'll see if 
I have time to try some things out.

Finally the contains_eager version.   SQLAlchemy again expects two distinct 
sets of columns for each relationship, so you must join to the table twice:

sa1 = aliased(OriginOption)
sa2 = aliased(FoodOption)

p = session.query(Product)\
.join(sa1, Product.origin)\
.join(sa2, Product.foods)\
.options(contains_eager(Product.origin, alias=sa1),
 contains_eager(Product.foods, alias=sa2))\
.filter(Product.id == 2).one()


Same theme here: you're hoping SQLAlchemy can figure out something in Python, 
i.e. that only certain rows being routed to Product.origin/Product.foods 
should be used for each, but it's not that complicated. It relies upon SQL 
to present it with the correct data geometry and assumes it is correct.  
Second-guessing what it gets back from SQL to check, "oh, is this some special 
1% edge case where I might have to filter extra types that I'm not supposed to 
receive here?", wouldn't be efficient or consistent with how everything else 
works.

So overall, a few bugs we'll fix at some point, but if you want filtering in 
Python, build that onto your Product object.   I would note how even though 

[sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent
 i guess the patch is interacting with that load_on_pending stuff, which I 
 probably added for you also.  It would be nice to really work up a new 
 "SQLAlchemy feature: detached/transient object loading" document that really 
 describes what it is we're trying to do here.    If you were to write such a 
 document, what example would you give as the rationale ?    I know that's the 
 hard part here, but this is often very valuable, to look at your internal 
 system and genericize it into something universally desirable.

As far as such a document, would you want a trac ticket opened with my
use case in a generalized form where others may likely have the same
use case?

Hoping to not upset you here:

My AttributeImpl.callable_ hack to set a transient state's
session_id, load the relationship, and then set it back to None works
for m2o, but when it needs to load a collection (or when it can't use get(),
I presume), then I am hitting this "return None":

class LazyLoader(AbstractRelationshipLoader):
    def _load_for_state(self, state, passive):
        ...
        ...
        lazy_clause = strategy.lazy_clause(state)

        if pending:
            bind_values = sql_util.bind_values(lazy_clause)
            if None in bind_values:
                return None    ###

        q = q.filter(lazy_clause)

In SQLA 0.6.4,
bind_values = sql_util.bind_values(lazy_clause) would return the value
of the foreign key from the transient object.

In SQLA 0.7.5,
it returns [None], presumably because the committed values are not
set?

Short term, do you know right off what changed or what I could do to
work around this?




Re: [sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 11:19 AM, Michael Bayer wrote:

 The first thing I note here is, if I were doing a model like this, I'd either 
 use two different association tables between Product-Origin and 
 Product-Food, or I'd make one relationship(), and handle the filtering in 
 Python (which is essentially what you're wishing SQLAlchemy did here).   The 
 ORM wants to know about your table relationships which here is just 
 A-assoc-B.All of the issues here, some of which I consider to be SQLA 
 bugs anyway, would go away if you did it in that manner, subqueryloading 
 would be efficient, etc.


Here's an event that does what you need:

from sqlalchemy.orm import attributes
from sqlalchemy import event

class Product(Base):

    __tablename__ = "product"

    id = Column(Integer, autoincrement=True, primary_key=True)

    options = relationship(SearchOption, secondary=product_searchoption_table)
    origin = relationship(OriginOption, uselist=False,
                          secondary=product_searchoption_table)
    foods = relationship(FoodOption,
                         secondary=product_searchoption_table)

@event.listens_for(Product, "load")
def mything(target, context):
    if 'options' in target.__dict__:
        attributes.set_committed_value(
            target, 'foods',
            [o for o in target.options if o.discriminator == 'food']
        )
        origin = [o for o in target.options if o.discriminator == 'origin']
        attributes.set_committed_value(
            target, 'origin',
            origin[0] if origin else None
        )

# only 2 queries
for row in session.query(Product).options(subqueryload(Product.options)):
    print row, row.origin, row.foods

if I added an onload event for individual relationship attributes that would 
make this event a little more targeted.







Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 2:30 PM, Kent wrote:

 i guess the patch is interacting with that load_on_pending stuff, which I 
 probably added for you also.  It would be nice to really work up a new 
 SQLAlchemy feature: detached/transientobject loading document that really 
 describes what it is we're trying to do here.If you were to write such a 
 document, what example would you give as the rationale ?I know that's 
 the hard part here, but this is often very valuable, to look at your 
 internal system and genericize it into something universally desirable.
 
 As far as such a document, would you want a trac ticket opened with my
 use case in a generalized form where others may likely have the same
 use case?
 
 Hoping to not upset you here.:
 
 My AttributeImpl.callable_ hack to set a transient state's
 session_id, load the relationship, and then set it back to None works
 for m2o but when it needs to load a collection (or it can't use get()
 I presume), then I am hitting this return None:
 
 class LazyLoader(AbstractRelationshipLoader):
def _load_for_state(self, state, passive):
...
...
lazy_clause = strategy.lazy_clause(state)
 
if pending:
bind_values = sql_util.bind_values(lazy_clause)
if None in bind_values:
return None###  
 
q = q.filter(lazy_clause)

that means some of the columns being linked to the foreign keys on the target 
are None.  If you want your lazyload to work, all the attributes need to be 
populated.   If you're hitting the "get committed" thing, and the attributes 
are only pending, then that's what that is.





Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 2:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:30 PM, Kent wrote:


i guess the patch is interacting with that load_on_pending stuff, which I probably 
added for you also.  It would be nice to really work up a new SQLAlchemy feature: 
detached/transientobject loading document that really describes what it is we're trying to do 
here.If you were to write such a document, what example would you give as the rationale ?I 
know that's the hard part here, but this is often very valuable, to look at your internal system 
and genericize it into something universally desirable.

As far as such a document, would you want a trac ticket opened with my
use case in a generalized form where others may likely have the same
use case?

Hoping to not upset you here.:

My AttributeImpl.callable_ hack to set a transient state's
session_id, load the relationship, and then set it back to None works
for m2o but when it needs to load a collection (or it can't use get()
I presume), then I am hitting this return None:

class LazyLoader(AbstractRelationshipLoader):
def _load_for_state(self, state, passive):
...
...
lazy_clause = strategy.lazy_clause(state)

if pending:
bind_values = sql_util.bind_values(lazy_clause)
if None in bind_values:
return None###

q = q.filter(lazy_clause)

that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work all the attributes need to be populated.   If you're 
hitting the get committed  thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?




Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 2:36 PM, Kent Bower wrote:

 that means some of the columns being linked to the foreign keys on the 
 target are None.  If you want your lazyload to work all the attributes need 
 to be populated.   If you're hitting the get committed  thing, and the 
 attributes are only pending, then that's what that is.
 
 But this changed from 0.6.4?

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)





Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 5:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:36 PM, Kent Bower wrote:


that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work all the attributes need to be populated.   If you're 
hitting the get committed  thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)

Except that my patched version of 0.6.4 (which I was referring to) 
already has that change from that ticket patched in.  It must be 
something else, I'm still looking...





[sqlalchemy] Strange session.commit behavior

2012-01-09 Thread Pavel Ponomarev
Hi,

Heard lots of good things about sqlalchemy and decided to give it a
try.
But almost immediately was confused by strange session.commit()
behavior, please look through following snippets.

Update is pretty straightforward:

>>> obj = MyModel.query.first()
>>> obj.attr
foo
>>> obj.attr = 'bar'
>>> session.commit()
>>> obj.attr
bar

And it works great, but when I need to update a bunch of attrs from a dict,
my first thought would be the built-in vars function:

>>> obj.attr
bar
>>> kwargs = {'attr':'foo'}
>>> vars(obj).update(kwargs)
>>> obj.attr
foo
>>> session.commit()
>>> obj.attr
bar

Am I missing something?

Sure, I could use obj.query.update(kwargs), but the vars approach is a good
pythonic pattern. Also, I'm trying to keep SA away from the logic.

Thanks,
Pavel




[sqlalchemy] SQL Alchemy Closure Table Relationship Definition

2012-01-09 Thread jonstjohn
I recently started working with SQL Alchemy for a project that
involves climbing areas and routes. Areas are hierarchical in that a
single area may contain multiple areas, which in turn may contain
other areas. A route is directly associated with a single area, but is
also associated with that area's parent, etc.

To implement this I chose to use a closure table a la Bill Karwin
(http://karwin.blogspot.com/2010/03/rendering-trees-with-closure-tables.html).
In the closure table implementation, a second table is created to store the
ancestor/descendent information. A self-referencing row is created when a node
is added, as well as a row for each ancestor in the tree.


The table structure is as follows (simplified):

-- area --
area_id
name

-- area_relationship --
ancestor
descendent

-- route --
route_id
area_id
name


Sample data:

-- area --
1, New River Gorge
2, Kaymoor
3, South Nuttall
4, Meadow River Gorge

-- area_relationship (ancestor, descendent) --
1, 1 (self-referencing)
2, 2 (self-referencing)
1, 2 (Kaymoor is w/i New River Gorge)
3, 3 (self-referencing)
1, 3 (South Nutall is w/i New River Gorge)
4, 4 (self-referencing)

-- route (route_id, area_id, name)
1, 2, Leave it to Jesus
2, 2, Green Piece
3, 4, Fancy Pants


To query for all areas for a given route (up the tree), I can execute:

SELECT area.area_id, area.name
FROM route
INNER JOIN area_relationship ON route.area_id =
area_relationship.descendent
INNER JOIN area ON area.area_id = area_relationship.ancestor
WHERE route.route_id = 1

Similarly, I can query for all routes in a particular area (including
descendent areas) with:

SELECT route.route_id, route.name
FROM area
INNER JOIN area_relationship ON area.area_id =
area_relationship.ancestor
INNER JOIN route ON route.area_id = area_relationship.descendent
WHERE area.area_id = 1


In SQL Alchemy I've created a relationship and two tables to handle
these relationships:

area_relationship_table = Table('area_relationship', Base.metadata,
  Column('ancestor', Integer, ForeignKey('area.area_id')),
  Column('descendent', Integer, ForeignKey('area.area_id'))
)


DbArea class -

class DbArea(Base):

    __tablename__ = 'area'

    area_id = Column(Integer, primary_key = True)
    name = Column(VARCHAR(50))
    created = Column(DATETIME)

    area_relationship_table.c.ancestor])

    descendents = relationship('DbArea', backref = 'ancestors',
        secondary = area_relationship_table,
        primaryjoin = area_id == area_relationship_table.c.ancestor,
        secondaryjoin = area_id == area_relationship_table.c.descendent)


DbRoute class -

class DbRoute(Base):

    __tablename__ = 'route'

    route_id = Column(Integer, primary_key = True)
    area_id = Column(Integer, ForeignKey('area.area_id'))
    name = Column(VARCHAR(50))
    created = Column(DATETIME)

    area = relationship(DbArea)

    areas = relationship('DbArea', backref = 'routes',
        secondary = area_relationship_table,
        primaryjoin = area_id == area_relationship_table.c.ancestor,
        secondaryjoin = area_id == area_relationship_table.c.descendent,
        foreign_keys=[area_relationship_table.c.ancestor,
                      area_relationship_table.c.descendent])


Currently, I am able to determine the areas from the individual route,
using the areas relationship in DbRoute. However, when I try to use
the backref 'routes' in DbArea, I get the following error:

sqlalchemy.exc.StatementError: No column route.area_id is configured
on mapper Mapper|DbArea|area... (original cause: UnmappedColumnError:
No column route.area_id is configured on mapper Mapper|DbArea|area...)
'SELECT route.route_id AS route_route_id, route.area_id AS
route_area_id, route.name AS route_name, route.created AS
route_created \nFROM route, area_relationship \nWHERE %s =
area_relationship.descendent AND route.area_id =
area_relationship.ancestor' [immutabledict({})]

I'm guessing that I likely need to add something to DbArea to
establish the relationship, but after experimenting with some
different options was unable to determine the solution.




Re: [sqlalchemy] Strange session.commit behavior

2012-01-09 Thread Michael Bayer

On Jan 7, 2012, at 6:05 PM, Pavel Ponomarev wrote:

 Hi,
 
 Heard lots of good things about sqlalchemy and decided to give it a
 try.
 But almost immediately was confused by strange session.commit()
 behavior, please look through following snippets.
 
 Update is pretty straightforward:
 
 \And it works great, but when I need to update bunch of attrs from dict
 first thought would be built-in vars function:
 
 obj.attr
 bar
 kwargs = {'attr':'foo'}
 vars(obj).update(kwargs)
 obj.attr


SQLAlchemy uses descriptors (see 
http://docs.python.org/reference/datamodel.html#implementing-descriptors) to 
intercept attribute set/get/delete events.  These events then feed into the 
unit of work implementation and result in SQL statements to emit when the 
pending state is flushed.   This usage of descriptors is mentioned in passing 
at 
http://www.sqlalchemy.org/docs/orm/tutorial.html#create-an-instance-of-the-mapped-class
 

When you use vars(obj), you're essentially dealing with obj.__dict__ directly.  
This bypasses the class in use and any behavior defined on it, essentially 
writing data directly to the underlying storage (arguably not as pythonic; won't 
work with __slots__, for example).   So you need to use setattr() or other 
methods that don't bypass instrumentation when setting attributes.





RE: [sqlalchemy] Strange session.commit behavior

2012-01-09 Thread Jackson, Cameron
I agree with Mike, I don't think vars(obj).update(kwargs) is the best way to do 
it. Python trusts the programmer enough to let you bypass the middle layers of 
abstraction like this, but it's not always a good idea.

Besides, it only takes 2 lines to do it with setattr() instead:

for key, val in kwargs.iteritems():
setattr(obj, key, val)

SQLAlchemy aside, I think the above is better anyway. To me it's a lot clearer 
at a glance what it is doing, but that may just be me. I've never seen anyone 
do your vars update method.

Cameron Jackson
Engineering Intern
Air Operations
Thales Australia
Thales Australia Centre, WTC Northbank Wharf, Concourse Level,
Siddeley Street, Melbourne, VIC 3005, Australia
Tel: +61 3 8630 4591
cameron.jack...@thalesgroup.com.au | www.thalesgroup.com.au
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Tuesday, 10 January 2012 10:59 AM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Strange session.commit behavior


On Jan 7, 2012, at 6:05 PM, Pavel Ponomarev wrote:


Hi,

Heard lots of good things about sqlalchemy and decided to give it a
try.
But almost immediately was confused by strange session.commit()
behavior, please look through following snippets.

Update is pretty straightforward:

\And it works great, but when I need to update bunch of attrs from dict
first thought would be built-in vars function:


obj.attr
bar

kwargs = {'attr':'foo'}
vars(obj).update(kwargs)
obj.attr


SQLAlchemy uses descriptors (see 
http://docs.python.org/reference/datamodel.html#implementing-descriptors) to 
intercept attribute set/get/delete events.  These events then feed into the 
unit of work implementation and result in SQL statements to emit when the 
pending state is flushed.   This usage of descriptors is mentioned in passing 
at 
http://www.sqlalchemy.org/docs/orm/tutorial.html#create-an-instance-of-the-mapped-class

When you use vars(obj), you're essentially dealing with obj.__dict__ directly.  
This bypasses the class in use and any behavior defined on it, essentially 
writing data directly to the underlying storage (arguably not as pythonic, wont 
work with __slots__ for example).   So you need to use setattr() or other 
methods that don't bypass instrumentation when setting attributes.








Re: [sqlalchemy] SQL Alchemy Closure Table Relationship Definition

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 6:51 PM, jonstjohn wrote:

 To implement this I chose to use a closure table ala Bill Karwin
 (http://karwin.blogspot.com/2010/03/rendering-trees-with-closure-
 tables.html). In the closure table implementation, a second table is
 created to store the ancestor/descendent information. A self-
 referencing row is created when a node is added, as well as a row for
 each ancestor in the tree.

This is a schema design I've not seen before.  What's unusual here is that 
you'd like the "route" table to have something to do with the 
"area_relationship" table, however "area_relationship" has no foreign key to 
"route", nor vice versa.  This doesn't follow typical normalization rules, 
in that it's very easy to have rows in DbRoute that don't exist in 
area_relationship_table.  SQLAlchemy's ORM can't really navigate around a 
design like that and it's probably not really what you want here.

I checked the blog post you refer to and it does not have this pattern.  It has 
just a "node" table, which here corresponds to DbArea, and "closure", which 
here would be area_relationship_table, and in SQLA we call it a 
self-referential many-to-many.   There's no extra table, so we need to figure 
out what you want there.

If it's the case that information such as "Leave it to Jesus", "Green Piece" 
is associated with a particular association between two areas, you would first 
need to promote area_relationship to be a fully mapped class, forming what we 
refer to as an association object 
(http://www.sqlalchemy.org/docs/orm/relationships.html#association-object) - a 
many-to-many table that contains additional information about the association.  
 Based on how you've named things here I think you'd then want to foreign key 
that table to DbRoute, so that DbRoute contains a collection of 
AreaRelationship associations.   You could also do something simpler, which is 
just to add a string column to the AreaRelationship association table directly.

Introducing the association object usually leads to the usage of the 
association proxy to hide the middle object in most cases.As this is 
likely to be a lot to take in, and there's also some complexity in getting the 
routes for an area, the attached script illustrates a mapping that seems to 
correspond to the sample data you have.  
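
(The attached script isn't reproduced in this archive. Below is a rough sketch
of the kind of mapping described above, with an AreaRelationship association
object and association proxies; attribute names like ancestor_rels and
descendent_rels are guesses, not necessarily what the attachment used:

    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.ext.associationproxy import association_proxy
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class AreaRelationship(Base):
        __tablename__ = 'area_relationship'

        ancestor_id = Column('ancestor', Integer,
                             ForeignKey('area.area_id'), primary_key=True)
        descendent_id = Column('descendent', Integer,
                               ForeignKey('area.area_id'), primary_key=True)

        # each row links one ancestor area to one descendent area
        ancestor = relationship('DbArea', foreign_keys=[ancestor_id],
                                backref='descendent_rels')
        descendent = relationship('DbArea', foreign_keys=[descendent_id],
                                  backref='ancestor_rels')

    class DbArea(Base):
        __tablename__ = 'area'

        area_id = Column(Integer, primary_key=True)
        name = Column(String(50))

        # hide the association object behind plain collections of DbArea
        ancestors = association_proxy('ancestor_rels', 'ancestor')
        descendents = association_proxy('descendent_rels', 'descendent')
)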







 
 
 The table structure is as follows (simplified):
 
 -- area --
 area_id
 name
 
 -- area_relationship --
 ancestor
 descendent
 
 -- route --
 route_id
 area_id
 name
 
 
 Sample data:
 
 -- area --
 1, New River Gorge
 2, Kaymoor
 3, South Nuttall
 4, Meadow River Gorge
 
 -- area_relationship (ancestor, descendent) --
 1, 1 (self-referencing)
 2, 2 (self-referencing)
 1, 2 (Kaymoor is w/i New River Gorge)
 3, 3 (self-referencing)
 1, 3 (South Nutall is w/i New River Gorge)
 4, 4 (self-referencing)
 
 -- route (route_id, area_id, name)
 1, 2, Leave it to Jesus
 2, 2, Green Piece
 3, 4, Fancy Pants
 
 
 To query for all areas for a given route (up the tree), I can execute:
 
 SELECT area.area_id, area.name
 FROM route
INNER JOIN area_relationship ON route.area_id =
 area_relationship.descendent
INNER JOIN area ON area.area_id = area_relationship.ancestor
 WHERE route.route_id = 1
 Similarly, I can query for all routes in a particular area (including
 descendent areas) with:
 
 SELECT route.route_id, route.name
 FROM area
INNER JOIN area_relationship ON area.area_id =
 area_relationship.ancestor
INNER JOIN route ON route.area_id = area_relationship.descendent
 WHERE area.area_id = 1
 
 
 In SQL Alchemy I've created a relationship and two tables to handle
 these relationships:
 
 area_relationship_table = Table('area_relationship', Base.metadata,
  Column('ancestor', Integer, ForeignKey('area.area_id')),
  Column('descendent', Integer, ForeignKey('area.area_id'))
 )
 
 
 DbArea class -
 
 class DbArea(Base):
 
__tablename__ = 'area'
 
area_id = Column(Integer, primary_key = True)
name = Column(VARCHAR(50))
created = Column(DATETIME)
 
area_relationship_table.c.ancestor])
 
descendents = relationship('DbArea', backref = 'ancestors',
secondary =  area_relationship_table,
primaryjoin = area_id == area_relationship_table.c.ancestor,
secondaryjoin = area_id ==
 area_relationship_table.c.descendent)
 
 
 DbRoute class -
 
 class DbRoute(Base):
 
__tablename__ = 'route'
 
route_id = Column(Integer, primary_key = True)
area_id = Column(Integer, ForeignKey('area.area_id'))
name = Column(VARCHAR(50))
created = Column(DATETIME)
 
area = relationship(DbArea)
 
areas = relationship('DbArea', backref = 'routes',
secondary = area_relationship_table,
primaryjoin = area_id ==
 area_relationship_table.c.ancestor,
secondaryjoin = area_id ==
 area_relationship_table.c.descendent,
foreign_keys=[area_relationship_table.c.ancestor,
area_relationship_table.c.descendent])
 
 
 

Re: [sqlalchemy] SQL Alchemy Closure Table Relationship Definition

2012-01-09 Thread Michael Bayer
typo in the DbArea.ancestors attribute:

ancestors = association_proxy("ancestor_rels", "ancestor")


more demos:

print s.query(DbArea).filter_by(name="Kaymoor").one().ancestors
print s.query(DbArea).filter_by(name="Kaymoor").one().descendents







Re: [sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Jimmy Yuen Ho Wong
On 10/1/12 12:19 AM, Michael Bayer wrote:
 A common theme in SQLAlchemy is that, despite all the flak we get for being 
 complicated, SQLAlchemy is actually very simple.   It has a handful of 
 constructs which seek to do exactly the same thing in exactly the same way, 
 as consistently as possible.   So we sometimes get requests for SQLAlchemy 
 should figure out XYZ and it's like, well not really, that would be really 
 complicated.
Ah ha, don't get me wrong, I didn't say SA was complicated, I said my
actual queries used in production are complicated. What I've shown you
was just a simplified version of what I've narrowed down. SA sure has a
large surface area to get started with because it does so much more, but I've
grown to love it. This is really the first time I've tried to do something
complicated like this with SA and I was testing whether SA would be smart
about this. I can't really live without SA now.
 you'd use two joinedload() twice for that. So all of the examples where 
 you're distinguishing xyz_all() from xyz(), all the same thing.   

 It's silently ignored now so http://www.sqlalchemy.org/trac/ticket/2370 is 
 added.  If I can get to it in 0.7 it will emit a warning, will raise an error 
 in 0.8.
Ah, thanks for telling me this. Didn't notice this from the docs.
 In the subqueryload_all()/subqueryload() example, you'll note the second one 
 with two separate subqueryload() calls does in fact correctly do the 
 subqueryload twice.   There's no way SQLA would ever figure out 
 automatically that they're against the same table and then join the two 
 queries together, decisionmaking like would be enormously complicated as well 
 as the mechanics of how to get a single loader to coordinate among two 
 relationships.   If you use just one relationship() to SearchOption then 
 provide filtered accessors, then you get exactly the optimization you're 
 looking for.
Is it possible to include a special flag in the subqueryload() call to
tell SA to optimize for this case? SA already knows enough about the class
hierarchy to be able to distinguish between the classes and how they map
to the rows returned, and changing the model mapping is not always feasible.
In fact, if SA could do this, subqueryload() would be the optimal
solution for this use case. joinedload(), even when it works, is still
too wasteful. Using events will work for now, but it's not as obvious.
SA is already so smart about things, surely it can be smarter, no? :P
 Ticket 2120 calls for at least an option nested_joins=True, specifying that 
 the more correct/efficient system of nesting joins should be used.   This is 
 all 0.8 stuff, as the joined eager loading code would be destabilized by this 
 - if it turns out to be an isolated option it could move to 0.7.   Your set 
 of tests here makes me more interested in the issue though so perhaps we'll 
 see if I have time to try some things out.
Does it mean that the only way to eager load reliably in this
many-to-many single table inheritance use case, depending on the data,
is joinedload(innerjoin=True) for uselist=False relationships for now?
In my case, I can just use innerjoin=True all the way, but there may be
cases where the code has to switch between innerjoin and left join
depending on the relationships.

 Finally the contains_eager version.   SQLAlchemy again expects two distinct 
 sets of columns for each relationship, so you must join to the table twice:

 sa1 = aliased(OriginOption)
 sa2 = aliased(FoodOption)

 p = session.query(Product)\
 .join(sa1, Product.origin)\
 .join(sa2, Product.foods)\
 .options(contains_eager(Product.origin, alias=sa1),
  contains_eager(Product.foods, alias=sa2))\
 .filter(Product.id == 2).one()


 Same theme here, you're hoping SQLAlchemy can figure out something in 
 Python, i.e. that only certain rows being routed to 
 Product.origin/Product.foods should be used for each, but it's not that 
 complicated. It relies upon SQL to present it with the correct data 
 geometry and assumes it is correct.  Second-guessing what it gets back from 
 SQL to check, oh is this some special 1% edge case where I might have to 
 filter extra types that I'm not supposed to receive here? wouldn't be 
 efficient or consistent with how everything else works.
Ah, thanks for this solution, but again, self-joining for each
relationship is too inefficient. The point is, as a user, I think the
query with a single join already contains all the data needed to
reconstruct the collections. Plus, I've already told SA that the results
will contain the rows that it can populate the collections with, so why
can't it do that? That's the thought that went through my head.

Again, I think if contains_eager() accepts a flag that tells it to
optimize for this case, the code to handle this will be isolated.


Re: [sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Jimmy Yuen Ho Wong
Ah Thanks! A bit of a hack but certainly works for now. Thanks for
helping out. Really appreciate it!

Jimmy Yuen Ho Wong

On 10/1/12 3:31 AM, Michael Bayer wrote:
 On Jan 9, 2012, at 11:19 AM, Michael Bayer wrote:

 The first thing I note here is, if I were doing a model like this, I'd 
 either use two different association tables between Product-Origin and 
 Product-Food, or I'd make one relationship(), and handle the filtering in 
 Python (which is essentially what you're wishing SQLAlchemy did here).   The 
 ORM wants to know about your table relationships which here is just 
 A-assoc-B.All of the issues here, some of which I consider to be SQLA 
 bugs anyway, would go away if you did it in that manner, subqueryloading 
 would be efficient, etc.

 Here's an event that does what you need:

 from sqlalchemy.orm import attributes
 from sqlalchemy import event

 class Product(Base):

 __tablename__ = product

 id = Column(Integer, autoincrement=True, primary_key=True)

 options = relationship(SearchOption, secondary=product_searchoption_table)
 origin = relationship(OriginOption, uselist=False,
   secondary=product_searchoption_table)
 foods = relationship(FoodOption,
  secondary=product_searchoption_table)

 @event.listens_for(Product, load)
 def mything(target, context):
 if 'options' in target.__dict__:
 attributes.set_committed_value(
 target, 'foods',
 [o for o in target.options if o.discriminator=='food']
 )
 origin = [o for o in target.options if o.discriminator=='origin']
 attributes.set_committed_value(
 target, 'origin', 
 origin[0] if origin else None
 )

 # only 2 queries
 for row in session.query(Product).options(subqueryload(Product.options)):
 print row, row.origin, row.foods

 if I added an onload event for individual relationship attributes that would 
 make this event a little more targeted.








Re: [sqlalchemy] Eager loading hates single table inheritance

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 8:56 PM, Jimmy Yuen Ho Wong wrote:

 In the subqueryload_all()/subqueryload() example, you'll note the second one 
 with two separate subqueryload() calls does in fact correctly do the 
 subqueryload twice.   There's no way SQLA would ever figure out 
 automatically that they're against the same table and then join the two 
 queries together, decisionmaking like would be enormously complicated as 
 well as the mechanics of how to get a single loader to coordinate among two 
 relationships.   If you use just one relationship() to SearchOption then 
 provide filtered accessors, then you get exactly the optimization you're 
 looking for.
 Is it possible to include a special flag to the subqueryload() call to
 tell SA to optimize for this case?

it's not even the decision itself that is most challenging here, it's the 
implementation, as well as API clutter. Routing the loader into two 
collections within the internals would be a giant knot of almost never used 
code, right in the core of the ORM, for a flag that nobody ever uses or knows 
about.   The event system is how things like this should be done - the event I 
gave you is a starter.There's likely ways to get some other event hooks in 
there as well, there's a hook called append_result that actually should be 
working here but it wasn't doing it when I tried.

 SA already knows about the class
 hierarchy to be able to distinguish between the classes and how they map
 to the rows returned, changing the model mapping is not always feasible.
 In fact, if SA could do this, subqueryload() will be the optimal
 solution for this use case. joinedload(), even when it works, it's still
 too wasteful. Using events will work for now, but it's not as obvious.

I disagree - a flag would be unused, opaque, arbitrary.   This is not in any 
way an obvious or common use case - I've never seen it before. A ten line 
event handler OTOH illustrates exactly what's going on and keeps the core 
simple.   We've had many weird flag based features in the past and the work 
they bring is to take them *out*, and replace them with an open ended and 
user-centric approach.  Examples include: entity name, mutable=True, 
implicit order by, polymorphic_fetch, Session.load(), all kinds of crap.  
Removing bad API is 10x harder than adding it.   We don't add it except for 
features that are of widespread use or are very simple and don't complicate the 
core, like event hooks.

 Ticket 2120 calls for at least an option nested_joins=True, specifying 
 that the more correct/efficient system of nesting joins should be used.   
 This is all 0.8 stuff, as the joined eager loading code would be 
 destabilized by this - if it turns out to be an isolated option it could 
 move to 0.7.   Your set of tests here makes me more interested in the issue 
 though so perhaps we'll see if I have time to try some things out.
 Does it mean that the only way to eager load reliably in this
 many-to-many single table inheritance use case, depending on the data,
 is joinedload(innerjoin=True) for uselist=False relationships for now?
 In my case, I can just use innerjoin=True all the way, but there may be
 cases where the code has to switch between innerjoin and left join
 depending on the relationships.

joinedload() writes out flattened joins so whatever limitations are apparent 
there, yes.





RE: [sqlalchemy] sql hashed tables

2012-01-09 Thread Jackson, Cameron
Woah, 100 tables of users? That doesn't seem right. How come you don't just 
have 1 table, 'users', with an id column, and then each user is a row in the 
table?

Likewise for your events. Is there a reason you don't have a single 'events' 
table, with date as a column, and then each row in the table is an event?

I don't know about your slave question, perhaps Mike will.


-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Mason
Sent: Tuesday, 10 January 2012 1:08 PM
To: sqlalchemy
Subject: [sqlalchemy] sql hashed tables

Hi

I have 100 tables that go from user_00 to user_99 (based on the last 2
digits of the user id).  In SQL, I can create a view (say 'user') and
query 'user' directly.  With sqlalchemy, can I do something similar?
I would like to query the 'user' class directly, without having to
worry about which table to select from.  Also, I have a sql table
that looks like event_20120109, and a new table is created every day.
In the old way, I will need to figure out which table to query, do
'select * from event_20120109', and if there are not enough events, go 1 day
into the past, until I have enough events.  So, is there some way to query a
specific table?  I know I can execute sql directly in sqlalchemy, but
want to see if there is a smarter way to do it.  Last thing is, we'd like
to direct the select to a specific mysql slave based on table access.
For example, select * from table1 goes to slave1 and select * from
table2 goes to slave2.  Can sqlalchemy be configured to go to a specific
slave by examining the select on the fly?

Thanks,
Mason








Re: [sqlalchemy] sql hashed tables

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 9:07 PM, Mason wrote:

 Hi
 
 I have 100 tables that go from user_00 to user_99 (based on the last 2
 digits of user id).  In sql, I can create a view (say 'user') and
 query 'user' directly.  With sqlalchemy, can I do something similar?
 I would like to query the 'user' class directly, without having to
 worry about which table to select from.  Also, if i have a sql table
 that looks like event_20120109, and a new table is created everyday.
 In the old way, i will need to figure out what table to query , do
 'select * from event_201200109', and if not enough event, go 1 day in
 the past, until i have enough events.  So, is there some way to query
 specific table?  I know I can execute sql directly in sqlalchemy, but
 want to see if there is smarter way to do it.  Last thing is, we like
 to direct the select to a specific mysql salve based on table access.
 For example, select * from table1 goes to slave1 and select * from
 table2 goes to slave2.  Can sqlachemy be configured to go to specific
 slave by examining the select on the fly?

take a look at the entity name recipe:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName

Or if you just need one name at a time, you can set the name field of Table() 
to anything you want, so the code refers to "User" or "user", which then links 
to the name.  Depends on how you want to do it.
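
For the user_00 .. user_99 case, a bare-bones version of that idea (just a
sketch, not the wiki recipe itself) is to generate one mapped class per
physical table name; the "name" column below is a made-up placeholder:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    def make_user_class(suffix):
        # builds a declarative class mapped to one of the user_NN tables
        return type('User_%s' % suffix, (Base,), {
            '__tablename__': 'user_%s' % suffix,
            'id': Column(Integer, primary_key=True),
            'name': Column(String(50)),
        })

    User_07 = make_user_class('07')
    # session.query(User_07) now selects from the user_07 table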

then for the slave question that's just vertical partitioning:

http://www.sqlalchemy.org/docs/orm/session.html#vertical-partitioning
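
The per-table slave routing would then look roughly like this (a sketch; the
slave URLs and Table1Class/Table2Class names are placeholders for your real
engines and mapped classes):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    slave1 = create_engine('mysql://user:pw@slave1/mydb')
    slave2 = create_engine('mysql://user:pw@slave2/mydb')

    Session = sessionmaker(binds={
        Table1Class: slave1,   # everything mapped to table1 reads from slave1
        Table2Class: slave2,   # everything mapped to table2 reads from slave2
    })
    session = Session()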





[sqlalchemy] Re: SQL Alchemy Closure Table Relationship Definition

2012-01-09 Thread jonstjohn
Michael -

I greatly appreciate the time and consideration you put into your
thorough reply.  It has really helped me better understand how SQL
Alchemy handles associations.  In particular, it is now apparent to me
that there is no clear association between area and route, which makes
it not possible to map through.  Although I think I completely
understand your explanation and sample code, it might be helpful for
me to clarify the problem with a simple example in a common problem
domain.

The route and area relationship is similar to the hypothetical problem
of a business location.  Suppose you have a set of business locations,
each in a specific city.  The business must be associated with one and
only one city.  The city is located in ever widening areas, e.g., the
county, region, state, country, planet, etc.  Suppose you want to find
all businesses within a given county, or a state.  I'm not sure I
agree that storing the city id in the business violates normalization,
since the business can have only one city.  And I'm not sure that
storing the business id in every geographical designation (i.e.,
country, region, state, etc) is a better design.  On the contrary, I
think that you would want to store the city id with the business, and
the relationship between geographical entities separately.

As I mentioned in the original post, I can construct an SQL query that
gets at this relationship (sorry if it got buried):

SELECT route.route_id, route.name
FROM area
INNER JOIN area_relationship ON area.area_id =
area_relationship.ancestor
INNER JOIN route ON route.area_id = area_relationship.descendent
WHERE area.area_id = 1

However, I still can't figure out how to create these mappings in SQL
Alchemy.

As a work-around, I queried for the descendents first, then filtered
with an 'in' for all descendents.  This works fine for me since I only
have several levels of nesting, but would probably get inefficient if
I had deeper nestings (and thus a potentially very large 'in'
condition).

Work around:

def get_routes(area_id):
    # do imports, init session, etc
    area_ids = []
    area = session.query(DbArea).filter(DbArea.area_id == area_id).one()
    for descendent in area.descendents:
        area_ids.append(descendent.area_id)
    return session.query(DbRoute).filter(DbRoute.area_id.in_(area_ids))


Let me know if you see a way to accomplish this w/o doing the
intermediate query for descendents and the 'in' condition.

Thanks again!
Jon


On Jan 9, 5:33 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 typo in the DbArea.ancestors attribute:

     ancestors = association_proxy(ancestor_rels, ancestor)

 more demos:

 print s.query(DbArea).filter_by(name=Kaymoor).one().ancestors
 print s.query(DbArea).filter_by(name=Kaymoor).one().descendents




Re: [sqlalchemy] Re: SQL Alchemy Closure Table Relationship Definition

2012-01-09 Thread Michael Bayer

On Jan 9, 2012, at 11:58 PM, jonstjohn wrote:

 The route and area relationship is similar to the hypothetical problem
 of a business location.  Suppose you have a set of business locations,
 each in a specific city.  The business must be associated with one and
 only one city.  The city is located in ever widening areas, e.g., the
 county, region, state, country, planet, etc.  Suppose you want to find
 all businesses within a given county, or a state.  I'm not sure I
 agree that storing the city id in the business violates normalization,
 since the business can have only one city.  

 And I'm not sure that
 storing the business id in every geographical designation (i.e.,
 country, region, state, etc) is a better design.  On the contrary, I
 think that you would want to store the city id with the business, and
 the relationship between geographical entities separately.

I was going to suggest relating the DbRoute directly to DbArea, then I noticed 
that you've actually done this with DbRoute.area_id and DbRoute.area, hadn't 
noticed that before.

DbRoute.areas asks relationship() to do something impossible - you're asking it 
to load DbArea objects but then the relationship is forced to not look at any 
columns that are actually in the  DbArea table.   You have it linking back to 
DbRoute.area_id on both sides.   Hence it tries to link DbRoute.area_id to 
DbArea and fails.

 
 As I mentioned in the original post, I can construct an SQL query that
 gets at this relationship (sorry if it got buried):
 
 SELECT route.route_id, route.name
 FROM area
INNER JOIN area_relationship ON area.area_id =
 area_relationship.ancestor
INNER JOIN route ON route.area_id = area_relationship.descendent
 WHERE area.area_id = 1

This query suggests linking route on one side and area on the other, which is 
more traditional, so you'd just need to link to DbArea.area_id:

areas = relationship('DbArea', backref = 'routes',
   secondary = area_relationship_table,
   primaryjoin = area_id == area_relationship_table.c.ancestor,
   secondaryjoin = DbArea.area_id == area_relationship_table.c.descendent,
   innerjoin=True)

This should produce the equivalent idea, an implicit join when lazily loaded 
and INNER JOIN if joinedload() is used.
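
Usage would then be roughly (a sketch, using the 'routes' backref name from
above):

    from sqlalchemy.orm import joinedload

    # lazy: the implicit join fires when the collection is first accessed
    area = session.query(DbArea).filter_by(area_id=1).one()
    print area.routes

    # eager: INNER JOIN, because of innerjoin=True on the relationship
    area = session.query(DbArea)\
        .options(joinedload(DbArea.routes))\
        .filter_by(area_id=1).one()
    print area.routes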

