[sqlalchemy] Circular dependency - Read only database

2014-01-31 Thread Nicola Palumbo
Hi All,

I would like to get all tables of a database to which I have read-only 
access. The lines below raise CircularDependencyError: Circular dependency 
detected. Cycles:

How can I get around this problem? Any idea?

from sqlalchemy.schema import MetaData
meta = MetaData()
meta.reflect(bind=db_engine)
tables = meta.sorted_tables

#tables = meta.tables
for table in tables:
    print table


Thanks

Nicola



[sqlalchemy] Savepoints and expiry

2014-01-31 Thread Wolfgang Schnerring
Hello list,

pardon the innocent/dumb question: wouldn't it make sense to expire_all()
on begin_nested() and rollback()?

My concrete example goes like this:

  savepoint = session.begin_nested()
  session.delete(stuff)
  session.begin_nested()
  # Now, if I don't call session.expire_all() here, stuff won't be perceived as gone
  perform_checks_while_pretending_that_stuff_is_gone()
  savepoint.rollback()
  # And if I don't call session.expire_all() here, stuff won't be perceived back
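
For now I work around it by expiring explicitly at those two points, roughly like this (same example as above):

  savepoint = session.begin_nested()
  session.delete(stuff)
  session.begin_nested()
  session.expire_all()  # without this, stuff is still seen as present
  perform_checks_while_pretending_that_stuff_is_gone()
  savepoint.rollback()
  session.expire_all()  # without this, stuff is still seen as gone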

I'd love to hear your thoughts on this.

Thanks,
Wolfgang



Re: [sqlalchemy] Circular dependency - Read only database

2014-01-31 Thread Michael Bayer
if you are reflecting tables, and sorting them reveals that there are mutually 
dependent foreign keys (or some other kind of cycle), then the tables can’t be 
delivered in foreign key dependency order; there is no such ordering.

when creating Table objects you can break such a cycle using the use_alter 
flag: 
http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html?highlight=use_alter#creating-dropping-foreign-key-constraints-via-alter

you could set this flag for an existing set of reflected tables by iterating 
through each table.constraints.
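
Roughly, that iteration could look like the following (an untested sketch; db_engine is assumed from the original post, and depending on the SQLAlchemy version you may need to set the flag on the individual ForeignKey elements as well as on the constraint):

from sqlalchemy.schema import MetaData, ForeignKeyConstraint

meta = MetaData()
meta.reflect(bind=db_engine)

# mark every reflected FK constraint with use_alter so the cycle is
# ignored when the dependency ordering is computed
for table in meta.tables.values():
    for constraint in table.constraints:
        if isinstance(constraint, ForeignKeyConstraint):
            constraint.use_alter = True
            for fk in constraint.elements:
                fk.use_alter = True

for table in meta.sorted_tables:
    print table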

if you’re just looking to drop all tables, it’s a good idea to drop all the 
constraints ahead of time, see 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything




On Jan 31, 2014, at 5:04 AM, Nicola Palumbo nikpalu...@gmail.com wrote:

 Hi All,
 
 I would like to get all tables of a database that I have read only access. The 
 lines below raise CircularDependencyError: Circular dependency detected. 
 Cycles:
 
 How can I get around this problem? Any idea?
 
 from sqlalchemy.schema import MetaData
 meta = MetaData()
 meta.reflect(bind=db_engine)
 tables = meta.sorted_tables
 
 #tables = meta.tables
 for table in tables:
 print table
 
 
 Thanks
 
 Nicola
 





Re: [sqlalchemy] PostgreSQL 9.1 on the horizon, cool new stuff

2014-01-31 Thread Michael Bayer

On Jan 30, 2014, at 9:52 PM, Lucas Taylor ltaylor.vo...@gmail.com wrote:

 Any notion of how one might instruct SQLAlchemy to (conditionally) create 
 tables using UNLOGGED?
 
 I'd like to be able to modify the DDL for all CREATE TABLE statements under 
 certain conditions (dialect=postgresql and testing=True)
 If not under test, then there would be no need to modify CREATE TABLE.
 
 I'm thinking that it might involve some kind of before_create event and 
 modifying the DDL, e.g. something that might start like:
 
 event.listen(
     metadata,
     'before_create',
     DDL('CREATE UNLOGGED TABLE').execute_if(dialect='postgresql',
                                             callable_=isUnderTest, state=TESTING)
 )
 
 Does this seem like the right direction?

sorta, though you might also look into using @compiles on top of 
sqlalchemy.schema.CreateTable, see 
http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html.






[sqlalchemy] group_by and lazy=False relationship problem

2014-01-31 Thread Robert Tasarz
Hi,

I'm almost sure this is a bug, but maybe I'm missing something obvious. 
I've tested it with Python 3.3, SQLAlchemy 0.9.1 and PostgreSQL 9.3, and 
reduced the issue to the following code:

#!/usr/bin/env python3

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column('id', Integer(), primary_key=True)
    _elem = Column('_elem', Integer(), ForeignKey('p.id'))
    value = Column(String(32))


class I(Base):
    __tablename__ = 'i'
    id = Column(Integer, primary_key=True)
    _elem = Column('_elem', Integer(), ForeignKey('p.id'))
    value = Column(Integer())
    elem = relationship('P', backref='items')

class P(Base):
    __tablename__ = 'p'
    id = Column(Integer(), primary_key=True)
    attrs = relationship(A, lazy=False, backref='elem')

e = create_engine('postgresql://localhost/test', echo=True)
Base.metadata.drop_all(e)
Base.metadata.create_all(e)

sess = Session(e)

sumq = func.sum(I.value).label('sum1')

# this one produces a valid query
sess.query(P, sumq).outerjoin(P.items).group_by(P).order_by(sumq)[:10]

# but calling .all() raises an exception
sess.query(P, sumq).outerjoin(P.items).group_by(P).order_by(sumq).all()


Here's the most relevant part of the exception:

Traceback (most recent call last):
  File "/srv/websites/sika/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/srv/websites/sika/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.ProgrammingError: column "a_1.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT p.id AS p_id, sum(i.value) AS sum1, a_1.id AS a_1_id,...

Where the query is:

'SELECT p.id AS p_id, sum(i.value) AS sum1, a_1.id AS a_1_id, a_1._elem AS a_1__elem, a_1.value AS a_1_value 
FROM p LEFT OUTER JOIN i ON p.id = i._elem LEFT OUTER JOIN a AS a_1 ON p.id = a_1._elem GROUP BY p.id ORDER BY sum1' {}

And in the case of the call with a limit it's built properly:

'FROM (SELECT p.id AS p_id, sum(i.value) AS sum1 
FROM p LEFT OUTER JOIN i ON p.id = i._elem GROUP BY p.id ORDER BY sum1 
 LIMIT %(param_1)s) AS anon_1 LEFT OUTER JOIN a AS a_1 ON anon_1.p_id = 
a_1._elem ORDER BY anon_1.sum1'

So is it a bug or some limitation? Any idea how to work around it?

regards,
  Robert Tasarz




[sqlalchemy] Re: group_by and lazy=False relationship problem

2014-01-31 Thread Jonathan Vanasco
i believe you need to be explicit on your group_by:

    group_by(P.id)

or 

    group_by(P.attrs)

or

    group_by(P.id, P.attrs)





Re: [sqlalchemy] group_by and lazy=False relationship problem

2014-01-31 Thread Michael Bayer

On Jan 31, 2014, at 2:49 PM, Robert Tasarz robert.tas...@gmail.com wrote:

 Hi,
 
 I'm almost sure this is a bug, but maybe I'm missing something obvious. 
 I've tested it with Python 3.3, SQLAlchemy 0.9.1,  PostgreSQL 9.3 and reduced 
 the issue to the following code:
 
 
 
 Here's most relevant part of the exception:
 Traceback (most recent call last):
   File 
 /srv/websites/sika/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py,
  line 867, in _execute_context
 context)
   File 
 /srv/websites/sika/local/lib/python3.3/dist-packages/sqlalchemy/engine/default.py,
  line 388, in do_execute
 cursor.execute(statement, parameters)
 psycopg2.ProgrammingError: column a_1.id must appear in the GROUP BY clause 
 or be used in an aggregate function
 LINE 1: SELECT p.id AS p_id, sum(i.value) AS sum1, a_1.id AS a_1_id,…

When joined eager loading is used, if LIMIT is also applied as you have in that 
[:10], SQLAlchemy wraps the query to be limited inside of a subquery, so that 
the LEFT OUTER JOIN for the eager loading can safely load all related rows 
without being subject to the LIMIT.   This wrapping doesn’t occur when GROUP BY 
is used - GROUP BY is not usually used in conjunction with loads of a full 
entity, as this is typically inefficient - it is usually used with a query that 
is only loading individual columns, and then if the query overall is to return 
entities, a JOIN against the GROUP BY as a subquery is used (see 
http://weblogs.sqlteam.com/jeffs/archive/2005/12/14/8546.aspx for a description 
of this).

So in that sense, you’re getting that subquery behavior out of the box due to 
the LIMIT, but with plain all(), this isn’t applied.  You can call from_self() 
to produce the self-wrapping effect:

sess.query(P,sumq).outerjoin(P.items).group_by(P).order_by(sumq).from_self().all()


perhaps the presence of GROUP BY should be added to the list of things that 
cause the automatic wrapping with joined eager loading to occur, though the 
current behavior has been this way for nearly 8 years and nobody’s asked for it 
before.






[sqlalchemy] Relationships using max

2014-01-31 Thread Josh Kuhn
I've got two tables I'd like to create a relationship for. One is the
object, and another tracks versions.

Here's a gist with the setup:
https://gist.github.com/deontologician/8744532

Basically, the object doesn't have a direct reference to the current
version stored in the table. Instead, the current version is defined as the
maximum version that points to that object.

I'd like to have a one-to-one current_version relationship, but this has
proven difficult (at least in 0.8.4). The primary goal is to allow using
the joinedload options to control populating the current_version field, but
that only works when a relationship is defined and is non-dynamic.

Any hints as to how to get this to work like I want?



Re: [sqlalchemy] PostgreSQL 9.1 on the horizon, cool new stuff

2014-01-31 Thread Lucas Taylor


On Friday, January 31, 2014 7:45:07 AM UTC-7, Michael Bayer wrote:


 On Jan 30, 2014, at 9:52 PM, Lucas Taylor ltaylo...@gmail.com wrote:

 Any notion of how one might instruct SQLAlchemy to (conditionally) create 
 tables using UNLOGGED?

 I'd like to be able to modify the DDL for all CREATE TABLE statements 
 under certain conditions (dialect=postgresql and testing=True)
 If not under test, then there would be no need to modify CREATE TABLE.

 I'm thinking that it might involve some kind of before_create event and 
 modifying the DDL, e.g. something that might start like:

 event.listen(
     metadata,
     'before_create',
     DDL('CREATE UNLOGGED TABLE').execute_if(dialect='postgresql',
                                             callable_=isUnderTest, state=TESTING)
 )

 Does this seem like the right direction?


 sorta, though you might also look into using @compiles on top of 
 sqlalchemy.schema.CreateTable, see 
 http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html.


Thanks! I took a stab at it using @compiles and came up with this:

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import CreateTable

@compiles(CreateTable, 'postgresql')
def compile_unlogged(create, compiler, **kwargs):
    # 'unittests' is our module-level flag for running under test
    if unittests and 'UNLOGGED' not in create.element._prefixes:
        create.element._prefixes.append('UNLOGGED')
    return compiler.visit_create_table(create)

This subsection of the compiler docs was helpful:
http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html#changing-the-default-compilation-of-existing-constructs



Re: [sqlalchemy] StaleDataError/ObjectDeletedError

2014-01-31 Thread Michael Bayer

On Jan 31, 2014, at 5:24 PM, lars van gemerden l...@rational-it.com wrote:

 Hi, all
 
 I am running into these 2 errors and have run out of ideas what to do about 
 it (also because i don't know what they mean); they seem to happen in exactly the 
 same circumstances.
 
 mapper, table, update)
   File build\bdist.win32\egg\sqlalchemy\orm\persistence.py, line 514, in 
 _emit_update_statements
 (table.description, len(update), rows))
 StaleDataError: UPDATE statement on table 'Company' expected to update 1 
 row(s); 0 were matched.

this means an object was meant to be UPDATEed via the ORM, however the row 
which is the target of the UPDATE is missing.  Either the primary key of this 
row changed somehow, or the row was deleted, *or* the row is not visible to 
your transaction (this seems to be your case).

Here’s the DELETE case:

sess = Session()

m1 = MyObject()
sess.add(m1)
sess.flush()

# out of band DELETE, ORM has no clue
sess.execute("DELETE FROM my_object WHERE id=:id", {"id": m1.id})

# modify object
m1.foo = 'bar'

# row is gone, boom
sess.flush()


 
 File 
 C:\python27\lib\site-packages\sqlalchemy-0.8.3-py2.7-win32.egg\sqlalchemy\orm\loading.py,
  line 606, in load_scalar_attributes
 sqlalchemy.orm.exc.ObjectDeletedError: Instance 'Company at 0x5e0d550' has 
 been deleted, or its row is otherwise not present.


same idea, object is expired:

m1 = MyObject()
sess.add(m1)

# flush m1, also expire it
sess.commit()

# out of band DELETE, ORM has no clue
sess.execute("DELETE FROM my_object WHERE id=:id", {"id": m1.id})

# row is gone, boom
print m1.foo


 
 What i do is:
 1 make a new object (mapped)- obj1
 2 add it to a session - session1 

 3 start another session - session2 
 4 do a get(id) on session2 (which should result in access to the database, 
 since i just started the session) - obj2 
 5 close session2
 6 do obj1.someattr.append(obj2)
 7 do session1.commit()
 8 get the first ERROR above 

this description isn’t specific enough to understand the issue.  What does “add 
it to a session” mean, did you flush that session?  was an INSERT emitted?  did 
you commit the transaction?  When you attempt to use “obj2” with obj1, you mean 
you are using the detached obj2 ?   What is the “id” you’re using get() on, is 
that the “id” that you know was created in #2 ?

Basically your problem comes down to sharing rows between transactions where 
those transactions aren’t aware of the rows you’re referring to (which is due 
to transaction isolation, see 
http://en.wikipedia.org/wiki/Transaction_isolation).  When you move an object 
between sessions, you should generally use merge() which will emit a SELECT for 
that row first.
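
In code, that last suggestion is roughly (a sketch reusing the names from your description):

# merge the detached obj2 into session1 before using it there; merge()
# emits a SELECT for that row and returns a copy owned by session1
obj2 = session1.merge(obj2)
obj1.someattr.append(obj2)
session1.commit()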





Re: [sqlalchemy] Relationships using max

2014-01-31 Thread Michael Bayer
well I can get it to work for lazy loading like this:

expr = select([func.max(Version.id)]).\
    where(Version.object_id == Object.id).\
    correlate_except(Version).as_scalar()

Object.current_version = relationship(Version,
    primaryjoin=and_(
        expr == Version.id,
        Version.object_id == Object.id
    )
)


but for joined load, the criteria needs to fit into a LEFT OUTER JOIN ON 
clause.  Both SQLite and Postgresql reject an aggregate function in the ON 
clause.So I didn’t really know how to get that because you have to think in 
terms of the SQL….but then the usual approach is that you need to JOIN to a 
subquery that has the aggregate inside of it.  So I use instead the pattern you 
see here: 
http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-subqueries  which 
is the subquery of “X.foo_id, func.AGGREGATE(X.id)” that then joins to the 
parent table, and then I go with “non primary mapper”, a use case that I 
recently added to the documentation at 
http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#relationship-to-non-primary-mapper,
 even though this has been around for years.

and it looks like……

expr = select([
    func.max(Version.id).label("max_id"),
    Version.object_id
]).group_by(Version.object_id).alias()

stmt = select([Version]).\
    select_from(join(Version, expr, Version.id == expr.c.max_id)).\
    alias()

current_version = mapper(Version, stmt, non_primary=True)

Object.current_version = relationship(current_version)

I think I might have actually written a mapping like this as an example back in 
SQLAlchemy 0.1 even, this was the goofy kind of thing I thought everyone would 
be doing all the time.

works with joinedload.  Query is not too efficient, but is like:

SELECT objects.id AS objects_id, anon_1.id AS anon_1_id, anon_1.object_id AS 
anon_1_object_id 
FROM objects LEFT OUTER JOIN (SELECT versions.id AS id, versions.object_id AS 
object_id 
FROM versions JOIN (SELECT max(versions.id) AS max_id, versions.object_id AS 
object_id 
FROM versions GROUP BY versions.object_id) AS anon_2 ON versions.id = 
anon_2.max_id) AS anon_1 ON objects.id = anon_1.object_id 
WHERE objects.id = %(id_1)s




On Jan 31, 2014, at 5:35 PM, Josh Kuhn deontologic...@gmail.com wrote:

 I've got a two tables I'd like to create a relationship for. One is the 
 object, and another tracks versions.
 
 Here's a gist with the setup:
 https://gist.github.com/deontologician/8744532
 
 Basically, the object doesn't have a direct reference to the current version 
 stored in the table. Instead, the current version is defined as the maximum 
 version that points to that object.
 
 I'd like to have a one-to-one current_version relationship, but this has 
 proven difficult (at least in 0.8.4). The primary goal is to allow using the 
 joinedload options to control populating the current_version field, but that 
 only works when a relationship is defined and is non-dynamic.
 
 Any hints as to how to get this to work like I want?
 





Re: [sqlalchemy] StaleDataError/ObjectDeletedError

2014-01-31 Thread lars van gemerden
Hi Michael,

With some more detail:

What i do is:
1 make a new object (mapped)- obj1
2 create a scoped session (with context manager)- session1
3 do session1.add(obj) 
4 create another scoped session - session2 
5 do session2.query(someclass).get(some_id)-obj2 
6 close session2, no commit, no flush - obj2 is detached (right?)
7 do obj1.someattr.append(obj2)
8 do session1.commit()
9 get the first ERROR above 

basically i use

def Session(objs):
    session = session_maker()
    for obj in objs:
        if object_session(obj) is None:
            session.add(obj)
        else:
            session.merge(obj)
    return session

@contextmanager
def scoped_session(objs = [], commit = True):
    session = Session(objs)
    try:
        yield session
        if commit:
            session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

and essentially the code description (1-8) above comes down to:

obj1 = cls1()

with scoped_session([obj1]) as session1:
    obj1.somefield = somevalue
    with scoped_session(commit = False) as session2:
        obj2 = session2.query(cls2).get(some_id)
    obj1.someattr.append(obj2)

if i just do:

with scoped_session([obj1]) as session1:
    obj1.somefield = somevalue

there is no problem.

Also:

this means an object was meant to be UPDATEed via the ORM, however the row 
which is the target of the UPDATE is missing.  Either the primary key of 
this row changed somehow, or the row was deleted, *or* the row is not 
visible to your transaction (this seems to be your case).

-  could the error also occur when the object was never committed to the 
database (which seems to be the case; the commit where the error occurs 
should be the first time the Company object is committed to the database)?
-  this seems to suggest that it is possible that a row is in the database, 
but that it is not visible to a transaction; is that possible?

As far as i know in the code that causes the problem, i do not do any 
deletes and i do not call flush myself.

Doing some more testing, now i get more of the second error in:

def __str__(self):  # in mapped class
    print object_session(self) is not None, has_identity(self)  # True, True, = OK
    print self.id  # = ERROR
    ..

with trace:

File "d:\Documents\Code\python\floware\models\flow\processes.py", line 333, in run
  self.execute(input, output)
File "d:\Documents\Code\python\floware\toolshed\logs.py", line 55, in wrapper
  f_result = func(*v, **k)
File "d:\Documents\Code\python\floware\models\flow\libraries\basic.py", line 159, in execute
  print "%s %s" % (self.cursor, str(i.item))
File "d:\Documents\Code\python\floware\models\data\database.py", line 281, in __str__
  print object_session(self), has_identity(self), self.id
File "C:\python27\lib\site-packages\sqlalchemy-0.8.3-py2.7-win32.egg\sqlalchemy\orm\attributes.py", line 316, in __get__
File "C:\python27\lib\site-packages\sqlalchemy-0.8.3-py2.7-win32.egg\sqlalchemy\orm\attributes.py", line 611, in get
File "C:\python27\lib\site-packages\sqlalchemy-0.8.3-py2.7-win32.egg\sqlalchemy\orm\state.py", line 380, in __call__
File "C:\python27\lib\site-packages\sqlalchemy-0.8.3-py2.7-win32.egg\sqlalchemy\orm\loading.py", line 606, in load_scalar_attributes

sqlalchemy.orm.exc.ObjectDeletedError: Instance '<Company at 0x5e4a3f0>' has been deleted, or its row is otherwise not present.

CL






Re: [sqlalchemy] StaleDataError/ObjectDeletedError

2014-01-31 Thread Michael Bayer

On Jan 31, 2014, at 8:11 PM, lars van gemerden l...@rational-it.com wrote:

 
 this means an object was meant to be UPDATEed via the ORM, however the row 
 which is the target of the UPDATE is missing.  Either the primary key of this 
 row changed somehow, or the row was deleted, *or* the row is not visible to 
 your transaction (this seems to be your case).
 
 -  could the error also occur when the object was never committed to the 
 database (which seems to be the case; the commit where the error occurs 
 should be the first time the Company object is committed to the database)?

sure

 -  this seems to suggest that it is possible that a row is in the database, 
 but that it is not visible to a transaction; is that possible?

absolutely, the link http://en.wikipedia.org/wiki/Transaction_isolation is 
relevant here






Re: [sqlalchemy] Relationships using max

2014-01-31 Thread Michael Bayer

On Jan 31, 2014, at 8:51 PM, Josh Kuhn deontologic...@gmail.com wrote:

 This is pretty amazing. I get it to *almost* work. The issue seems to be that 
 I don't use the database column names as my model attribute names. The db 
 column names are really obfuscated, so it's more like:
 
 class Version(Base):
     id = Column('vrsn_nbr', Integer, primary_key=True)
     date = Column('dt', DateTime)
 
 etc..
 
 It seems when this relationship is populated, it creates a Version object, 
 but it sets all of the database column names as attributes, and the defined 
 Column names are all None. Is there a way to get around that?

well if you map to a select() then it uses the column names, as in 
http://docs.sqlalchemy.org/en/rel_0_9/orm/mapper_config.html#naming-columns-distinctly-from-attribute-names
 you can rename with properties={“whatever_name”: stmt.c.xyz_column}, etc.  
same thing as saying “id = Column(‘vrsn_nbr’)” on your declarative mapping.
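
As a rough, untested sketch on top of the earlier non-primary mapper (stmt is the aliased select() from the previous mail, and the column names here are guesses based on your snippet):

current_version = mapper(Version, stmt, non_primary=True, properties={
    "id": stmt.c.vrsn_nbr,   # hypothetical obfuscated PK column
    "date": stmt.c.dt,       # hypothetical obfuscated date column
})
Object.current_version = relationship(current_version)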




 
 
 On Fri, Jan 31, 2014 at 6:38 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 well I can get it to work for lazy loading like this:
 
 expr = select([func.max(Version.id)]).\
 where(Version.object_id == Object.id).\
 correlate_except(Version).as_scalar()
 
 Object.current_version = relationship(Version,
primaryjoin=and_(
 expr == Version.id,
 Version.object_id == 
 Object.id
 )
)
 
 
 but for joined load, the criteria needs to fit into a LEFT OUTER JOIN ON 
 clause.  Both SQLite and Postgresql reject an aggregate function in the ON 
 clause.So I didn’t really know how to get that because you have to think 
 in terms of the SQL….but then the usual approach is that you need to JOIN to 
 a subquery that has the aggregate inside of it.  So I use instead the pattern 
 you see here: 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-subqueries  
 which is the subquery of “X.foo_id, func.AGGREGATE(X.id)” that then joins to 
 the parent table, and then I go with “non primary mapper”, a use case that I 
 recently added to the documentation at 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#relationship-to-non-primary-mapper,
  even though this has been around for years.
 
 and it looks like……
 
 expr = select([
 func.max(Version.id).label(max_id),
 Version.object_id
 ]).group_by(Version.object_id).alias()
 
 stmt = select([Version]).\
 select_from(join(Version, expr, Version.id == expr.c.max_id)).\
 alias()
 
 current_version = mapper(Version, stmt, non_primary=True)
 
 Object.current_version = relationship(current_version)
 
 I think I might have actually written a mapping like this as an example back 
 in SQLAlchemy 0.1 even, this was the goofy kind of thing I thought everyone 
 would be doing all the time.
 
 works with joinedload.  Query is not too efficient, but is like:
 
 SELECT objects.id AS objects_id, anon_1.id AS anon_1_id, anon_1.object_id AS 
 anon_1_object_id 
 FROM objects LEFT OUTER JOIN (SELECT versions.id AS id, versions.object_id AS 
 object_id 
 FROM versions JOIN (SELECT max(versions.id) AS max_id, versions.object_id AS 
 object_id 
 FROM versions GROUP BY versions.object_id) AS anon_2 ON versions.id = 
 anon_2.max_id) AS anon_1 ON objects.id = anon_1.object_id 
 WHERE objects.id = %(id_1)s
 
 
 
 
 On Jan 31, 2014, at 5:35 PM, Josh Kuhn deontologic...@gmail.com wrote:
 
 I've got a two tables I'd like to create a relationship for. One is the 
 object, and another tracks versions.
 
 Here's a gist with the setup:
 https://gist.github.com/deontologician/8744532
 
 Basically, the object doesn't have a direct reference to the current version 
 stored in the table. Instead, the current version is defined as the maximum 
 version that points to that object.
 
 I'd like to have a one-to-one current_version relationship, but this has 
 proven difficult (at least in 0.8.4). The primary goal is to allow using the 
 joinedload options to control populating the current_version field, but that 
 only works when a relationship is defined and is non-dynamic.
 
 Any hints as to how to get this to work like I want?
 
 
 
 