[sqlalchemy] support for postgres inheritance in sqlalchemy

2012-02-23 Thread Chris Withers

Hi All,

Where can I find a good example of declaratively mapping model classes to 
postgres tables where inheritance is involved?


(eg: create table foo_2012_02_23 inherits foo)

Also, if I reflect a database containing tables in this structure, what 
will be reflected?


Will all parent and child tables be reflected?

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk




[sqlalchemy] Re: problem with dynamic tables/classes and inheritance

2012-02-23 Thread lars van gemerden
Seems I finally got what I wanted to achieve. I changed a lot of things,
so I cannot pinpoint the error(s) in the code above. If anyone is
interested, I will share the solution I found.

Cheers, Lars

On Feb 22, 8:35 pm, lars van gemerden l...@rational-it.com wrote:
 I am trying to generate tables/classes dynamically. The code below is
 my latest attempt, but I cannot get it to work.

 -
  class TableName(object):
     @declared_attr
     def __tablename__(cls): return cls.__name__

 class Inherit(object):
     @declared_attr
     def id(cls): #<= is not called for S
         base = cls.__bases__[len(cls.__bases__) - 1]
         print "class, base:", cls.__name__, base.__name__
         return Column(Integer, ForeignKey(base.__name__ + '.id'),
 primary_key = True)
     @declared_attr
     def __mapper_args__(cls):
         return {'polymorphic_identity': cls.__name__}

 class Object(Base, TableName):

     association_tables = {}

     id = Column(Integer, primary_key = True)
     type_name = Column(String(50),  nullable = False)
     __mapper_args__ = {'polymorphic_on': type_name}

 if __name__ == '__main__':
     session = setup(engine)

     T = type('T', (Inherit, Object), {'Tdata': Column(String(50))})
     S = type('S', (T,), {'Sdata': Column(String(50))}) #<= Error
     session.commit()
     print S.__table__.c
 -
 the output is:
 -
 class, base: T Object
 class, base: T Object
 class, base: T Object
 Traceback (most recent call last):
   File "D:\Documents\Code\Eclipse\workspace\SQLAdata\src\test4.py",
 line 55, in <module>
     S = type('S', (T,), {'Sdata': Column(String(50))})
   File "C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py",
 line 1336, in __init__
     _as_declarative(cls, classname, cls.__dict__)
   File "C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py",
 line 1329, in _as_declarative
     **mapper_args)
   File "C:\Python27\lib\site-packages\sqlalchemy\orm\__init__.py",
 line 1116, in mapper
     return Mapper(class_, local_table, *args, **params)
   File "C:\Python27\lib\site-packages\sqlalchemy\orm\mapper.py", line
 197, in __init__
     self._configure_inheritance()
   File "C:\Python27\lib\site-packages\sqlalchemy\orm\mapper.py", line
 473, in _configure_inheritance
     self.local_table)
   File "C:\Python27\lib\site-packages\sqlalchemy\sql\util.py", line
 303, in join_condition
     between '%s' and '%s'.%s" % (a.description, b.description, hint))
 sqlalchemy.exc.ArgumentError: Can't find any foreign key relationships
 between 'T' and 'S'.
 -
 What is wrong with this approach? Is there a good way to approach this
 problem (I have tried a couple already)?

 Also: why is Inherit.id() called 3 times for T?

 Please help!

 Lars




Re: [sqlalchemy] support for postgres inheritance in sqlalchemy

2012-02-23 Thread Michael Bayer
We only have a recipe where we assume INHERITS is used like the PG docs say in 
http://www.postgresql.org/docs/9.0/static/ddl-partitioning.html#DDL-PARTITIONING-IMPLEMENTATION
 .  That is, child tables don't have any extra columns.

In this scenario, SQLAlchemy doesn't need to know about the child tables at 
all, pretty much.

Here's the recipe:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/PostgreSQLInheritance

SQLA has no facilities to reflect the INHERITS clause itself.
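
A minimal sketch of that parent-only idea, assuming a PostgreSQL database; the
model, the partition name and the connection URL below are placeholders, not
the recipe itself:

from sqlalchemy import create_engine, Column, Integer, Date
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    # only the parent table is mapped; SQLAlchemy never sees the children
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    created = Column(Date)

engine = create_engine('postgresql://scott:tiger@localhost/test')  # placeholder URL
Base.metadata.create_all(engine)

# child partitions are plain DDL that is never mapped or reflected:
engine.execute("CREATE TABLE foo_2012_02_23 () INHERITS (foo)")

# queries against Foo hit the parent table; PostgreSQL automatically
# includes rows from the inheriting child tables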




On Feb 23, 2012, at 5:17 AM, Chris Withers wrote:

 Hi All,
 
 Where can I find a good example of declaratively mapping model classes to 
 postgres tables where inheritance is involved?
 
 (eg: create table foo_2012_02_23 inherits foo)
 
 Also, if I reflect a database containing tables in this structure, what will 
 be reflected?
 
 Will all parent and child tables be reflected?
 
 cheers,
 
 Chris
 
 -- 
 Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
 




[sqlalchemy] Re: Something in Pyramid is preventing gc on SQLAlchemy objects

2012-02-23 Thread Vlad K.


/facepalm


Okay, the debugtoolbar does that. If I disable it in the config, everything 
works fine. My fault. I've been hunting this memory leak for two days 
straight, and it never occurred to me that even the command-line 
bootstrapped Pyramid, when using development.ini, will have the debug 
toolbar on.


Sorry for wasting your time, folks.


I'm sending this email to the SQLAlchemy list too, where I asked about 
this problem, for further reference in case anyone else has the same 
problem.



.oO V Oo.


On 02/23/2012 02:24 PM, Vlad K. wrote:


Hi all.

I have a Pyramid-bootstrapped script that has to process thousands of 
rows (on relatively complex model relationships) per call from the command 
line. Pyramid is bootstrapped to reuse models and helper functions 
from the main web application this script is part of.


The problem I'm having is that in each iteration and/or individual 
transaction, the memory keeps growing; it seems SQLAlchemy objects 
are not released. A self-sufficient example script is here:



https://gist.github.com/d669e958c54869c69831


Without bootstrapping Pyramid (comment out line 266 and set env=None 
or something), the gcdelta is 0 or negative. With Pyramid bootstrapped 
(note that nothing else touches the application or request, only 
bootstrap is called) the gcdelta is in thousands per 200 iterations.



Fedora 16 x86_64
PostgreSQL 9.1

Pyramid 1.2.1 (except sqlalchemy and psycopg2, all dependencies are 
default given with 1.2.1 installation)

SQLAlchemy 0.7.5
Psycopg2 2.4.4
Transaction 1.1.1
Zope.Sqlalchemy 0.6.1






Re: [sqlalchemy] Re: Working with large IN lists

2012-02-23 Thread Michael Bayer
I'd note that those subqueryloads() render the effectiveness of yield_per() to 
be almost nil.

This is how subqueryload works:


query(Parent).options(subqueryload(Parent.child)).all()

will give you:


SELECT * FROM parent

then, the moment a parent row is fetched and its child collection is first 
referenced for loading, the query emitted is:

SELECT * FROM child JOIN (SELECT * FROM parent) AS a on a.id=child.id_a

that is, the second query loads all child rows for all parents in the entire 
result.   

So let's say we do yield_per(), so that SQLAlchemy only processes the first 100 
rows before handing them out.   As soon as you hit either of those two 
subqueryloaded collections, the yield_per() is mostly thrown out the window - 
they will each load the entire list of child objects for the whole result, 
which right there will grow memory to be as big as your entire result.
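
A sketch of that contrast, reusing the Parent / Parent.child names from the
example above and assuming an already-configured session (a sketch, not the
original script):

from sqlalchemy.orm import subqueryload, lazyload

# with subqueryload, touching the first collection pulls the child rows for
# *every* Parent in the result, so yield_per()'s batching buys very little:
for parent in session.query(Parent).options(
        subqueryload(Parent.child)).yield_per(100):
    parent.child      # second query loads children for the whole result

# plain lazy loading keeps memory closer to one batch of parents at a time,
# at the cost of one SELECT per collection actually touched:
for parent in session.query(Parent).options(
        lazyload(Parent.child)).yield_per(100):
    parent.child      # one SELECT per parent row actually reached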

The next thing I'd look at is that second query for ModelC.   You can also load 
those upfront so that you don't need to do a query each time:

modelcs = dict(
    sess.query(ModelC.id_a, ModelC)
        .join(ModelC.a)
        .filter(ModelA.parent_id=<same integer you're using against ModelA>)
)

then you have a dictionary of id_a->ModelC as you loop through your ModelA 
records.   All the ModelC's are already in the session, so you don't need to 
use merge(), which is also not a super quick operation.  If an id_a is not in 
the dictionary then you know to create a new ModelC and use Session.add() to 
put it in.   If you're limiting the ModelA rows using a LIMIT or window recipe 
like I mentioned earlier, you'd apply that same criteria to the loading of the 
modelcs.
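
Spelled out a bit more as a sketch (ModelA / ModelC as in the script, assuming
ModelC.id_a refers to ModelA's primary key and using a hypothetical parent_id
value):

parent_id = 42   # whatever value you are already filtering ModelA on

# one query up front: id_a -> ModelC for every ModelC under this parent
modelcs = dict(
    sess.query(ModelC.id_a, ModelC)
        .join(ModelC.a)
        .filter(ModelA.parent_id == parent_id)
)

for a in sess.query(ModelA).filter(ModelA.parent_id == parent_id):
    c = modelcs.get(a.id)
    if c is None:
        # not in the dictionary: it's a new row, so no merge() is needed
        c = ModelC(id_a=a.id)
        sess.add(c)
    # ... work with a and c ...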

Later on you're doing something with query(ModelC).first() in a loop which is 
also something I hope isn't in the real application - looking there it seems 
like you'd only need to say query(ModelC).delete().

Watching the SQL emitted with echo=True, and in some cases also seeing how 
large the results coming in are using echo='debug', is something I strongly 
recommend when first profiling an application.
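
For reference, those are just the echo flags on create_engine (the URL below is
a placeholder):

from sqlalchemy import create_engine

# echo=True logs the statements; echo='debug' also logs the result rows
engine = create_engine('postgresql://scott:tiger@localhost/mydb', echo='debug')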





On Feb 23, 2012, at 8:18 AM, Vlad K. wrote:

 
 And yet again the problem is not in SQLAlchemy but in Pyramid. Sorry, from 
 now on, I'll first try writing test scripts without Pyramid to see where the 
 problem is, I just don't have time for that kind of debugging so I'm wasting 
 yours. :)
 
 
 Thanks. I'll write to Pylons Discuss list, but the test script is here 
 nevertheless. Comment out line 266 (and remove imports) to disable Pyramid 
 and see it work okay. Without Pyramid (but even with Transaction and 
 ZopeTransactionExtension), the gcdelta is 0 or negative per batch of 200 
 iterations. With Pyramid bootstrapped, gcdelta is in thousands per batch of 
 200 iterations.
 
 https://gist.github.com/d669e958c54869c69831
 
 
 .oO V Oo.
 
 
 On 02/23/2012 02:41 AM, Michael Bayer wrote:
 On Feb 22, 2012, at 6:36 PM, Vlad K. wrote:
 
 Okay, thanks to this article:
 
 http://neverfear.org/blog/view/155/Investigating_memory_leaks_in_Python
 
 
 I made similar plot of object counts in time, showing top 50 types. The 
 resulting PDF is here (you might wish to download it first, Google messes 
 it up for me):
 
 https://docs.google.com/open?id=0ByLiBlA59qDwYTY1MGIzYWEtYjMxZi00ZDVlLTk0OTEtOGI2ZjA3NDgyM2Y3
 
 
 Everything seems to linearly grow in count. Something is keeping all those 
 objects reference somewhere. What could possibly be the cause?
 
 can you provide a self-contained, single file test case that illustrates the 
 memory growth ?
 
 
 




Re: [sqlalchemy] Re: Working with large IN lists

2012-02-23 Thread Vlad K.


.oO V Oo.


On 02/23/2012 02:50 PM, Michael Bayer wrote:

I'd note that those subqueryloads() render the effectiveness of yield_per() to 
be almost nil.


I know. I've replicated the real use case from the application, which has 
yet to see proper optimization, including better query planning and a 
reduction of unnecessary joins and relationships. I'd rather investigate 
implementing prepared statements and basically do lazy="select" 
instead of subqueries.




The next thing I'd look at is that second query for ModelC.   You can also load 
those upfront so that you don't need to do a query each time:

modelcs = dict(
 sess.query(ModelC.id_a, ModelC).join(ModelC.a).filter(ModelA.parent_id=<same 
integer you're using against ModelA>)
)

then you have a dictionary of id_a->ModelC as you loop through your ModelA records.   All the 
ModelC's are already in the session, so you don't need to use merge(), which is also not a super quick 
operation.  If an id_a is not in the dictionary then you know to create a new ModelC and use 
Session.add() to put it in.   If you're limiting the ModelA rows using a LIMIT or window recipe like I mentioned 
earlier, you'd apply that same criteria to the loading of the modelcs.

Later on you're doing something with query(ModelC).first() in a loop which is 
also something I hope isn't in the real application - looking there it seems 
like you'd only need to say query(ModelC).delete().


Might not be visible from this test script, but the scenario is this: 
ModelA represents certain data that has to be exported to external 
services (XMLRPC, REST and similar). In an ideal situation I would just select 
all ModelA rows that have to be exported (by looking at the timestamp of last 
modification vs the timestamp of the process run), but I can't do that, because 
if such a transaction fails, it has to be remembered for the next 
batch run. So I use the ModelC table, which logs these pending transactions.


So the first phase selects rows from ModelA that are up for export and 
creates transaction logs in ModelC.


The second phase then loads and exports rows one by one from ModelC 
(joined with ModelA and everything else required for the export). 
However, if a single transaction fails, the entire script exits and 
continues when called the next time. This I have to do for other reasons 
(preventing overload on a possibly downed external service, etc.), so I 
can't skip that row and fetch the next.


It may happen, and does regularly, that on subsequent runs of the 
process there are no new ModelA rows to load, but there are ModelC rows 
that failed on the last call (since they're still in the table, it means they 
were not processed).


And that's the logic in query(ModelC).first(), processing, and then 
delete(). Also note that each row has to be an individual transaction 
(load, send to external service, remove from ModelC table), which means 
I can't rely on session/identity caching by pre-loading data instead 
of joins and subqueries.




Watching the SQL emitted with echo=True, and in some cases also seeing how 
large the results coming in are using echo='debug', is something I strongly 
recommend when first profiling an application.


Yes, I use logging and see all the SQL emitted.



Thanks for your input, I appreciate all the help and advice I can get. 
Still a ton of stuff to learn about SQLA.




V




[sqlalchemy] order of initialization __mapper_args__ 'inherit_condition' and primary key

2012-02-23 Thread lars van gemerden
Hi all,

I reproduced an error I was having with the code below (basically
pasted from the tutorial for joined inheritance) and added an
'inherit_condition' to __mapper_args__ of the subclass. At that point
the code started throwing a sqlalchemy.exc.InterfaceError. The code
is:

-

from sqlalchemy import *
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=False)
Base = declarative_base(bind = engine)
Session = sessionmaker(bind = engine)


def setup(engine):
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    return Session()

class Person(Base):
    __tablename__ = 'Person'
    id = Column(Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}


class Engineer(Person):
    __tablename__ = 'Engineer'
    __mapper_args__ = {'polymorphic_identity': 'Engineer',
                       'inherit_condition': (id == Person.id)}
    id = Column(Integer, ForeignKey('Person.id'), primary_key=True)

if __name__ == '__main__':
    session = setup(engine)
    e = Engineer()
    session.add_all([e])
    session.commit()
    print e.discriminator

and the error:


Traceback (most recent call last):
  File "D:\Documents\Code\Eclipse\workspace\SQLAdata\src\test5.py",
line 38, in <module>
    print e.discriminator
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py",
line 168, in __get__
    return self.impl.get(instance_state(instance),dict_)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py",
line 451, in get
    value = callable_(passive)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\state.py", line
285, in __call__
    self.manager.deferred_scalar_loader(self, toload)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\mapper.py", line
1701, in _load_scalar_attributes
    only_load_props=attribute_names)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line
2378, in _load_on_ident
    return q.one()
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line
2050, in one
    ret = list(self)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line
2093, in __iter__
    return self._execute_and_instances(context)
  File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line
2108, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "C:\Python27\lib\site-packages\sqlalchemy\engine\base.py", line
1405, in execute
    params)
  File "C:\Python27\lib\site-packages\sqlalchemy\engine\base.py", line
1538, in _execute_clauseelement
    compiled_sql, distilled_params
  File "C:\Python27\lib\site-packages\sqlalchemy\engine\base.py", line
1646, in _execute_context
    context)
  File "C:\Python27\lib\site-packages\sqlalchemy\engine\base.py", line
1639, in _execute_context
    context)
  File "C:\Python27\lib\site-packages\sqlalchemy\engine\default.py",
line 330, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding
parameter 0 - probably unsupported type. u'SELECT "Person".type AS
"Person_type", "Engineer".id AS "Engineer_id", "Person".id AS
"Person_id" \nFROM "Person" JOIN "Engineer" ON "Person".id = ? \nWHERE
"Person".id = ?' (<built-in function id>, 1)
-
I fixed the error by moving the id Column in Engineer above the
__mapper_args__ :

-
class Engineer(Person):
    __tablename__ = 'Engineer'
    id = Column(Integer, ForeignKey('Person.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'Engineer',
                       'inherit_condition': (id == Person.id)}
-

Now we get to the problem: In my code I dynamically generate tables
with type(name, (base,), dicty). In dicty are the __mapper_args__  and
id Column. Because of the dictionary used I have no control over the
order of the arguments and that seems to cause my exception. If I add
the __mapper_args__  with 'inherit_condition' to the class later, I
get another exception (ArgumentError: Can't determine join between
'person' and 'child'; tables have more than one foreign key constraint
relationship between them. Please specify the 'onclause' of this join
explicitly.)

Could the order in 'dicty' be the problem? Can anyone help me with a
solution or workaround for this problem?

Cheers, Lars




Re: [sqlalchemy] order of initialization __mapper_args__ 'inherit_condition' and primary key

2012-02-23 Thread Michael Bayer

On Feb 23, 2012, at 9:56 AM, lars van gemerden wrote:

 Hi all,
 
 I reproduced an error I was having with the code below (basically
 pasted from the tutorial for joined inheritance) and added an
 'inherit_condition' to __mapper_args__ of the subclass. At that point
 the code started throwing a sqlalchemy.exc.InterfaceError. The code
 is:
 
 class Person(Base):
__tablename__ = 'Person'
id = Column(Integer, primary_key=True)
discriminator = Column('type', String(50))
__mapper_args__ = {'polymorphic_on': discriminator}
 
 
 class Engineer(Person):
__tablename__ = 'Engineer'
__mapper_args__ = {'polymorphic_identity': 'Engineer',
 'inherit_condition': (id == Person.id)}
id = Column(Integer, ForeignKey('Person.id'), primary_key=True)

Well OK, right here, when you say "id", that's the Python built-in function 
id().   This is because the name "id" has not yet been assigned to point to your 
new Column object.   The "class Engineer(Person):" is just another Python 
block, just like an "if condition:" statement.   So yes, in that case you 
need to ensure "id" is pointing to that Column before you stick it into the 
inherit_condition.   This has nothing to do with the order of things being 
looked at in the actual dict of the class, just the order of how Python is 
assigning meaning to the name "id" within the "class:" block.
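
A tiny self-contained illustration of that name-binding point (plain Python,
nothing SQLAlchemy-specific):

class Demo(object):
    before = id                    # "id" here is still the builtin
    id = "now a class attribute"   # from this point on, "id" means the new name
    after = id                     # picks up the string just assigned

print Demo.before   # <built-in function id>
print Demo.after    # now a class attribute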

The declarative tutorial doesn't have this bug, but yes I can see we might be 
better off illustrating __mapper_args__ at the bottom of the class declaration 
just to make this kind of error less likely.


 
 Now we get to the problem: In my code I dynamically generate tables
 with type(name, (base,), dicty). In dicty are the __mapper_args__  and
 id Column. Because of the dictionary used I have no control over the
 order of the arguments and that seems to cause my exception.

I'm not getting how you'd have this issue there - you of course have total 
control over what goes into a dict() to use with type(), and in what order 
(it's only on the "taking things out" part of a dict that is not ordered).  You 
should generate the Column object, then you put it in the dictionary.   The 
"id" builtin function never gets involved:


id = Column(...)
dict = {
    'id': id,
    '__mapper_args__': {'inherit_condition': id == Person.id}
}
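
Put together as a sketch, using the Person / Engineer names and the imports from
the code earlier in the thread:

id_col = Column(Integer, ForeignKey('Person.id'), primary_key=True)

engineer_dict = {
    '__tablename__': 'Engineer',
    'id': id_col,
    '__mapper_args__': {'polymorphic_identity': 'Engineer',
                        'inherit_condition': id_col == Person.id},
}

Engineer = type('Engineer', (Person,), engineer_dict)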





[sqlalchemy] Re: order of initialization __mapper_args__ 'inherit_condition' and primary key

2012-02-23 Thread lars van gemerden
OK, thanks, the order wasn't the problem. I am using this for the
initialization of my classes:

dicty = {'__tablename__': name,
         'id': Column(Integer, ForeignKey(basename + '.id'),
                      primary_key=True),
         '__mapper_args__': {'polymorphic_identity': name,
                             'inherit_condition': (id == classes[basename].id)}}

and as you said 'id' in the last line refers to the id() function. I
think I fixed it by changing the code to:

id_ = Column(Integer, ForeignKey(basename + '.id'), primary_key=True)
out = {'__tablename__': name,
       'id': id_,
       '__mapper_args__': {'polymorphic_identity': name,
                           'inherit_condition': (id_ == classes[basename].id)}}

But I am getting a (maybe unrelated) error. Should this solution work?

Cheers, Lars


On Feb 23, 4:07 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 23, 2012, at 9:56 AM, lars van gemerden wrote:

  Hi all,

  I reproduced an error I was having with the code below (basically
  pasted from the tutorial for joined inheritance) and added an
  'inherit_condition' to __mapper_args__ of the subclass. At that point
  the code started throwing a sqlalchemy.exc.InterfaceError. The code
  is:

  class Person(Base):
     __tablename__ = 'Person'
     id = Column(Integer, primary_key=True)
     discriminator = Column('type', String(50))
     __mapper_args__ = {'polymorphic_on': discriminator}

  class Engineer(Person):
     __tablename__ = 'Engineer'
     __mapper_args__ = {'polymorphic_identity': 'Engineer',
  'inherit_condition': (id == Person.id)}
     id = Column(Integer, ForeignKey('Person.id'), primary_key=True)

 Well OK, right here, when you say id, that's the Python built-in function 
 id().   This because the name id has not yet been assigned to point to 
 your new Column object.   The class Engineer(Person): is just another 
 Python block, just like an if condition: statement.   So yes, in that 
 case you need to ensure id is pointing to that Column before you stick it 
 into the inherit_condition.    This has nothing to do with the order of 
 things being looked at in the actual dict of the class, just the order of 
 how Python is assigning meaning to the name id within the class: block.

 The declarative tutorial doesn't have this bug, but yes I can see we might be 
 better off illustrating __mapper_args__ at the bottom of the class 
 declaration just to make this kind of error less likely.



  Now we get to the problem: In my code I dynamically generate tables
  with type(name, (base,), dicty). In dicty are the __mapper_args__  and
  id Column. Because of the dictionary used I have no control over the
  order of the arguments and that seems to cause my exception.

 I'm not getting how you'd have this issue there - you of course have total 
 control over what goes into a dict() to use with type(), and in what order 
 (it's only on the taking things out part of a dict that is not ordered).  
 You should generate the Column object, then you put it in the dictionary.   
 The id builtin function never gets involved:

 id = Column(...)
 dict = {
     'id':id,
    '__mapper_args__':{'inherit_condition':id==Person.id}
 }




Re: [sqlalchemy] Re: order of initialization __mapper_args__ 'inherit_condition' and primary key

2012-02-23 Thread Michael Bayer

On Feb 23, 2012, at 10:47 AM, lars van gemerden wrote:

 OK, thanks, the order wasn't the problem, I am using this for the
 initialization of my classes:
 
dicty= {'__tablename__': name,
   'id': Column(Integer, ForeignKey(basename + '.id'),
 primary_key = True),
   '__mapper_args__': {'polymorphic_identity': name,
 'inherit_condition': (id == classes[basename].id)}}
 
 and as you said 'id' in the last line refers to the id() function. I
 think I fixed it by changing the code to:
 
id_ = Column(Integer, ForeignKey(basename + '.id'), primary_key =
 True)
out = {'__tablename__': name,
   'id': id_,
   '__mapper_args__': {'polymorphic_identity': name,
 'inherit_condition': (id_ == classes[basename].id)}}
 
 But I am getting a (maybe unrelated) error. Should this solution work?

That approach seems fine, though classes[basename] must be already established; 
I might pull it just from the superclass that you're passing into type().




[sqlalchemy] Re: order of initialization __mapper_args__ 'inherit_condition' and primary key

2012-02-23 Thread lars van gemerden
Yep, they are the same class (I don't create any class of which I
don't have the superclass yet).

Thanks for your help; on to the next bug ...

On Feb 23, 5:32 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 23, 2012, at 10:47 AM, lars van gemerden wrote:

  OK, thanks, the order wasn't the problem, I am using this for the
  initialization of my classes:

     dicty= {'__tablename__': name,
            'id': Column(Integer, ForeignKey(basename + '.id'),
  primary_key = True),
            '__mapper_args__': {'polymorphic_identity': name,
  'inherit_condition': (id == classes[basename].id)}}

  and as you said 'id' in the last line refers to the id() function. I
  think I fixed it by changing the code to:

     id_ = Column(Integer, ForeignKey(basename + '.id'), primary_key =
  True)
     out = {'__tablename__': name,
            'id': id_,
            '__mapper_args__': {'polymorphic_identity': name,
  'inherit_condition': (id_ == classes[basename].id)}}

  But I am getting a (maybe unrelated) error. Should this solution work?

 that approach seems fine though classes[basename] must be already 
 established, I might pull it just from the superclass that you're passing 
 into type().




[sqlalchemy] Sequences, Primary Keys, Relations

2012-02-23 Thread Adam Tauno Williams
I have a database where multiple objects use the same sequence to
generate primary keys -

class ProjectInfo(Base, KVC):
    __tablename__ = 'project_info'
    object_id     = Column("project_info_id", Integer,
                           Sequence('key_generator'),
                           primary_key=True)
    project_id    = Column("project_id",
                           ForeignKey('project.project_id'), )
    ...

class Project(Base, KVC):
    """ An OpenGroupware Project object """
    __tablename__ = 'project'
    object_id     = Column("project_id",
                           Sequence('key_generator'),
                           ForeignKey('project_info.project_id'),
    ...

Project.info = relation(ProjectInfo, uselist=False,
                        back_populates="project",
                        primaryjoin=(ProjectInfo.project_id == Project.object_id))

ProjectInfo.project = relation(Project, uselist=False, backref="info",
                               primaryjoin=(ProjectInfo.project_id == Project.object_id))

This works fine.  But if I create a Project object I can't relate it to
a ProjectInfo object within the same transaction without calling flush()
first.  Is there some way to encourage SQLalchemy to allocate a value
from the sequence when the object is created?

Basically a ProjectInfo should be created for every Project that is
created; this relationship is one-to-one.






Re: [sqlalchemy] Sequences, Primary Keys, Relations

2012-02-23 Thread Claudio Freire
On Thu, Feb 23, 2012 at 4:49 PM, Adam Tauno Williams
awill...@whitemice.org wrote:
 This works fine.  But if I create a Project object I can't relate it to
 a ProjectInfo object within the same transaction without calling flush()
 first.  Is there some way to encourage SQLalchemy to allocate a value
 from the sequence when the object is created?

If you just use the relation (project.info = project_info), alchemy
will do everything for you.
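
In other words, something like this (a sketch against the mappings from the
original post; whether it flushes cleanly depends on the relationship details
discussed elsewhere in the thread):

project = Project()
project.info = ProjectInfo()   # associate via the relationship, not the FK column

session.add(project)
session.commit()               # the flush inserts the parent first, then fills
                               # in ProjectInfo.project_id from the new key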




Re: [sqlalchemy] Sequences, Primary Keys, Relations

2012-02-23 Thread Michael Bayer

On Feb 23, 2012, at 2:49 PM, Adam Tauno Williams wrote:

 I have a database where multiple objects use the same sequence to
 generate primary keys -
 
 class ProjectInfo(Base, KVC):
__tablename__   = 'project_info'
object_id   = Column(project_info_id, Integer,
 Sequence('key_generator'),
 primary_key=True)
project_id  = Column(project_id,
 ForeignKey('project.project_id'), )
 ...
 
 class Project(Base, KVC):
 An OpenGroupware Project object 
__tablename__   = 'project'
object_id   = Column(project_id,
Sequence('key_generator'),
ForeignKey('project_info.project_id'),
 ...
 
 Project.info = relation(ProjectInfo, uselist=False,
 back_populates=project,
 primaryjoin=(ProjectInfo.project_id==Project.object_id))
 
 ProjectInfo.project = relation(Project, uselist=False, backref=info,
 primaryjoin=(ProjectInfo.project_id==Project.object_id)) 
 
 This works fine.  But if I create a Project object I can't relate it to
 a ProjectInfo object within the same transaction without calling flush()
 first.  Is there some way to encourage SQLalchemy to allocate a value
 from the sequence when the object is created?
 
 Basically a ProjectInfo should be created for every Project that is
 created; this relationship is one-to-one.

like someone else said, relationship() allows this to all be worked out for you 
- if you associate a Project and ProjectInfo together using Project.info = 
ProjectInfo(), the flush process inserts the row for the parent first, then 
applies the new primary key value to the child.  The application code deals 
only with the collections and object references, whereas details about primary 
and foreign key columns are handled transparently.

The above mapping looks a little off, I see ProjectInfo.project_id refers to 
Project.project_id but that column isn't pictured on Project.  Also Project has 
its own object_id, which is fine, but I see that it has both a Sequence as well 
as a ForeignKey together.  If that's the actual code, one or the other 
shouldn't be there.  A ForeignKey indicates this Column is always referring to 
a value that was created elsewhere, and stored in the referenced column - so it 
would never have a Sequence of its own.

If in fact Project.object_id is a foreign key to ProjectInfo, and 
ProjectInfo.project_id is a foreign key to Project, then this is the mutual 
reference pattern.  For this pattern there's a relationship() flag called 
post_update that handles this, emitting an UPDATE statement for one of those 
foreign keys: 
http://docs.sqlalchemy.org/en/latest/orm/relationships.html#rows-that-point-to-themselves-mutually-dependent-rows

Finally, if you want to invoke the sequence manually, you can say:

seq = Sequence('key_generator')
nextvalue = Session.execute(seq.next_value())
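
Another way to spell that, as a sketch against the mappings from the original
post (executing a Sequence standalone returns its next value; the keyword
constructors assume the stock declarative __init__):

seq = Sequence('key_generator')
key = session.connection().execute(seq)   # fires nextval('key_generator')

project = Project(object_id=key)
info = ProjectInfo(project_id=key)        # FK value is known before any flush
session.add_all([project, info])
session.commit()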





Re: [sqlalchemy] support for postgres inheritance in sqlalchemy

2012-02-23 Thread Chris Withers

On 23/02/2012 13:05, Michael Bayer wrote:

We only have a recipe where we assume INHERITS is used like the PG docs say in 
http://www.postgresql.org/docs/9.0/static/ddl-partitioning.html#DDL-PARTITIONING-IMPLEMENTATION
 .  That is, child tables don't have any extra columns.

In this scenario, SQLAlchemy doesn't need to know about the child tables at 
all, pretty much.

Here's the recipe:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/PostgreSQLInheritance


*nods*

So I guess if I actually want to map a child table, I just need to do it 
myself?



SQLA has no facilities to reflect the INHERITS clause itself.


Okay, but if I reflect, will I get all the tables, parents and children, 
or just the parent? (I'm hoping for the former ;-) )
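
One way to see what reflection picks up on a given database (a sketch; the URL
is a placeholder):

from sqlalchemy import create_engine, MetaData
from sqlalchemy.engine import reflection

engine = create_engine('postgresql://scott:tiger@localhost/test')

insp = reflection.Inspector.from_engine(engine)
print insp.get_table_names()     # the table names the dialect reports

metadata = MetaData()
metadata.reflect(bind=engine)    # reflects each of those tables individually
print metadata.tables.keys()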


cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk




Re: [sqlalchemy] alembic questions/comments

2012-02-23 Thread Chris Withers

On 21/02/2012 18:41, Michael Bayer wrote:


The current release does this:

http://alembic.readthedocs.org/en/latest/tutorial.html#the-migration-environment
http://alembic.readthedocs.org/en/latest/tutorial.html#editing-the-ini-file (see 
file_template)


*goes to learn hg again* I thought I was looking at current tip :-/


Why not? The sql will still eventually be executed in the context of a database 
connection, right?


er well in the context of a connection, but not one where there is any application code 
prepared to receive result rows. Think "cat myscript.sql | psql".


Yep, that makes sense, glad there's no reason not to do "insert ... 
select" stuff and SQL that computes column values from other column 
values and the like (ie: oh, I just had an email column in the user 
table, now I realise I need a contacts table with an email type, as 
well as phone, etc.).



- Is there any support for just adding raw lumps of sql as migration steps?


sure just use execute():

op.execute("my raw lump of SQL")
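
A sketch of what such a migration file could look like (revision ids and the
table/column names are placeholders, loosely following the email-to-contacts
example above):

"""move user email addresses into a contact table (sketch)"""

from alembic import op

# placeholder revision identifiers
revision = '1a2b3c4d5e6f'
down_revision = None

def upgrade():
    # raw INSERT ... SELECT computing rows from an existing column
    op.execute(
        "INSERT INTO contact (user_id, kind, value) "
        "SELECT id, 'email', email FROM users"
    )

def downgrade():
    op.execute("DELETE FROM contact WHERE kind = 'email'")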


What would be great would be to have .sql files and .sqli (mako 
templates with some context provided by the env.py) in addition to .py 
files. How hard could that be? ;-)


cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
