[sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian
Hi All,

I have a few partitioned tables in my PostgreSQL database but I do not yet 
know how to make the ORM relationship() work with partition 
constraint-exclusion 
(http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) 
at the instance level. Constraint-exclusion does not work with joins and 
requires scalar values; the problem is that I would need to add an 
additional WHERE clause to the primaryjoin (which adds the partition key) 
if the relationship is accessed at the *instance level*, e.g. 
user.addresses. Is there a mechanism in relationship() to distinguish 
between class-based joins (User.addresses) and instance-level access?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
Never mind,

the problem was that I specified the clause in a secondaryjoin and not in
the primaryjoin of the relationship().
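A minimal sketch of the fix described above, with invented User/Address names standing in for the real mappings: the extra filtering criterion rides along inside primaryjoin, next to the usual foreign-key condition, rather than in secondaryjoin.

```python
# Hypothetical sketch (User/Address names invented): an additional
# filter clause belongs in the primaryjoin of relationship().
from sqlalchemy import Boolean, Column, ForeignKey, Integer
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    # the extra WHERE criterion goes into primaryjoin, alongside the
    # foreign-key condition
    addresses = relationship(
        "Address",
        primaryjoin="and_(User.id == Address.user_id, Address.active == True)",
        viewonly=True,
    )

class Address(Base):
    __tablename__ = "addresses"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    active = Column(Boolean)

configure_mappers()
print(User.addresses.property.primaryjoin)
```

The string form of primaryjoin is evaluated late, so the extra clause can reference both classes even when they are defined in either order.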


On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.com wrote:

 Hi All,

 I have a few partitioned tables in my PostgreSQL database but I do not
 know yet how to make the ORM relationship() with partition
 constraint-exclusion 
 (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) 
 at the instance level. Constraint-exclusion does not work with joins and
 requires scalar values; the problem is that I would need to add an
 additional WHERE clause to the primaryjoin (which adds the partition key)
 if the relationship is accessed at the *instance level*, e.g.
 user.addresses. Is there a mechanism in relationship() to distinguish
 between class-based joins (User.addresses) and instance-level access?



Re: [sqlalchemy] fake models ?

2013-12-05 Thread Richard Gerd Kuesters

ok, let's see if I can explain the scenario better :D

first, i'm using schematics http://schematics.readthedocs.org/ to 
define arbitrary models for message exchange between processes, ui, 
events and so on. based on a json schema (that the schematics model 
generates), i'm able to store data on the database using a relatively 
sparse system of tables that compose my eav system -- not a good system 
(eav), but sometimes necessary (and i have bad experiences with nosql 
databases).


so, given a composed model with data, i can populate the database and 
retrieve data from it (i'm using postgres btw). so far so good.


now, i would like to query them :) let's suppose i have the following 
(simple) model (from schematics example):

    class Person(Model):
        name = StringType(required=True)
        website = URLType()

with that, i know that StringType should go to StringDataTable, URLType 
to StringDataTable, Boolean to BooleanTable, and so on. using some sql 
filtering, i can retrieve all values that represent the field name 
from the model Person, but that's not quite viable if i want my devs 
to work with eav when needed instead of going nuts because someone 
created a hundred tables to store just lines of data on each one.


so, if i could extend Model or create an extension that maps my class and 
can then be used in the sqlalchemy way of querying, it would be 
awesome. let's say:

    session.query(Person).filter(Person.website == None).all()

i know it'll give me some hard work to do it, but I'm willing to give it 
a try :)
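One way such a class could grow a queryable attribute is a hybrid property whose class-level expression produces the storage-side SQL. This is only a minimal sketch under invented names, with a plain backing column; a real EAV layout would need a more involved expression (e.g. an EXISTS against the value tables), but the hook is the same.

```python
# Hedged sketch (table/column names invented): hybrid_property gives
# Person a class-level expression so Person.website == None renders SQL.
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = "people"
    id = Column(Integer, primary_key=True)
    _website = Column("website", String)

    @hybrid_property
    def website(self):
        # instance-level access: plain Python attribute
        return self._website

    @website.expression
    def website(cls):
        # class-level access: produce whatever SQL the storage requires
        return cls._website

print(Person.website == None)
```

With the expression in place, `session.query(Person).filter(Person.website == None)` compiles without the caller knowing how the value is stored.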



my best regards,
richard.



On 12/04/2013 05:51 PM, Michael Bayer wrote:
you’d need to show me a minimal case of the composed model in question 
and what kind of operation you want to do.


you basically have to write your own comparators and such that produce 
the correct SQL.   it tends to not be that easy of a task.


On Dec 4, 2013, at 2:07 PM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:



thanks Mike!

it is good to know this :) can you give me a hint on where to start? :)


best regards,
richard.


On 12/04/2013 04:38 PM, Michael Bayer wrote:
sure it can.   you’d just need to be defining accessors and methods 
for all the things it needs to do in each case - these accessors 
would use the components to produce the appropriate SQL constructs 
as needed.




On Dec 4, 2013, at 1:23 PM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:



hi all!

i was wondering if there is a way to create fake models, in which 
i can query the way I want and map methods the way i want.


let's say i have a common object that is made in pieces to the 
database (similar to the eav pattern), stored in multiple tables. 
now, backwards, i have to make a whole mess of code to bring the 
object with data again, but it works fine, so far.


what I really wanted was to use this class like a mapped model, 
using session.query, filters and stuff, abstracting its keys to 
other attributes and conditionals for the *real* table structure.


the problem is: can it be made?


thanks in advance,
richard.












[sqlalchemy] Re: PostgreSQL Tsvector type

2013-12-05 Thread Noufal Ibrahim
Hello Michael,

Michael Bayer mike_mp at zzzcomputing.com writes:

 
 the TsVector type looks perfect!
 
 if you were to package that up for inclusion, it would likely subclass 
TypeEngine and you’d implement
PGTypeCompiler.visit_TSVECTOR.  we appear to have a little bit of 
“tsvector” code already (we implement the “match” operator as 
%s @@ to_tsquery(%s)) but not the full type.
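The operator quoted above can be seen by compiling a match() expression on the PostgreSQL dialect; note that the exact tsquery function rendered varies across SQLAlchemy versions (newer releases emit plainto_tsquery rather than to_tsquery), so only the @@ operator itself is stable.

```python
# Compile-only check of the "match" operator on the PostgreSQL dialect.
from sqlalchemy import String, column
from sqlalchemy.dialects import postgresql

body = column("body", String)
expr = body.match("hello world")
# renders e.g. body @@ to_tsquery(...) or body @@ plainto_tsquery(...)
print(expr.compile(dialect=postgresql.dialect()))
```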

I've implemented this as per your suggestions and sent you a pull request here: 
https://bitbucket.org/zzzeek/sqlalchemy/pull-request/8/implements-tsvector-type-for-the/diff

Let me know what you think. 


Thanks!




[sqlalchemy] postgres schema per model

2013-12-05 Thread Richard Gerd Kuesters

hi all!

another question: i have a postgres database, and i would like to work 
with schemas for module models. so far so good, but i didn't find much 
information in the sa docs besides the fact that it is available.


so, i came into this: 
http://stackoverflow.com/questions/9298296/sqlalchemy-support-of-postgres-schemas


but, i would like to declare it at the model level, not the metadata 
level. is it possible?


i tried using *__table_args__ = {'schema': 'foo'}* in an *__abstract__* base, 
with and without the *declared_attr* decorator and also without 
*__abstract__*; neither worked and all tables were created in the public schema.


any tips? :)

i'm asking this because i have a LOT of tables, and declaring 
*__table_args__* in all of them just to set the schema seems kinda 
weird, since we can mixin almost everything in sa.



thanks in advance!
richard.



Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 6:57 AM, Adrian Schreyer adrian.schre...@gmail.com wrote:

 Actually that was a bit too early but I tracked the problem down to the 
 many-to-many relationship. Parameters are only interpolated (e.g. 
 %(param_1)s) for the primaryjoin to the secondary table. Is there a technique 
 to force relationship() to interpolate a parameter between the 1st and 3rd 
 table instead of using only table.column=table.column?

there’s no reason why that would be the case; can you provide more specifics?




 
 
 On Thu, Dec 5, 2013 at 10:58 AM, Adrian Schreyer adrian.schre...@gmail.com 
 wrote:
 Never mind,
 
 the problem was that I specified the clause in a secondaryjoin and not in the 
 primaryjoin of the relationship().
 
 
 On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.com wrote:
 Hi All,
 
 I have a few partitioned tables in my PostgreSQL database but I do not know 
 yet how to make the ORM relationship() with partition constraint-exclusion on 
 the instance level. Constraint-exclusion does not work with joins and 
 requires scalar values - the problem is that I would need to add an 
 additional WHERE clause to the primaryjoin (which adds the partition key) if 
 the relationship is accessed from the instance level, e.g. user.addresses. Is 
 there a mechanism in relationship() to distinguish between class-based joins 
 (User.addresses) and instance-level access?
 





Re: [sqlalchemy] postgres schema per model

2013-12-05 Thread Michael Bayer
here’s an example, works on this end:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AltSchema(object):
    __table_args__ = {"schema": "test_schema"}

class A(AltSchema, Base):
    __tablename__ = 'a'

    id = Column(Integer, primary_key=True)
    bs = relationship("B")

class B(AltSchema, Base):
    __tablename__ = 'b'

    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('test_schema.a.id'))

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)

echo=True will tell all here...



On Dec 5, 2013, at 8:20 AM, Richard Gerd Kuesters rich...@humantech.com.br 
wrote:

 hi all!
 
 another question: i have a postgres database, and i would like to work with 
 schemas for module models. so far so good, but i didn't find much information 
 besides it is available in sa docs.
 
 so, i came into this: 
 http://stackoverflow.com/questions/9298296/sqlalchemy-support-of-postgres-schemas
 
 but, i would like to declare it at the model level, not the metadata level. 
 is it possible?
 
 i tried using __table_args__ = {'schema': 'foo'} in __abstract__, with and 
 without declared_attr decorator and also without __abstract__, neither worked 
 and all tables were created on public schema.
 
 any tips? :)
 
 i'm asking this because i have a LOT of tables, and declare __table_args__ in 
 all of them just because the schema seems kinda weird, since we can mixin 
 almost everything in sa.
 
 
 thanks in advance!
 richard.
 





[sqlalchemy] Re: postgres schema per model

2013-12-05 Thread Jonathan Vanasco
this is a really great idea!  thanks for asking this question.



Re: [sqlalchemy] postgres schema per model

2013-12-05 Thread Richard Gerd Kuesters

ha! thanks Mike!

i must have been stupid somewhere. i could say that i did that but, 
since it didn't work then, i can say that my code was wrong somehow :)



my best regards,
richard.


On 12/05/2013 01:37 PM, Michael Bayer wrote:

here’s an example, works on this end:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AltSchema(object):
    __table_args__ = {"schema": "test_schema"}

class A(AltSchema, Base):
    __tablename__ = 'a'

    id = Column(Integer, primary_key=True)
    bs = relationship("B")

class B(AltSchema, Base):
    __tablename__ = 'b'

    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('test_schema.a.id'))

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)

echo=True will tell all here...



On Dec 5, 2013, at 8:20 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:



hi all!

another question: i have a postgres database, and i would like to 
work with schemas for module models. so far so good, but i didn't 
find much information besides it is available in sa docs.


so, i came into this: 
http://stackoverflow.com/questions/9298296/sqlalchemy-support-of-postgres-schemas


but, i would like to declare it at the model level, not the metadata 
level. is it possible?


i tried using *__table_args__ = {'schema': 'foo'}* in *__abstract__*, 
with and without *declared_attr* decorator and also without 
*__abstract__*, neither worked and all tables were created on public 
schema.


any tips? :)

i'm asking this because i have a LOT of tables, and declare 
*__table_args__* in all of them just because the schema seems kinda 
weird, since we can mixin almost everything in sa.



thanks in advance!
richard.







Re: [sqlalchemy] Re: postgres schema per model

2013-12-05 Thread Richard Gerd Kuesters

well ... i'm glad i helped you somehow, even though it was a question :D

cheers,
richard.



On 12/05/2013 01:40 PM, Jonathan Vanasco wrote:

this is a really great idea!  thanks for asking this question.




[sqlalchemy] ORM events order

2013-12-05 Thread Tim Kersten
Hi Folks,

Is the order that ORM events ( 
http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html ) are fired in 
deterministic and guaranteed to be the same every time? I've searched the 
docs and google but couldn't find anything relating to their relative order. 

Cheers,
Tim



Re: [sqlalchemy] fake models ?

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 6:16 AM, Richard Gerd Kuesters rich...@humantech.com.br 
wrote:

 ok, let's see if I can explain the scenario better :D
 
 first, i'm using schematics http://schematics.readthedocs.org/ to define 
 arbitrary models for message exchange between processes, ui, events and so 
 on. based on a json schema (that the schematics model generates), i'm able to 
 store data on the database using a relatively sparse system of tables that 
 compose my eav system -- not a good system (eav), but sometimes necessary 
 (and i have bad experiences with nosql databases).
 
 so, given a composed model with data, i can populate the database and 
 retrieve data from it (i'm using postgres btw). so far so good.
 
 now, i would like to query them :) let's suppose i have the following 
 (simple) model (from schematics example):
 
 class Person(Model):
     name = StringType(required=True)
     website = URLType()
 
 with that, i know that StringType should go to StringDataTable, URLType to 
 StringDataTable, Boolean to BooleanTable, and so on. using some sql 
 filtering, i can retrieve all values that represent the field name from 
 the model Person, but that's not quite viable if I want my devs to work 
 with eav when needed instead of going nuts because someone created a hundred 
 tables to store just lines of data on each one.
 
 so, if i could extend Model or create an extension that maps my class and then 
 can be used in the sqlalchemy way of querying, it would be awesome. let's 
 say:
 
 session.query(Person).filter(Person.website == None).all()
 
 i know it'll give me some hard work to do it, but I'm willing to give it a 
 try :)


well we have an approach for this demonstrated in the “vertical attribute 
mapping” example 
(http://docs.sqlalchemy.org/en/rel_0_9/orm/examples.html#module-examples.vertical)
 that uses any(), which produces an “EXISTS”.
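The any() construct mentioned here compiles to a correlated EXISTS subquery, which is what makes attribute-style filtering possible over a vertical/EAV layout. A compile-only sketch, with invented Entity/AttrValue names standing in for the example's mappings:

```python
# Sketch of the any() -> EXISTS pattern used by the vertical attribute
# mapping example (names invented for illustration).
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Entity(Base):
    __tablename__ = "entities"
    id = Column(Integer, primary_key=True)
    values = relationship("AttrValue")

class AttrValue(Base):
    __tablename__ = "attr_values"
    id = Column(Integer, primary_key=True)
    entity_id = Column(Integer, ForeignKey("entities.id"))
    key = Column(String)
    value = Column(String)

# any() on the collection compiles to a correlated EXISTS subquery
crit = Entity.values.any(AttrValue.key == "website")
print(crit)
```

A query like `session.query(Entity).filter(crit)` would then select only entities having a matching row in the value table.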

though what I’m not sure about here is that it sounds like there isn’t actually 
a “person” table.  what links the “name” and “website” tables together then 
with Person for a particular row?

e.g., what SQL does your query() need to render?







 
 
 my best regards,
 richard.
 
 
 
 On 12/04/2013 05:51 PM, Michael Bayer wrote:
 you’d need to show me a minimal case of the composed model in question and 
 what kind of operation you want to do.
 
 you basically have to write your own comparators and such that produce the 
 correct SQL.   it tends to not be that easy of a task.
 
 On Dec 4, 2013, at 2:07 PM, Richard Gerd Kuesters rich...@humantech.com.br 
 wrote:
 
 thanks Mike!
 
 it is good to know this :) can you give me a hint on where to start? :)
 
 best regards,
 richard.
 
 
 On 12/04/2013 04:38 PM, Michael Bayer wrote:
 sure it can.   you’d just need to be defining accessors and methods for 
 all the things it needs to do in each case - these accessors would use the 
 components to produce the appropriate SQL constructs as needed.
 
 
 
 On Dec 4, 2013, at 1:23 PM, Richard Gerd Kuesters 
 rich...@humantech.com.br wrote:
 
 hi all!
 
 i was wondering if there is a way to create fake models, in which i can 
 query the way I want and map methods the way i want.
 
 let's say i have a common object that is made in pieces to the database 
 (similar to the eav pattern), stored in multiple tables. now, backwards, 
 i have to make a whole mess of code to bring the object with data again, 
 but it works fine, so far.
 
 what I really wanted was to use this class like a mapped model, using 
 session.query, filters and stuff, abstracting its keys to other 
 attributes and conditionals for the *real* table structure.
 
 the problem is: can it be made?
 
 
 thanks in advance,
 richard.
 




[sqlalchemy] Expected behaviour for Oracle's lost contact

2013-12-05 Thread Mariano Mara
Hi there, in one application I am using sqlalchemy 0.7.8 against Oracle 
11 release 1 and I am hitting Oracle's lost contact more times than I 
would like to. Since the app seems unable to recover from the error, I 
have to restart it completely (it is a pyramid app).
According to this changeset[1], the error ORA-03135 (connection lost 
contact) has been included in the expected disconnection errors for Oracle 
since 0.7.5, and my expectation would be that sqlalchemy would be able 
to recover from such an error gracefully. Did I misunderstand how it works? 
Should I take action in my app when it detects this situation?


TIA,
Mariano

[1] 
http://www.sqlalchemy.org/trac/changeset/dfb0b6aa622006e1e71a5296c3fa17667e24aac1/




Re: [sqlalchemy] ORM events order

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 10:51 AM, Tim Kersten t...@io41.com wrote:

 Hi Folks,
 
 Is the order ORM events ( 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html ) are fired in 
 deterministic and guaranteed to be the same every time? I've searched the 
 docs and google but couldn't find anything relating to their relative order. 


the events are ordered.   when you do an event.listen() it appends the event 
listener to a list of listeners, and the events are fired from the beginning of 
the list forward.    there’s actually an undocumented argument to 
event.listen(), “insert=True”, that will cause the listener to be inserted at 
position zero rather than appended.
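A small demonstration of that ordering, using a pool-level "connect" event on an in-memory SQLite engine (the engine URL and listener names are invented for illustration):

```python
# Listener order: registration order by default, insert=True prepends.
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
calls = []

def appended(dbapi_conn, record):
    calls.append("appended")

def inserted(dbapi_conn, record):
    calls.append("inserted")

event.listen(engine, "connect", appended)
# insert=True puts this listener at position zero instead of the end
event.listen(engine, "connect", inserted, insert=True)

with engine.connect() as conn:
    conn.execute(text("select 1"))

print(calls)  # the listener registered with insert=True fires first
```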

the reason the order of events is not really mentioned much is because there’s 
complex cases where the order of listener application has not been evaluated or 
tested.  When you make use of mapper or instrumentation events against 
un-mapped base classes and such, the actual append() operation doesn’t occur 
until later, when classes are actually mapped, and this works by shuttling the 
event listener functions around to those classes.  In these cases we don’t as 
yet have guarantees in place as to the order of the listeners being first 
applied, e.g. if you had a class that is a product of two mixins, and each 
mixin has listeners applied to it, that sort of thing.

however, the order of listeners once applied should definitely be the same each 
time assuming no changes to the listener collections.






Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
Given the three mappings *First*, *Second* and *Partitioned*, I want to
declare a relationship between *First* and *Partitioned*. The problem is
that *Partitioned* is partitioned by partition_key which is a column in
*First* but not in *Second*. *Second* however contains the identifier that
actually links *First* to specific rows in the partitioned table.

So far the mapping looks like this mock example:

partitioned = relationship(Partitioned,
    secondary=Base.metadata.tables['schema.seconds'],
    primaryjoin=and_(First.first_id==Second.first_id,
                     First.partition_key==Partitioned.partition_key),
    secondaryjoin=Second.other_id==Partitioned.other_id,
    foreign_keys=[Second.first_id, Partitioned.partition_key,
                  Partitioned.other_id],
    uselist=True, innerjoin=True, lazy='dynamic')

It works, but it only interpolates the First.first_id with the actual value
which normally makes sense but to make the PostgreSQL constraint-exclusion
work the First.partition_key would need to be interpolated with the proper
value as well. Right now it is only given as
First.partition_key==Partitioned.partition_key.

Does that make sense? I am not sure if my relationship configuration is
wrong or if this kind of mapping is simply not supported.


On Thu, Dec 5, 2013 at 3:31 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Dec 5, 2013, at 6:57 AM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 Actually that was a bit too early but I tracked the problem down to the
 many-to-many relationship. Parameters are only interpolated (e.g.
 %(param_1)s) for the primaryjoin to the secondary table. Is there a
 technique to force relationship() to interpolate a parameter between the
 1st and 3rd table instead of using only table.column=table.column?


 there’s no reason why that would be the case; can you provide more
 specifics?






 On Thu, Dec 5, 2013 at 10:58 AM, Adrian Schreyer 
 adrian.schre...@gmail.com wrote:

 Never mind,

 the problem was that I specified the clause in a secondaryjoin and not in
 the primaryjoin of the relationship().


 On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.comwrote:

 Hi All,

 I have a few partitioned tables in my PostgreSQL database but I do not
 know yet how to make the ORM relationship() with partition
 constraint-exclusion 
 (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) 
 at the instance level. Constraint-exclusion does not work with joins and
 requires scalar values; the problem is that I would need to add an
 additional WHERE clause to the primaryjoin (which adds the partition key)
 if the relationship is accessed at the *instance level*, e.g.
 user.addresses. Is there a mechanism in relationship() to distinguish
 between class-based joins (User.addresses) and instance-level access?







Re: [sqlalchemy] Expected behaviour for Oracle's lost contact

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 10:54 AM, Mariano Mara mariano.m...@gmail.com wrote:

 Hi there, in one application I am using sqlalchemy 0.7.8 against Oracle 11 
 release 1 and I am hitting Oracle's lost contact more times than I would like 
 to. Since the app seems unable to recover from the error, I have to restart 
 it completely (it is a pyramid app).
 According to this changeset[1], the error ORA-03135 (connection lost contact) 
 is included in the expected disconnection errors from oracle since 0.7.5  and 
 my expectation would be that sqlalchemy would be able to recover from such 
 error gracefully. Did I misunderstand how it works? Should I take actions in 
 my app when I somehow detect this situation?
 
 TIA,
 Mariano
 
 [1] 
 http://www.sqlalchemy.org/trac/changeset/dfb0b6aa622006e1e71a5296c3fa17667e24aac1/

it will recover in the sense that, when you catch this exception and then 
continue to use that Connection, it knows that it needs to reconnect with a 
fresh DBAPI connection.   the underlying connection pool is also dumped when 
this exception is encountered.

it’s not possible for the Connection to transparently retry the operation with 
a new DBAPI connection without raising an error because connection session 
state is lost.   e.g. if it occurs in the middle of a result set, the results 
are gone, occurs in the middle of a transaction, everything in the transaction 
is gone, etc.
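A hedged sketch of what recovery looks like from the application side, given the above: since the Connection cannot transparently retry, the application re-runs the whole unit of work after catching a disconnect-style error. The names below (ConnectionLost, run_with_reconnect) are illustrative stand-ins, not SQLAlchemy API; with SQLAlchemy you would catch sqlalchemy.exc.DBAPIError and check its connection_invalidated attribute before retrying.

```python
class ConnectionLost(Exception):
    """Stand-in for a driver-level disconnect error (e.g. ORA-03135)."""

def run_with_reconnect(work, is_disconnect=lambda exc: True, retries=1):
    """Run work(); if it raises a disconnect-style error, run it again on
    what would be a freshly checked-out connection.  Anything lost
    mid-transaction must be redone by `work` itself -- the wrapper cannot
    recover session state, only retry the whole unit of work."""
    attempt = 0
    while True:
        try:
            return work()
        except Exception as exc:
            if not is_disconnect(exc) or attempt >= retries:
                raise
            attempt += 1  # a fresh connection would be checked out here

# usage: a unit of work that hits a disconnect once, then succeeds
calls = []
def work():
    calls.append(1)
    if len(calls) == 1:
        raise ConnectionLost("ORA-03135: connection lost contact")
    return "ok"

result = run_with_reconnect(
    work, is_disconnect=lambda e: isinstance(e, ConnectionLost))
```

The point of the wrapper taking a callable rather than a statement is exactly the caveat above: the retryable unit has to be the whole transaction, not the individual statement that failed.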








Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Michael Bayer
oh, you want to refer to the tertiary table in both the primary and secondary 
join. so right, this pattern does not correspond to the A-secondary-B 
pattern and isn’t really a classic many-to-many.

a quick way to map these are to use non primary mappers (was going to just 
paraphrase, but let me just try it out b.c. these are fun anyway, and I want to 
see the new joining behavior we have in 0.9…):

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class First(Base):
    __tablename__ = 'first'

    first_id = Column(Integer, primary_key=True)
    partition_key = Column(String)

    def __repr__(self):
        return "First(%s, %s)" % (self.first_id, self.partition_key)

class Second(Base):
    __tablename__ = 'second'

    id = Column(Integer, primary_key=True)
    first_id = Column(Integer)
    other_id = Column(Integer)

class Partitioned(Base):
    __tablename__ = 'partitioned'

    id = Column(Integer, primary_key=True)
    partition_key = Column(String)
    other_id = Column(Integer)

    def __repr__(self):
        return "Partitioned(%s, %s)" % (self.partition_key, self.other_id)


j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)

partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
    # note we need to disambiguate columns here - the join()
    # will provide them as j.c.tablename_colname for access,
    # but they retain their real names in the mapping
    "id": j.c.partitioned_id,
    "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
    "secondary_id": j.c.second_id
})

First.partitioned = relationship(
    partitioned_second,
    primaryjoin=and_(
        First.partition_key == partitioned_second.c.partition_key,
        First.first_id == foreign(partitioned_second.c.first_id)
    ), innerjoin=True)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.drop_all(e)
Base.metadata.create_all(e)
s = Session(e)
s.add_all([
    First(first_id=1, partition_key='p1'),
    First(first_id=2, partition_key='p1'),
    First(first_id=3, partition_key='p2'),
    Second(first_id=1, other_id=1),
    Second(first_id=2, other_id=1),
    Second(first_id=3, other_id=2),
    Partitioned(partition_key='p1', other_id=1),
    Partitioned(partition_key='p1', other_id=2),
    Partitioned(partition_key='p2', other_id=2),
])
s.commit()

for row in s.query(First, Partitioned).join(First.partitioned):
    print(row)

for f in s.query(First):
    for p in f.partitioned:
        print(f.partition_key, p.partition_key)


I mapped to a join directly, and not a select, so as long as we aren’t using 
SQLite (and are using 0.9) we get nested join behavior like this:

SELECT first.first_id AS first_first_id, first.partition_key AS 
first_partition_key, partitioned.id AS partitioned_id, 
partitioned.partition_key AS partitioned_partition_key, partitioned.other_id AS 
partitioned_other_id 
FROM first JOIN (partitioned JOIN second ON partitioned.other_id = 
second.other_id) ON first.partition_key = partitioned.partition_key AND 
first.first_id = second.first_id
2013-12-05 11:27:18,347 INFO sqlalchemy.engine.base.Engine {}
(First(1, p1), Partitioned(p1, 1))
(First(2, p1), Partitioned(p1, 1))
(First(3, p2), Partitioned(p2, 2))


the load of f.partitioned will load the Partitioned objects in terms of the 
“partitioned_second” mapper, so those objects will have those extra cols from 
“second” on them.  You can screw around with this using exclude_properties for 
those cols you don’t need to refer to on the mapping, and perhaps primary_key 
if the mapper complains, such as:

partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
    "id": j.c.partitioned_id,
    "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
}, exclude_properties=[j.c.second_id],
   primary_key=[j.c.partitioned_id, j.c.second_other_id])

or you can just ignore those extra attributes on some of your Partitioned 
objects.








On Dec 5, 2013, at 11:03 AM, Adrian Schreyer adrian.schre...@gmail.com wrote:

 Given the three mappings First, Second and Partitioned, I want to declare a 
 relationship between First and Partitioned. The problem is that Partitioned 
 is partitioned by partition_key which is a column in First but not in Second. 
 Second however contains the identifier that actually links First to specific 
 rows in the partitioned table.
 
 So far the mapping looks like this mock example:
 
 
partitioned = relationship(Partitioned,
    secondary=Base.metadata.tables['schema.seconds'],
    primaryjoin=and_(First.first_id==Second.first_id,
                     First.partition_key==Partitioned.partition_key),
    secondaryjoin=Second.other_id==Partitioned.other_id,
    foreign_keys=[Second.first_id, 

Re: [sqlalchemy] fake models ?

2013-12-05 Thread Richard Gerd Kuesters
the 'vertical mapper' example was kind of a base for me to develop my 
eav system. i also added a table that defines the structure of 
the entity; since then I use schematics (and its json schema import/export 
utility). it's very handy.


i think that some pseudo-code can explain it a little better, like those in sa 
(i'll just write simplified versions):



class Entity(Base):
    id = Column(int, pk)
    name = Column(str, unique)


class Attribute(Base):
    id = Column(int, pk)
    name = Column(str)
    discriminator = Column(enum)  # bool, string, integer, blob, etc
    entity_id = Column(fk(Entity.id))

    entity = relationship(Entity)

    tbl_args = uniqueconstraint(name, entity_id)


basically, with those classes i can define how my eav objects look (of 
course, they're much more complete, but for now this gives the idea). 
so, when I have this schematics model:



class Person(Model):
    name = StringType(required=True)
    website = URLType()


it will be interpreted as:


person_entity = Entity()
person_entity.name = 'person'

session.add(person_entity)
session.commit()

name_attr = Attribute()
name_attr.name = 'name'
# i used here a slightly modified code from decl_enum, a post you wrote in your blog
name_attr.discriminator = TypeDiscriminator.STRING
name_attr.entity = person_entity

session.add(name_attr)
session.commit()

website_attr = Attribute()
...


i think this can start to illustrate better what the Person model 
really is. now, let's go to the values (in poor code again, lol):



class Instance(Base):
    id = Column(int, pk)
    entity_id = Column(fk(Entity.id))

    entity = relationship(Entity)
    valid_attributes = relationship(Attribute,
        primaryjoin=entity_id==Attribute.entity_id)


class Value(Base):
    id = Column(int, pk)
    attribute_id = Column(fk(Attribute.id))
    discriminator = Column(enum)

    __mapper_args__ = dict(polymorphic_on=discriminator)


class StringValue(Value):
    id = Column(fk(Value.id))
    value = Column(string)

    __mapper_args__ = dict(polymorphic_identity=TypeDiscriminator.STRING)


class BoolValue(Value):
    ...


then, with a dozen ifs and elses, I can translate the values given to 
a Person instance from schematics directly to the database.
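A hedged illustration of that "dozen ifs and elses": a minimal dispatch from Python value types to a TypeDiscriminator enum, which is what decides which Value subtable a given attribute value would land in. TypeDiscriminator and discriminator_for here are illustrative assumptions, not the poster's actual code.

```python
import enum

class TypeDiscriminator(enum.Enum):
    STRING = "string"
    INTEGER = "integer"
    BOOLEAN = "boolean"

def discriminator_for(value):
    """Map a plain Python value to the Value subtable it would be stored in."""
    # bool must be checked before int, since bool is a subclass of int
    if isinstance(value, bool):
        return TypeDiscriminator.BOOLEAN
    if isinstance(value, int):
        return TypeDiscriminator.INTEGER
    if isinstance(value, str):
        return TypeDiscriminator.STRING
    raise TypeError("no value table for %r" % type(value))

# e.g. a Person with {'name': 'foo', 'active': True} would be split into
# a StringValue row and a BoolValue row respectively
```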


so, if i want to find a string value ('foo') from the Person model, with 
an attribute named 'name', my query would be something like (the mess 
below):



res = session.query([Entity, Attribute, Instance, StringValue])
    .join(Attribute, Attribute.entity_id == Entity.id)
    .join(Instance, Instance.entity_id == Entity.id)
    .join(StringValue, [StringValue.id == Value.id,
                        Value.attribute_id == Attribute.id])
    .filter(Entity.name == 'person', Attribute.name == 'name',
            Attribute.discriminator == TypeDiscriminator.STRING,
            StringValue.value == 'foo')
    .all()


ok. this query seems crappy and probably won't work if i run it now (i'm 
writing this on the fly, lol), but it can give you a better idea of my 
goal :)


now, instead of making all these queries (i don't have a problem writing 
them because i'm writing the system), i would like to create something 
easier to develop (since this project is the first python project some 
of them will work on), so imagine using sqlalchemy at this level :)


thanks a lot for your help.


my best regards,
richard.




On 12/05/2013 01:52 PM, Michael Bayer wrote:


On Dec 5, 2013, at 6:16 AM, Richard Gerd Kuesters 
rich...@humantech.com.br wrote:



ok, let's see if I can explain the scenario better :D

first, i'm using schematics http://schematics.readthedocs.org/ to 
define arbitrary models for message exchange between processes, ui, 
events and so on. based on a json schema (that the schematics model 
generates), i'm able to store data on the database using a relatively 
sparse system of tables that compose my eav system -- not a good 
system (eav), but sometimes necessary (and i have bad experiences 
with nosql databases).


so, given a composed model with data, i can populate the database and 
retrieve data from it (i'm using postgres btw). so far so good.


now, i would like to query them :) let's suppose i have the 
following (simple) model (from schematics example):

class Person(Model):
    name = StringType(required=True)
    website = URLType()

with that, i know that StringType should go to StringDataTable, 
URLType to StringDataTable, Boolean to BooleanTable, and so on. using 
some sql filtering, i can retrieve all values that represent the 
field 'name' from the model 'Person', but that's not quite viable if 
I want my devs to work with eav when needed instead of going nuts 
because someone created a hundred tables to store just lines of data 
on each one.


so, if could extend 

Re: [sqlalchemy] ORM events order

2013-12-05 Thread Tim Kersten
thank you.

What of the relative ordering of the different ORM event types? i.e.

before_flush
before_delete
after_flush
etc

When looking at before_flush I see the before_delete has not yet been fired, 
yet it has been fired by the time after_flush runs. Is this guaranteed to 
always be the case?


On 5 Dec 2013, at 16:01, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Dec 5, 2013, at 10:51 AM, Tim Kersten t...@io41.com wrote:
 
 Hi Folks,
 
 Is the order that ORM events ( 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html ) are fired in 
 deterministic and guaranteed to be the same every time? I've searched the 
 docs and google but couldn't find anything relating to their relative order. 
 
 
 the events are ordered.   when you do an event.listen it appends the event 
 listener to a list of listeners.   the events are fired from the beginning of 
 the list on forward.there’s actually an undocumented argument to 
 event.listen() “insert=True” that will cause the listener to be inserted at 
 position zero rather than appended.
 
 the reason the order of events is not really mentioned much is because 
 there’s complex cases where the order of listener application has not been 
 evaluated or tested.  When you make use of mapper or instrumentation events 
 against un-mapped base classes and such, the actual append() operation 
 doesn’t occur until later, when classes are actually mapped, and this works 
 by shuttling the event listener functions around to those classes.  In these 
 cases we don’t as yet have guarantees in place as to the order of the 
 listeners being first applied, e.g. if you had a class that is a product of 
 two mixins, and each mixin has listeners applied to it, that sort of thing.
 
 however, the order of listeners once applied should definitely be the same 
 each time assuming no changes to the listener collections.
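The append/insert semantics quoted above can be sketched in plain Python; Dispatcher below is an illustrative stand-in for SQLAlchemy's event registry, not its actual implementation: listeners fire in registration order, and insert=True puts a listener at position zero, mirroring event.listen(..., insert=True).

```python
class Dispatcher:
    def __init__(self):
        self._listeners = []

    def listen(self, fn, insert=False):
        if insert:
            self._listeners.insert(0, fn)  # fires before existing listeners
        else:
            self._listeners.append(fn)     # fires after existing listeners

    def fire(self, *args):
        # fire from the beginning of the list on forward
        for fn in list(self._listeners):
            fn(*args)

order = []
d = Dispatcher()
d.listen(lambda: order.append("first-registered"))
d.listen(lambda: order.append("second-registered"))
d.listen(lambda: order.append("inserted"), insert=True)
d.fire()
# order is now ["inserted", "first-registered", "second-registered"]
```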
 
 



Re: [sqlalchemy] ORM events order

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 12:14 PM, Tim Kersten t...@io41.com wrote:

 thank you.
 
 What of the relative ordering of the different ORM event types? i.e.
 
 before_flush
 before_delete
 after_flush
 etc
 
 When looking at before_flush I see the before_delete has not yet been fired, 
 yet it has been fired by the time after_flush runs. Is this guaranteed to 
 always be the case?

yes, before_flush and after_flush provide boundaries around the mechanics of 
the flush itself.  before_delete as well as the other mapper-level events like 
before_update before_insert after_update etc. are all within the flush 
mechanics.

you can’t necessarily rely upon the ordering of insert/update/delete events 
within the flush however, relative to different objects and especially across 
different kinds of objects.  The mapper-level flush events are fired right as 
individual batches of objects are being prepared for INSERT/UPDATE/DELETE 
statements.




 
 
 On 5 Dec 2013, at 16:01, Michael Bayer mike...@zzzcomputing.com wrote:
 
 
 On Dec 5, 2013, at 10:51 AM, Tim Kersten t...@io41.com wrote:
 
 Hi Folks,
 
  Is the order that ORM events ( 
  http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html ) are fired in 
  deterministic and guaranteed to be the same every time? I've searched the 
  docs and google but couldn't find anything relating to their relative order. 
 
 
 the events are ordered.   when you do an event.listen it appends the event 
 listener to a list of listeners.   the events are fired from the beginning 
 of the list on forward.there’s actually an undocumented argument to 
 event.listen() “insert=True” that will cause the listener to be inserted at 
 position zero rather than appended.
 
 the reason the order of events is not really mentioned much is because 
 there’s complex cases where the order of listener application has not been 
 evaluated or tested.  When you make use of mapper or instrumentation events 
 against un-mapped base classes and such, the actual append() operation 
 doesn’t occur until later, when classes are actually mapped, and this works 
 by shuttling the event listener functions around to those classes.  In these 
 cases we don’t as yet have guarantees in place as to the order of the 
 listeners being first applied, e.g. if you had a class that is a product of 
 two mixins, and each mixin has listeners applied to it, that sort of thing.
 
 however, the order of listeners once applied should definitely be the same 
 each time assuming no changes to the listener collections.
 
 
 





[sqlalchemy] question about race conditions

2013-12-05 Thread Jonathan Vanasco

i'm looking at moving some raw sql in twisted to SqlAlchemy and have a 
question.

I have a multi-threaded twisted daemon that tends to generate a lot of race 
conditions on a few tables that are frequently hit.

I get integrity errors from something like this :

 domain = SELECT * FROM domains WHERE 
 if not domain :
domain = INSERT INTO domain VALUES 

the fix was :

 domain = SELECT * FROM domains WHERE 
 if not domain :
try:
   savepoint = db.savepoint()
   INSERT INTO domain VALUES 
 except psycopg2.IntegrityError : 
  savepoint.release()
  domain = SELECT * FROM domains WHERE 

is there a way to catch an integrity error like this with SqlAlchemy ?

i'm trying to get away from directly using psycopg2, it's getting too 
annoying to maintain raw sql.



Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
The partitioned relationship actually referred to the tertiary table in
both the primary and secondary join - the problem for me was that in the
primaryjoin

primaryjoin=and_(First.first_id==Second.first_id,
First.partition_key==Partitioned.partition_key)

only First.first_id will be interpolated with the actual value first_id of
the instance in question whereas First.partition_key on the other hand will
be interpolated as column object. The problem is that in this case
First.partition_key has to be interpolated with the actual value to get the
constraint-exclusion to work. In a normal many-to-many relationship this
would not be necessary and maybe that is why it only interpolates the
values for the join on the secondary table.

The partitioned relationship emits a query like this if the attribute is
accessed:

SELECT partitioned.*
  FROM partitioned, second, first
 WHERE %(param_1)s = second.first_id
   AND first.partition_key = partitioned.partition_key
   AND second.other_id = partitioned.other_id

But I would need first.partition_key to be %(param_2)s.

So far I used a @property around a query function to add the partition_key
to query.filter() manually.


On Thu, Dec 5, 2013 at 4:37 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 oh, you want to refer to the tertiary table in both the primary and
 secondary join.so right this pattern does not correspond to the
 A-secondary-B pattern and isn’t really a classic many-to-many.

 a quick way to map these are to use non primary mappers (was going to just
 paraphrase, but let me just try it out b.c. these are fun anyway, and I
 want to see the new joining behavior we have in 0.9…):

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class First(Base):
     __tablename__ = 'first'

     first_id = Column(Integer, primary_key=True)
     partition_key = Column(String)

     def __repr__(self):
         return "First(%s, %s)" % (self.first_id, self.partition_key)

 class Second(Base):
     __tablename__ = 'second'

     id = Column(Integer, primary_key=True)
     first_id = Column(Integer)
     other_id = Column(Integer)

 class Partitioned(Base):
     __tablename__ = 'partitioned'

     id = Column(Integer, primary_key=True)
     partition_key = Column(String)
     other_id = Column(Integer)

     def __repr__(self):
         return "Partitioned(%s, %s)" % (self.partition_key, self.other_id)


 j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)

 partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
     # note we need to disambiguate columns here - the join()
     # will provide them as j.c.tablename_colname for access,
     # but they retain their real names in the mapping
     "id": j.c.partitioned_id,
     "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
     "secondary_id": j.c.second_id
 })

 First.partitioned = relationship(
     partitioned_second,
     primaryjoin=and_(
         First.partition_key == partitioned_second.c.partition_key,
         First.first_id == foreign(partitioned_second.c.first_id)
     ), innerjoin=True)

 e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)
 s = Session(e)
 s.add_all([
     First(first_id=1, partition_key='p1'),
     First(first_id=2, partition_key='p1'),
     First(first_id=3, partition_key='p2'),
     Second(first_id=1, other_id=1),
     Second(first_id=2, other_id=1),
     Second(first_id=3, other_id=2),
     Partitioned(partition_key='p1', other_id=1),
     Partitioned(partition_key='p1', other_id=2),
     Partitioned(partition_key='p2', other_id=2),
 ])
 s.commit()

 for row in s.query(First, Partitioned).join(First.partitioned):
     print(row)

 for f in s.query(First):
     for p in f.partitioned:
         print(f.partition_key, p.partition_key)


 I mapped to a join directly, and not a select, so as long as we aren’t
 using SQLite (and are using 0.9) we get nested join behavior like this:

 SELECT first.first_id AS first_first_id, first.partition_key AS
 first_partition_key, partitioned.id AS partitioned_id,
 partitioned.partition_key AS partitioned_partition_key,
 partitioned.other_id AS partitioned_other_id
 FROM first JOIN (partitioned JOIN second ON partitioned.other_id =
 second.other_id) ON first.partition_key = partitioned.partition_key AND
 first.first_id = second.first_id
 2013-12-05 11:27:18,347 INFO sqlalchemy.engine.base.Engine {}
 (First(1, p1), Partitioned(p1, 1))
 (First(2, p1), Partitioned(p1, 1))
 (First(3, p2), Partitioned(p2, 2))


 the load of f.partitioned will load the Partitioned objects in terms of
 the “partitioned_second” mapper, so those objects will have those extra
 cols from “second” on them.  You can 

Re: [sqlalchemy] Expected behaviour for Oracle's lost contact

2013-12-05 Thread Michael Bayer
also note if you really want to prevent disconnects at the top of a 
transaction, you can use a pessimistic approach, see 
http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html?highlight=pessimistic#disconnect-handling-pessimistic
 .


On Dec 5, 2013, at 11:05 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Dec 5, 2013, at 10:54 AM, Mariano Mara mariano.m...@gmail.com wrote:
 
 Hi there, in one application I am using sqlalchemy 0.7.8 against Oracle 11 
 release 1 and I am hitting Oracle's lost contact more times than I would 
 like to. Since the app seems unable to recover from the error, I have to 
 restart it completely (it is a pyramid app).
 According to this changeset[1], the error ORA-03135 (connection lost 
 contact) is included in the expected disconnection errors from oracle since 
 0.7.5  and my expectation would be that sqlalchemy would be able to recover 
 from such error gracefully. Did I misunderstand how it works? Should I take 
  actions in my app when I somehow detect this situation?
 
 TIA,
 Mariano
 
 [1] 
 http://www.sqlalchemy.org/trac/changeset/dfb0b6aa622006e1e71a5296c3fa17667e24aac1/
 
 it will recover in that when you catch this exception, then continue to use 
 that Connection, it will know that it needs to reconnect with a fresh DBAPI 
 connection.   the underlying connection pool is also dumped when this 
 exception is encountered.
 
 it’s not possible for the Connection to transparently retry the operation 
 with a new DBAPI connection without raising an error because connection 
 session state is lost.   e.g. if it occurs in the middle of a result set, the 
 results are gone, occurs in the middle of a transaction, everything in the 
 transaction is gone, etc.
 
 
 
 





Re: [sqlalchemy] question about race conditions

2013-12-05 Thread Michael Bayer

On Dec 5, 2013, at 2:24 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Dec 5, 2013, at 1:25 PM, Jonathan Vanasco jonat...@findmeon.com wrote:
 
 
 i'm looking at moving some raw sql in twisted to SqlAlchemy and have a 
 question.
 
 I have a multi-threaded twisted daemon that tends to generate a lot of race 
 conditions on a few tables that are frequently hit.
 
 I get integrity errors from something like this :
 
 domain = SELECT * FROM domains WHERE 
 if not domain :
domain = INSERT INTO domain VALUES 
 
 the fix was :
 
 domain = SELECT * FROM domains WHERE 
 if not domain :
try:
   savepoint = db.savepoint()
   INSERT INTO domain VALUES 
 except psycopg2.IntegrityError : 
   savepoint.release()
  domain = SELECT * FROM domains WHERE 
 
 is there a way to catch an integrity error like this with SqlAlchemy ?
 
 i'm trying to get away from directly using psycopg2, it's getting too 
 annoying to maintain raw sql.
 
 sure, catch sqlalchemy.ext.IntegrityError instead.

that should be sqlalchemy.exc.IntegrityError
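A minimal sketch of the savepoint-guarded get-or-create from the question, written against the stdlib sqlite3 DB-API so it is runnable anywhere; with SQLAlchemy the same shape would be a nested transaction (Session.begin_nested()) with `except sqlalchemy.exc.IntegrityError`. Table and savepoint names here are illustrative.

```python
import sqlite3

# autocommit mode so SAVEPOINT / RELEASE / ROLLBACK pass through verbatim,
# without the sqlite3 module's implicit transaction handling interfering
conn = sqlite3.connect(":memory:")
conn.isolation_level = None
conn.execute("CREATE TABLE domains (name TEXT PRIMARY KEY)")

def get_or_create(conn, name):
    """Try the INSERT under a savepoint; if a concurrent writer (or an
    earlier call) already inserted the row, roll back just the savepoint
    and fall back to a SELECT, leaving any enclosing transaction usable."""
    try:
        conn.execute("SAVEPOINT get_or_create")
        conn.execute("INSERT INTO domains (name) VALUES (?)", (name,))
        conn.execute("RELEASE SAVEPOINT get_or_create")
        return name, True                      # we created it
    except sqlite3.IntegrityError:
        conn.execute("ROLLBACK TO SAVEPOINT get_or_create")
        conn.execute("RELEASE SAVEPOINT get_or_create")
        row = conn.execute(
            "SELECT name FROM domains WHERE name = ?", (name,)).fetchone()
        return row[0], False                   # it already existed

first = get_or_create(conn, "example.com")     # -> ("example.com", True)
second = get_or_create(conn, "example.com")    # -> ("example.com", False)
```

Rolling back only the savepoint is what keeps the surrounding transaction alive after the constraint violation, which is the whole point of the pattern in the original question.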






Re: [sqlalchemy] Expected behaviour for Oracle's lost contact

2013-12-05 Thread Mariano Mara



On jue 05 dic 2013 16:19:14 ART, Michael Bayer wrote:

also note if you really want to prevent disconnects at the top of a 
transaction, you can use a pessimistic approach, see 
http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html?highlight=pessimistic#disconnect-handling-pessimistic
 .




thanks so much for your quick answer. I will review this option and 
evaluate if I can use it.





On Dec 5, 2013, at 11:05 AM, Michael Bayer mike...@zzzcomputing.com wrote:



On Dec 5, 2013, at 10:54 AM, Mariano Mara mariano.m...@gmail.com wrote:


Hi there, in one application I am using sqlalchemy 0.7.8 against Oracle 11 
release 1 and I am hitting Oracle's lost contact more times than I would like 
to. Since the app seems unable to recover from the error, I have to restart it 
completely (it is a pyramid app).
According to this changeset[1], the error ORA-03135 (connection lost contact) 
is included in the expected disconnection errors from oracle since 0.7.5  and 
my expectation would be that sqlalchemy would be able to recover from such 
error gracefully. Did I misunderstand how it works? Should I take actions in my 
app when I somehow detect this situation?

TIA,
Mariano

[1] 
http://www.sqlalchemy.org/trac/changeset/dfb0b6aa622006e1e71a5296c3fa17667e24aac1/


it will recover in that when you catch this exception, then continue to use 
that Connection, it will know that it needs to reconnect with a fresh DBAPI 
connection.   the underlying connection pool is also dumped when this exception 
is encountered.

it’s not possible for the Connection to transparently retry the operation with 
a new DBAPI connection without raising an error because connection session 
state is lost.   e.g. if it occurs in the middle of a result set, the results 
are gone, occurs in the middle of a transaction, everything in the transaction 
is gone, etc.










Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
I will try this out then, thanks for your help! I assume this works in 0.9
only?


On Thu, Dec 5, 2013 at 7:18 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 With the example I gave, when accessing .partitioned on a First instance,
 the lazy loader will convert all columns from “First” into a bound
 parameter, it emits this:


  SELECT partitioned.other_id AS partitioned_other_id, second.other_id AS
 second_other_id, partitioned.partition_key AS partitioned_partition_key,
 second.first_id AS second_first_id
 FROM partitioned JOIN second ON partitioned.other_id = second.other_id
 WHERE ? = partitioned.partition_key AND ? = second.first_id
 2013-12-05 14:14:42,689 INFO sqlalchemy.engine.base.Engine (u'p1', 2)


 “first.partition_key” is not in the query, it’s replaced by ‘p1’ in this
 case, the value that was assigned to that First instance.  There is no
 “secondary” table per se in the example I gave.




 On Dec 5, 2013, at 1:55 PM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 The partitioned relationship actually referred to the tertiary table in
 both the primary and secondary join - the problem for me was that in the
 primaryjoin

 primaryjoin=and_(First.first_id==Second.first_id, 
 First.partition_key==Partitioned.partition_key)

 only First.first_id will be interpolated with the actual value first_id of
 the instance in question whereas First.partition_key on the other hand
 will be interpolated as column object. The problem is that in this case
 First.partition_key has to be interpolated with the actual value to get
 the constraint-exclusion to work. In a normal many-to-many relationship
 this would not be necessary and maybe that is why it only interpolates the
 values for the join on the secondary table.

 The partitioned relationship emits a query like this if the attribute is
 accessed:

 SELECT partitioned.*
   FROM partitioned, second, first
  WHERE %(param_1)s = second.first_id
AND first.partition_key = partitioned.partition_key
AND second.other_id = partitioned.other_id

 But I would need first.partition_key to be %(param_2)s.

 So far I used a @property around a query function to add the partition_key
 to query.filter() manually.


 On Thu, Dec 5, 2013 at 4:37 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 oh, you want to refer to the tertiary table in both the primary and
 secondary join.so right this pattern does not correspond to the
 A-secondary-B pattern and isn’t really a classic many-to-many.

 a quick way to map these are to use non primary mappers (was going to
 just paraphrase, but let me just try it out b.c. these are fun anyway, and
 I want to see the new joining behavior we have in 0.9…):

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

  class First(Base):
 __tablename__ = 'first'

 first_id = Column(Integer, primary_key=True)
 partition_key = Column(String)

 def __repr__(self):
 return (First(%s, %s) % (self.first_id, self.partition_key))

 class Second(Base):
 __tablename__ = 'second'

 id = Column(Integer, primary_key=True)
 first_id = Column(Integer)
 other_id = Column(Integer)

 class Partitioned(Base):
 __tablename__ = 'partitioned'

 id = Column(Integer, primary_key=True)
 partition_key = Column(String)
 other_id = Column(Integer)

 def __repr__(self):
 return "Partitioned(%s, %s)" % (self.partition_key,
 self.other_id)


 j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)

 partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
 # note we need to disambiguate columns here - the join()
 # will provide them as j.c.tablename_colname for access,
 # but they retain their real names in the mapping
 "id": j.c.partitioned_id,
 "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
 "secondary_id": j.c.second_id
 })

 First.partitioned = relationship(
 partitioned_second,
 primaryjoin=and_(
 First.partition_key ==
 partitioned_second.c.partition_key,
 First.first_id ==
 foreign(partitioned_second.c.first_id)
 ), innerjoin=True)

 e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)
 s = Session(e)
 s.add_all([
 First(first_id=1, partition_key='p1'),
 First(first_id=2, partition_key='p1'),
 First(first_id=3, partition_key='p2'),
 Second(first_id=1, other_id=1),
 Second(first_id=2, other_id=1),
 Second(first_id=3, other_id=2),
 Partitioned(partition_key='p1', other_id=1),
 Partitioned(partition_key='p1', other_id=2),
 Partitioned(partition_key='p2', other_id=2),
 ])
 s.commit()

 for row in s.query(First, 

Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Michael Bayer
it should work in 0.8 as well (and can be done even in 0.7 with some 
adjustments), just not the more optimized nested JOIN part.


On Dec 5, 2013, at 2:41 PM, Adrian Schreyer adrian.schre...@gmail.com wrote:

 I will try this out then, thanks for your help! I assume this works in 0.9 
 only?
 
 
 On Thu, Dec 5, 2013 at 7:18 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
With the example I gave, when accessing .partitioned on a First instance, the 
lazy loader converts all columns from “First” into bound parameters; it 
emits this:
 
 
  SELECT partitioned.other_id AS partitioned_other_id, second.other_id AS 
 second_other_id, partitioned.partition_key AS partitioned_partition_key, 
 second.first_id AS second_first_id 
 FROM partitioned JOIN second ON partitioned.other_id = second.other_id 
 WHERE ? = partitioned.partition_key AND ? = second.first_id
 2013-12-05 14:14:42,689 INFO sqlalchemy.engine.base.Engine (u'p1', 2)
 
 
“first.partition_key” is not in the query; it’s replaced by ‘p1’ in this 
case, the value that was assigned to that First instance. There is no 
“secondary” table per se in the example I gave.
 
 
 
 
 On Dec 5, 2013, at 1:55 PM, Adrian Schreyer adrian.schre...@gmail.com wrote:
 
 The partitioned relationship actually referred to the tertiary table in both 
 the primary and secondary join - the problem for me was that in the 
 primaryjoin
 
 primaryjoin=and_(First.first_id==Second.first_id, 
 First.partition_key==Partitioned.partition_key)
 only First.first_id will be interpolated with the actual value first_id of 
 the instance in question whereas First.partition_key on the other hand will 
 be interpolated as column object. The problem is that in this case 
 First.partition_key has to be interpolated with the actual value to get the 
 constraint-exclusion to work. In a normal many-to-many relationship this 
 would not be necessary and maybe that is why it only interpolates the values 
 for the join on the secondary table.
 
 The partitioned relationship emits a query like this if the attribute is 
 accessed:
 
 
 SELECT partitioned.*
   FROM partitioned, second, first
  WHERE %(param_1)s = second.first_id
AND first.partition_key = partitioned.partition_key
AND second.other_id = partitioned.other_id
 But I would need first.partition_key to be %(param_2)s.
 
 So far I used a @property around a query function to add the partition_key 
 to query.filter() manually.
 
 
 
 On Thu, Dec 5, 2013 at 4:37 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 oh, you want to refer to the tertiary table in both the primary and 
 secondary join. So right, this pattern does not correspond to the 
 A->secondary->B pattern and isn’t really a classic many-to-many.
 
 a quick way to map these are to use non primary mappers (was going to just 
 paraphrase, but let me just try it out b.c. these are fun anyway, and I want 
 to see the new joining behavior we have in 0.9…):
 
 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 
 Base = declarative_base()
 
 class First(Base):
 __tablename__ = 'first'
 
 first_id = Column(Integer, primary_key=True)
 partition_key = Column(String)
 
 def __repr__(self):
 return "First(%s, %s)" % (self.first_id, self.partition_key)
 
 class Second(Base):
 __tablename__ = 'second'
 
 id = Column(Integer, primary_key=True)
 first_id = Column(Integer)
 other_id = Column(Integer)
 
 class Partitioned(Base):
 __tablename__ = 'partitioned'
 
 id = Column(Integer, primary_key=True)
 partition_key = Column(String)
 other_id = Column(Integer)
 
 def __repr__(self):
 return "Partitioned(%s, %s)" % (self.partition_key, self.other_id)
 
 
 j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)
 
 partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
 # note we need to disambiguate columns here - the join()
 # will provide them as j.c.tablename_colname for access,
 # but they retain their real names in the mapping
 "id": j.c.partitioned_id,
 "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
 "secondary_id": j.c.second_id
 })
 
 First.partitioned = relationship(
 partitioned_second,
 primaryjoin=and_(
 First.partition_key == 
 partitioned_second.c.partition_key,
 First.first_id == 
 foreign(partitioned_second.c.first_id)
 ), innerjoin=True)
 
 e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)
 s = Session(e)
 s.add_all([
 First(first_id=1, partition_key='p1'),
 First(first_id=2, partition_key='p1'),
 First(first_id=3, partition_key='p2'),
 Second(first_id=1, other_id=1),
 Second(first_id=2, 

Re: [sqlalchemy] question about race conditions

2013-12-05 Thread Jonathan Vanasco
oh that's great - I didn't expect SqlAlchemy to aggregate/support the 
different driver errors like that!

thanks so much, Michael!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] creating a functional index for XML

2013-12-05 Thread Christian Lang
Hi,

I'm trying to create the following index (on PostgreSQL) with SQLAlchemy 
0.8:

CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT));

(where doc is a column of type XML)

I got this far:

Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))

but get the error:
Traceback (most recent call last):
  File "xmltests.py", line 146, in <module>
    idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc))[1], 
TEXT))
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py", line 
320, in __getitem__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 
2311, in operate
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py", line 
320, in __getitem__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 
1994, in operate
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 
2148, in _unsupported_impl
NotImplementedError: Operator 'getitem' is not supported on this expression

It seems getitem should be allowed since the xpath expression returns an 
array of nodes (and it is fine in PostgreSQL).
Any idea what I am doing wrong and how to fix it?

Thanks,
Christian



Re: [sqlalchemy] question about race conditions

2013-12-05 Thread Julien Cigar
On Thu, Dec 05, 2013 at 10:25:46AM -0800, Jonathan Vanasco wrote:
 
 i'm looking at moving some raw sql in twisted to SqlAlchemy and have a 
 question.
 
 I have a multi-threaded twisted daemon that tends to generate a lot of race 
 conditions on a few tables that are frequently hit.
 
 I get integrity errors from something like this :
 
  domain = SELECT * FROM domains WHERE 
  if not domain :
 domain = INSERT INTO domain VALUES 
 
 the fix was :
 
  domain = SELECT * FROM domains WHERE 
  if not domain :
 try:
savepoint = db.savepoint()
INSERT INTO domain VALUES 
  except psycopg2.IntegrityError : 
   savepoint.release()
   domain = SELECT * FROM domains WHERE 

Maybe a better way would be
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
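
To the question of catching this through SQLAlchemy rather than psycopg2
directly: SQLAlchemy wraps driver integrity violations in
sqlalchemy.exc.IntegrityError on every backend, so the select/insert/
re-select pattern can be written once against the ORM. A minimal sketch
(SQLite here for self-containment; the Domain model and column names are
made up for illustration, and the commit/rollback stands in for the
original's SAVEPOINT, which in SQLAlchemy would be Session.begin_nested()):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Domain(Base):
    __tablename__ = "domains"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

def get_or_create_domain(session, name):
    # SELECT first; if another writer beat us to the INSERT, the unique
    # constraint fires as sqlalchemy.exc.IntegrityError and we re-SELECT
    # the winner's row.
    domain = session.query(Domain).filter_by(name=name).first()
    if domain is not None:
        return domain
    try:
        domain = Domain(name=name)
        session.add(domain)
        session.commit()
    except IntegrityError:
        session.rollback()
        domain = session.query(Domain).filter_by(name=name).one()
    return domain

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
a = get_or_create_domain(session, "example.org")
b = get_or_create_domain(session, "example.org")
result = (a.id == b.id)
```

On PostgreSQL, wrapping only the INSERT in session.begin_nested() (a
SAVEPOINT) instead of committing keeps the enclosing transaction alive
across the failed insert, matching the psycopg2 savepoint code above.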

 
 is there a way to catch an integrity error like this with SqlAlchemy ?
 
 i'm trying to get away from directly using psycopg2, it's getting too 
 annoying to maintain raw sql.
 

-- 
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.



Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Michael Bayer
I think if you give your func.xpath a datatype of pg.ARRAY(of whatever) you’ll 
get it:

func.xpath(…, type_=postgresql.ARRAY)[1]


>>> from sqlalchemy import func, String
>>> from sqlalchemy.dialects import postgresql
>>> print func.xpath('something', 'somethingelse', 
... type_=postgresql.ARRAY(String()))[1].compile(dialect=postgresql.dialect())
xpath(%(xpath_1)s, %(xpath_2)s)[%(xpath_3)s]
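
A quick way to verify the tip without a database is to compile the
indexed expression yourself; giving the function an ARRAY return type is
what enables the [1] operator that previously raised NotImplementedError
(the exact parameter names in the rendered SQL vary by SQLAlchemy
version, so this only checks the shape):

```python
from sqlalchemy import String, func
from sqlalchemy.dialects import postgresql

# with type_=ARRAY(...), the function's comparator supports indexing
expr = func.xpath("//@bla", "doc", type_=postgresql.ARRAY(String()))[1]
result = str(expr.compile(dialect=postgresql.dialect()))
```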
 



On Dec 5, 2013, at 3:33 PM, Christian Lang christian.a.l...@gmail.com wrote:

 Hi,
 
 I'm trying to create the following index (on PostgreSQL) with SQLAlchemy 0.8:
 
 CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT));
 
 (where doc is a column of type XML)
 
 I got this far:
 
 Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))
 
 but get the error:
 Traceback (most recent call last):
   File xmltests.py, line 146, in module
 idx = Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], 
 TEXT))
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 320, 
 in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2311, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 320, 
 in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 1994, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2148, in _unsupported_impl
 NotImplementedError: Operator 'getitem' is not supported on this expression
 
 It seems getitem should be allowed since the xpath expression returns an 
 array of nodes (and it is fine in PostgreSQL).
 Any idea what I am doing wrong and how to fix it?
 
 Thanks,
 Christian
 
 





Re: [sqlalchemy] fake models ?

2013-12-05 Thread Michael Bayer

Overall the different pieces aren’t making sense entirely.

We have the notion of a “schema”, stored in the database - that is, 
Entity/Attribute.  Those tables are fixed per type.  It can tell me for example 
that there’s a “Person” type with two attributes associated, “name” and 
“website”.

So what happens when I do this:

class Person(Model):
name = StringType(required=True)
website = URLType()

is there a metaclass that’s writing to the database at that point the 
Entity/Attribute rows for that type?  It’s not clear when I say Person.website, 
am I just going by the fact that we have Person.website in the Python model, 
and if so what am I getting with the Attribute or even the Value table?  There 
could just be a table called “url_values” and I join from Instance to that.   
The schema seems to be stated twice here in two very different ways.

Also it is not clear what the purpose of Instance.valid_attributes is; it 
seems redundant, since Entity already refers to Attribute.




On Dec 5, 2013, at 11:48 AM, Richard Gerd Kuesters rich...@humantech.com.br 
wrote:

 the vertical mapper example was kind of a base for me to develop my eav 
 system. i have added also a table that defines the structure of the entity, 
 since then I use schematics (and its json schema import - export utility). 
 it's very handy.
 
 i think that a pseudo-code can explain a little better, those ones in sa 
 (i'll just write simplified):
 
 
 class Entity(Base):
 id = Column(int, pk)
 name = Column(str, unique)
 
 
 class Attribute(Base):
 id = Column(int, pk)
 name = Column(str)
 discriminator = Column(enum)  # bool, string, integer, blob, etc
 entity_id = Column(fk(Entity.id))
 
 entity = relationship(Entity)
 
 tbl_args = uniqueconstraint(name, entity_id)
 
 
 basically, with those classes i can define how my eav objects are (of course, 
 they're much more complete, but for now it can give the idea). so, when I 
 have this schematics model:
 
 
 class Person(Model):
 name = StringType(required=True)
 website = URLType()
 
 
 it will be interpreted as:
 
 
 person_entity = Entity()
 person_entity.name = 'person'
 
 session.add(person_entity)
 session.commit()
 
 name_attr = Attribute()
 name_attr.name = 'name'
 name_attr.discriminator = TypeDiscriminator.STRING  # i used here a slightly 
 modified code from decl_enum, a post you wrote in your blog
 name_attr.entity = person_entity
 
 session.add(name_attr)
 session.commit()
 
 website_attr = Attribute()
 ...
 
 
 i think this can start to illustrate better what the Person model really 
 is. now, let's go to the values (in poor code again, lol):
 
 
 class Instance(Base):
 id = Column(int, pk)
 entity_id = Column(fk(Entity.id))
 
 entity = relationship(Entity)
 valid_attributes = relationship(Attribute, 
 primaryjoin=entity_id==Attribute.entity_id)
 
 
 class Value(Base):
 id = Column(int, pk)
 attribute_id = Column(fk(Attribute.id))
 discriminator = Column(enum)
 
 __mapper_args__ = dict(polymorphic_on=discriminator)
 
 
 class StringValue(Value):
 id = Column(fk(Value.id))
 value = Column(string)
 
 __mapper_args__ = dict(polymorphic_identity=TypeDiscriminator.STRING)
 
 
 class BoolValue(Value):
  ...
 
 
 then, with a dozen of ifs and elses, I can translate the values given to a 
 Person instance from schematics directly to the database.
 
 so, if i want to find a string value ("foo"), from the Person model, with an 
 attribute named "name", my query would be something like (the mess below):
 
 
 res = session.query([Entity, Attribute, Instance, StringValue])
 .join(Attribute, Attribute.entity_id == Entity.id)
 .join(Instance, Instance.entity_id == Entity.id)
 .join(StringValue, [StringValue.id == Value.id, Value.attribute_id == 
 Attribute.id])
 .filter(Entity.name == 'person', Attribute.name == 'name', 
 Attribute.discriminator == TypeDiscriminator.STRING, StringValue.value == 
 'foo')
 .all()
 
 
 ok. this query seems crappy and probably won't work if i run it now (i'm 
 writing this on the fly, lol), but it can give you a better idea of my goal :)
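
 For what it's worth, a condensed runnable version of that kind of query
 (SQLite; all class and column names are illustrative, and this sketch
 also gives Value an instance_id foreign key, which the pseudo-code above
 omits but which the query needs in order to tie a value back to its
 instance):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Entity(Base):
    __tablename__ = "entity"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

class Attribute(Base):
    __tablename__ = "attribute"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    entity_id = Column(ForeignKey("entity.id"))

class Instance(Base):
    __tablename__ = "instance"
    id = Column(Integer, primary_key=True)
    entity_id = Column(ForeignKey("entity.id"))

class Value(Base):
    __tablename__ = "value"
    id = Column(Integer, primary_key=True)
    instance_id = Column(ForeignKey("instance.id"))   # assumed link
    attribute_id = Column(ForeignKey("attribute.id"))
    discriminator = Column(String)
    __mapper_args__ = {"polymorphic_on": discriminator}

class StringValue(Value):
    __tablename__ = "string_value"
    id = Column(ForeignKey("value.id"), primary_key=True)
    value = Column(String)
    __mapper_args__ = {"polymorphic_identity": "string"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = Session(engine)

person = Entity(name="person")
s.add(person)
s.flush()
name_attr = Attribute(name="name", entity_id=person.id)
inst = Instance(entity_id=person.id)
s.add_all([name_attr, inst])
s.flush()
s.add(StringValue(instance_id=inst.id, attribute_id=name_attr.id,
                  value="foo"))
s.commit()

# instances of entity "person" whose "name" attribute holds "foo"
matches = (
    s.query(Instance)
    .join(Entity, Instance.entity_id == Entity.id)
    .join(StringValue, StringValue.instance_id == Instance.id)
    .join(Attribute, StringValue.attribute_id == Attribute.id)
    .filter(
        Entity.name == "person",
        Attribute.name == "name",
        StringValue.value == "foo",
    )
    .all()
)
result = len(matches)
```

 Joining to the StringValue subclass directly lets the polymorphic
 mapping supply the value JOIN string_value part of the query.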
 
 now, instead of making all these queries (i don't have a problem writing them 
 because i'm writing the system), i would like to create something easier to 
 develop (since this project is the first python project some of them will 
 work on), so imagine using sqlalchemy at this level :)
 
 thanks a lot for your help.
 
 
 my best regards,
 richard.
 
 
 
 
 On 12/05/2013 01:52 PM, Michael Bayer wrote:
 
 On Dec 5, 2013, at 6:16 AM, Richard Gerd Kuesters rich...@humantech.com.br 
 wrote:
 
 ok, let's see if I can explain the scenario better :D
 
 first, i'm using schematics http://schematics.readthedocs.org/ to define 
 arbitrary models for message exchange between processes, ui, events and 
 so on. based on a json schema (that the schematics model generates), i'm 
 able to store data on the 

Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Christian Lang
Thanks for the quick reply. Getting closer...

I changed the code to

idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc, 
type_=postgresql.ARRAY(String())))[1].compile(dialect=postgresql.dialect()), 
TEXT))
idx.create(engine)

and the first line is now OK. But the second line (create) gives this error:

  File "xmltests.py", line 148, in <module>
    idx.create(engine)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/schema.py", line 2519, in 
create
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1479, 
in _run_visitor
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1122, 
in _run_visitor
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 122, 
in traverse_single
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/ddl.py", line 105, 
in visit_index
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 662, 
in execute
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 714, 
in _execute_ddl
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 
1920, in compile
  File "build/bdist.linux-x86_64/egg/sqlalchemy/schema.py", line 2954, in 
_compiler
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py", line 
787, in __init__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py", line 
806, in process
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 80, 
in _compiler_dispatch
  File 
"build/bdist.linux-x86_64/egg/sqlalchemy/dialects/postgresql/base.py", line 
1086, in visit_create_index
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 
2022, in _verify_index_table
sqlalchemy.exc.CompileError: Index 'doc_idx' is not associated with any 
table.

Which seems strange since xmlTable is referenced in the index functional 
expression and it was defined earlier as:

xmlTable = Table("xmltab", metadata,
    Column("document_id", Integer, primary_key=True),
    Column("doc", XML) 
   )

(where XML is a UserDefinedType)

Did this table reference get lost? Or is something missing in my index 
definition?
The table gets created properly in PostgreSQL with XML column btw.

On Thursday, December 5, 2013 12:55:12 PM UTC-8, Michael Bayer wrote:

 I think if you give your func.xpath a datatype of pg.ARRAY(of whatever) 
 you’ll get it:

 func.xpath(…, type_=postgresql.ARRAY)[1]


  from sqlalchemy import func, String
  from sqlalchemy.dialects import postgresql
  print func.xpath('something', 'somethingelse', 
 type_=postgresql.ARRAY(String()))[1].compile(dialect=postgresql.dialect())
 xpath(%(xpath_1)s, %(xpath_2)s)[%(xpath_3)s]
  



 On Dec 5, 2013, at 3:33 PM, Christian Lang christia...@gmail.com wrote:

 Hi,

 I'm trying to create the following index (on PostgreSQL) with SQLAlchemy 
 0.8:

 CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT));

 (where doc is a column of type XML)

 I got this far:

 Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))

 but get the error:
 Traceback (most recent call last):
   File xmltests.py, line 146, in module
 idx = Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], 
 TEXT))
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 
 320, in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2311, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 
 320, in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 1994, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2148, in _unsupported_impl
 NotImplementedError: Operator 'getitem' is not supported on this expression

 It seems getitem should be allowed since the xpath expression returns an 
 array of nodes (and it is fine in PostgreSQL).
 Any idea what I am doing wrong and how to fix it?

 Thanks,
 Christian








Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Michael Bayer
OK well the compile(dialect=…) was just to illustrate the string form, we don’t 
put that in the Index, so that way the expression still provides access to the 
column, which it needs to search for in order to get at the table:

xmlTable = Table('xmltable', m, Column('doc', TEXT))
idx = Index("doc_idx",
cast((func.xpath('//@bla', xmlTable.c.doc, 
type_=postgresql.ARRAY(String())))[1], TEXT))

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
idx.create(e)

the SQL itself still fails on PG (not familiar with the xpath function) but it 
renders:

CREATE INDEX doc_idx ON xmltable (CAST(xpath('//@bla', doc)[1] AS TEXT))





On Dec 5, 2013, at 4:47 PM, Christian Lang christian.a.l...@gmail.com wrote:

 Thanks for the quick reply. Getting closer...
 
 I changed the code to
 
 idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc, 
 type_=postgresql.ARRAY(String())))[1].compile(dialect=postgresql.dialect()), 
 TEXT))
 idx.create(engine)
 
 and the first line is now OK. But the second line (create) gives this error:
 
   File xmltests.py, line 148, in module
 idx.create(engine)
   File build/bdist.linux-x86_64/egg/sqlalchemy/schema.py, line 2519, in 
 create
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 1479, 
 in _run_visitor
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 1122, 
 in _run_visitor
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py, line 122, 
 in traverse_single
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/ddl.py, line 105, in 
 visit_index
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 662, in 
 execute
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 714, in 
 _execute_ddl
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 1920, in compile
   File build/bdist.linux-x86_64/egg/sqlalchemy/schema.py, line 2954, in 
 _compiler
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py, line 
 787, in __init__
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py, line 
 806, in process
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py, line 80, in 
 _compiler_dispatch
   File build/bdist.linux-x86_64/egg/sqlalchemy/dialects/postgresql/base.py, 
 line 1086, in visit_create_index
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py, line 2022, 
 in _verify_index_table
 sqlalchemy.exc.CompileError: Index 'doc_idx' is not associated with any table.
 
 Which seems strange since xmlTable is referenced in the index functional 
 expression and it was defined earlier as:
 
 xmlTable = Table("xmltab", metadata,
   Column("document_id", Integer, primary_key=True),
   Column("doc", XML)  
)
 
 (where XML is a UserDefinedType)
 
 Did this table reference get lost? Or is something missing in my index 
 definition?
 The table gets created properly in PostgreSQL with XML column btw.
 
 On Thursday, December 5, 2013 12:55:12 PM UTC-8, Michael Bayer wrote:
 I think if you give your func.xpath a datatype of pg.ARRAY(of whatever) 
 you’ll get it:
 
 func.xpath(…, type_=postgresql.ARRAY)[1]
 
 
  from sqlalchemy import func, String
  from sqlalchemy.dialects import postgresql
  print func.xpath('something', 'somethingelse', 
  type_=postgresql.ARRAY(String()))[1].compile(dialect=postgresql.dialect())
 xpath(%(xpath_1)s, %(xpath_2)s)[%(xpath_3)s]
  
 
 
 
 On Dec 5, 2013, at 3:33 PM, Christian Lang christia...@gmail.com wrote:
 
 Hi,
 
 I'm trying to create the following index (on PostgreSQL) with SQLAlchemy 0.8:
 
 CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT));
 
 (where doc is a column of type XML)
 
 I got this far:
 
 Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))
 
 but get the error:
 Traceback (most recent call last):
   File xmltests.py, line 146, in module
 idx = Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], 
 TEXT))
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 320, 
 in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2311, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py, line 320, 
 in __getitem__
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 1994, in operate
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 2148, in _unsupported_impl
 NotImplementedError: Operator 'getitem' is not supported on this expression
 
 It seems getitem should be allowed since the xpath expression returns an 
 array of nodes (and it is fine in PostgreSQL).
 Any idea what I am doing wrong and how to fix it?
 
 Thanks,
 Christian
 
 

Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Christian Lang
I see, thanks for clarifying.

I think it fails in PG because of a missing pair of parentheses. SA 
generates:

CREATE INDEX doc_idx ON xmltab (CAST(xpath('//@value_dbl', doc)[1] AS TEXT))

but it should be:

CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@value_dbl', doc))[1] AS 
TEXT))

Subtle but seems to matter to PG... is there a way to enforce an extra pair 
of () ?



On Thursday, December 5, 2013 2:08:37 PM UTC-8, Michael Bayer wrote:

 OK well the compile(dialect=…) was just to illustrate the string form, we 
 don’t put that in the Index, so that way the expression still provides 
 access to the column, which it needs to search for in order to get at the 
 table:

 xmlTable = Table('xmltable', m, Column('doc', TEXT))
 idx = Index("doc_idx",
 cast((func.xpath('//@bla', xmlTable.c.doc, 
 type_=postgresql.ARRAY(String())))[1], TEXT))

 e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 idx.create(e)

 the SQL itself still fails on PG (not familiar with the xpath function) 
 but it renders:

 CREATE INDEX doc_idx ON xmltable (CAST(xpath('//@bla', doc)[1] AS TEXT))





 On Dec 5, 2013, at 4:47 PM, Christian Lang christia...@gmail.com wrote:

 Thanks for the quick reply. Getting closer...

 I changed the code to

 idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc, 
 type_=postgresql.ARRAY(String())))[1].compile(dialect=postgresql.dialect()), 
 TEXT))
 idx.create(engine)

 and the first line is now OK. But the second line (create) gives this 
 error:

   File xmltests.py, line 148, in module
 idx.create(engine)
   File build/bdist.linux-x86_64/egg/sqlalchemy/schema.py, line 2519, in 
 create
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 
 1479, in _run_visitor
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 
 1122, in _run_visitor
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py, line 
 122, in traverse_single
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/ddl.py, line 105, 
 in visit_index
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 662, 
 in execute
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py, line 714, 
 in _execute_ddl
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py, line 
 1920, in compile
   File build/bdist.linux-x86_64/egg/sqlalchemy/schema.py, line 2954, in 
 _compiler
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py, 
 line 787, in __init__
   File build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py, 
 line 806, in process
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py, line 80, 
 in _compiler_dispatch
   File 
 build/bdist.linux-x86_64/egg/sqlalchemy/dialects/postgresql/base.py, line 
 1086, in visit_create_index
   File build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py, line 
 2022, in _verify_index_table
 sqlalchemy.exc.CompileError: Index 'doc_idx' is not associated with any 
 table.

 Which seems strange since xmlTable is referenced in the index functional 
 expression and it was defined earlier as:

 xmlTable = Table("xmltab", metadata,
 Column("document_id", Integer, primary_key=True),
 Column("doc", XML) 
)

 (where XML is a UserDefinedType)

 Did this table reference get lost? Or is something missing in my index 
 definition?
 The table gets created properly in PostgreSQL with XML column btw.

 On Thursday, December 5, 2013 12:55:12 PM UTC-8, Michael Bayer wrote:

 I think if you give your func.xpath a datatype of pg.ARRAY(of whatever) 
 you’ll get it:

 func.xpath(…, type_=postgresql.ARRAY)[1]


  from sqlalchemy import func, String
  from sqlalchemy.dialects import postgresql
  print func.xpath('something', 'somethingelse', 
 type_=postgresql.ARRAY(String()))[1].compile(dialect=postgresql.dialect())
 xpath(%(xpath_1)s, %(xpath_2)s)[%(xpath_3)s]
  



 On Dec 5, 2013, at 3:33 PM, Christian Lang christia...@gmail.com wrote:

 Hi,

 I'm trying to create the following index (on PostgreSQL) with SQLAlchemy 
 0.8:

 CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT));

 (where doc is a column of type XML)

 I got this far:

 Index(doc_idx, cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))

 but get the error:
 Traceback (most recent call last):
   File "xmltests.py", line 146, in <module>
     idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc))[1], TEXT))
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py", line 320, in __getitem__
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 2311, in operate
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/operators.py", line 320, in __getitem__
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 1994, in operate
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 2148, in _unsupported_impl
 NotImplementedError: Operator 'getitem' is not supported on this expression

Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Michael Bayer
parens can be forced using Grouping

from sqlalchemy.sql.expression import Grouping

idx = Index("doc_idx",
cast(
Grouping(func.xpath(
'//@bla',
xmlTable.c.doc,
type_=postgresql.ARRAY(String())
))[1],
TEXT)
)
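
[Editorial note] As a quick check (an editorial sketch against a recent SQLAlchemy; the `TEXT` column is a stand-in for the custom XML type), compiling the `Grouping`-wrapped expression shows the parenthesized form PostgreSQL needs:

```python
from sqlalchemy import Column, MetaData, String, TEXT, Table, cast, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.sql.expression import Grouping

metadata = MetaData()
xml_table = Table('xmltab', metadata, Column('doc', TEXT))

# Grouping renders an explicit pair of parentheses around the function
# call, so the array subscript applies to the parenthesized result.
expr = cast(
    Grouping(func.xpath('//@bla', xml_table.c.doc,
                        type_=postgresql.ARRAY(String())))[1],
    TEXT)
print(expr.compile(dialect=postgresql.dialect()))
# of the form: CAST((xpath(...))[...] AS TEXT)
```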



On Dec 5, 2013, at 5:20 PM, Christian Lang christian.a.l...@gmail.com wrote:

 I see, thanks for clarifying.
 
 I think it fails in PG because of a missing pair of parentheses. SA generates:
 
 CREATE INDEX doc_idx ON xmltab (CAST(xpath('//@value_dbl', doc)[1] AS TEXT))
 
 but it should be:
 
 CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@value_dbl', doc))[1] AS TEXT))
 
 Subtle but seems to matter to PG... is there a way to enforce an extra pair 
 of () ?
 
 
 
 On Thursday, December 5, 2013 2:08:37 PM UTC-8, Michael Bayer wrote:
 OK well the compile(dialect=…) was just to illustrate the string form, we 
 don’t put that in the Index, so that way the expression still provides access 
 to the column, which it needs to search for in order to get at the table:
 
 xmlTable = Table('xmltable', m, Column('doc', TEXT))
 idx = Index("doc_idx",
     cast((func.xpath('//@bla', xmlTable.c.doc,
         type_=postgresql.ARRAY(String())))[1], TEXT))
 
 e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
 idx.create(e)
 
 the SQL itself still fails on PG (not familiar with the xpath function) but 
 it renders:
 
 CREATE INDEX doc_idx ON xmltable (CAST(xpath('//@bla', doc)[1] AS TEXT))
 
 
 
 
 
 On Dec 5, 2013, at 4:47 PM, Christian Lang christia...@gmail.com wrote:
 
 Thanks for the quick reply. Getting closer...
 
 I changed the code to
 
 idx = Index("doc_idx", cast((func.xpath('//@bla', xmlTable.c.doc,
     type_=postgresql.ARRAY(String())))[1].compile(dialect=postgresql.dialect()),
     TEXT))
 idx.create(engine)
 
 and the first line is now OK. But the second line (create) gives this error:
 
   File "xmltests.py", line 148, in <module>
     idx.create(engine)
   File "build/bdist.linux-x86_64/egg/sqlalchemy/schema.py", line 2519, in create
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1479, in _run_visitor
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1122, in _run_visitor
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 122, in traverse_single
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/ddl.py", line 105, in visit_index
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 662, in execute
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 714, in _execute_ddl
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 1920, in compile
   File "build/bdist.linux-x86_64/egg/sqlalchemy/schema.py", line 2954, in _compiler
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py", line 787, in __init__
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py", line 806, in process
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 80, in _compiler_dispatch
   File "build/bdist.linux-x86_64/egg/sqlalchemy/dialects/postgresql/base.py", line 1086, in visit_create_index
   File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 2022, in _verify_index_table
 sqlalchemy.exc.CompileError: Index 'doc_idx' is not associated with any table.
 
 Which seems strange since xmlTable is referenced in the index functional 
 expression and it was defined earlier as:
 
 xmlTable = Table("xmltab", metadata,
     Column("document_id", Integer, primary_key=True),
     Column("doc", XML)
 )
 
 (where XML is a UserDefinedType)
 
 Did this table reference get lost? Or is something missing in my index 
 definition?
 The table gets created properly in PostgreSQL with XML column btw.
 

Re: [sqlalchemy] creating a functional index for XML

2013-12-05 Thread Christian Lang
Wonderful, that did it! Thanks so much.
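
[Editorial note] For readers landing here, the resolved pattern from this thread, consolidated into one snippet (an editorial sketch against a recent SQLAlchemy; `TEXT` stands in for the custom `XML` UserDefinedType):

```python
from sqlalchemy import (Column, Index, Integer, MetaData, String, TEXT,
                        Table, cast, func)
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex
from sqlalchemy.sql.expression import Grouping

metadata = MetaData()
xml_table = Table('xmltab', metadata,
                  Column('document_id', Integer, primary_key=True),
                  Column('doc', TEXT))  # stand-in for the XML UserDefinedType

# type_=ARRAY(...) enables the [1] subscript; Grouping forces the
# parentheses PostgreSQL requires around the function call.
idx = Index('doc_idx',
            cast(Grouping(func.xpath('//@bla', xml_table.c.doc,
                                     type_=postgresql.ARRAY(String())))[1],
                 TEXT))

print(CreateIndex(idx).compile(dialect=postgresql.dialect()))
# roughly: CREATE INDEX doc_idx ON xmltab (CAST((xpath('//@bla', doc))[1] AS TEXT))
```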
