Re: [sqlalchemy] generate top-level python code for an alembic revision from render_item

2020-05-06 Thread Adrian
Works great, thanks!



[sqlalchemy] generate top-level python code for an alembic revision from render_item

2020-05-05 Thread Adrian
I have a custom type implementing enums (no idea if there's something
better now, but it's used in many places so
replacing it is not an option atm). Currently I'm using render_item to
simply import the type and the enum and pass the
enum to the type, and that works fine.
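For reference, a minimal sketch of such a render_item hook, following the
hook signature documented by alembic; the type name PyIntEnum, the import
paths and the `.enum` attribute are made-up stand-ins for my actual code:

def render_item(type_, obj, autogen_context):
    # PyIntEnum and its `.enum` attribute are hypothetical stand-ins
    if type_ == 'type' and isinstance(obj, PyIntEnum):
        enum_cls = obj.enum
        autogen_context.imports.add('from myapp.db.types import PyIntEnum')
        autogen_context.imports.add(
            'from myapp.models import {}'.format(enum_cls.__name__))
        return 'PyIntEnum({})'.format(enum_cls.__name__)
    # anything else: fall back to the default rendering
    return False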

However, in the alembic revision I'd prefer to define a copy of the enum
as it looked at generation time, since the enum may change,
and in that case running an older revision's upgrade would be broken,
as it would generate the column based on
a different enum.

In any case, I would like to have the enum definition inside my revision
file, either top-level or inside `upgrade`. But I haven't found a proper
way to do either of these, as there doesn't seem to
be any way to provide code to be included in
`upgrade`, or variables for the mako template context used to render the
revision file.

I managed to do it using this **incredibly ugly** workaround, but I really
hope there's some better way... so if there is one, please
let me know, because not adding this abomination to my codebase would be
great:
https://gist.github.com/ThiefMaster/3ad2a35b531ff5cada7d2232333619fa

Thanks for any help/suggestions!



[sqlalchemy] Re: Custom (more restrictive) primaryjoin and deletion cascades

2017-04-19 Thread Adrian
Here's an MCVE-style example showing the problem I have:

from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import *

Base = declarative_base()


class Type(Base):
    __tablename__ = 'types'
    id = Column(Integer, primary_key=True)

    def __repr__(self):
        return '<Type {}>'.format(self.id)


class Bar(Base):
    __tablename__ = 'bars'
    id = Column(Integer, primary_key=True)
    deleted = Column(Boolean, nullable=False, default=False)
    type_id = Column(ForeignKey('types.id'), nullable=True)
    type = relationship(Type, backref=backref(
        'bars', primaryjoin='(Bar.type_id == Type.id) & ~Bar.deleted'))

    def __repr__(self):
        return '<Bar {} type={} deleted={}>'.format(
            self.id, self.type, self.deleted)


e = create_engine('postgresql:///test', echo=False)
Base.metadata.create_all(e)
s = Session(e, autoflush=False)

t = Type()
b1 = Bar(type=t)
b2 = Bar(type=t, deleted=True)
s.add(t)
s.commit()
s.delete(t)
s.flush()



So basically, when I'm using the relationship in my code I never want
deleted items to show up. However, for cascading I still need them.
Using server-side cascades would work, but if there's a way of doing this
without having to switch to server-side cascades it'd be nicer.
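For illustration, a sketch of what the server-side variant could look like
(assuming PostgreSQL; `ON DELETE SET NULL` plus `passive_deletes` so the
ORM leaves the rows alone):

class Bar(Base):
    __tablename__ = 'bars'
    id = Column(Integer, primary_key=True)
    deleted = Column(Boolean, nullable=False, default=False)
    # the database clears the FK itself on delete
    type_id = Column(ForeignKey('types.id', ondelete='SET NULL'),
                     nullable=True)
    type = relationship(Type, backref=backref(
        'bars',
        primaryjoin='(Bar.type_id == Type.id) & ~Bar.deleted',
        # 'all': never load related rows on delete; the database handles
        # them, including the soft-deleted ones the primaryjoin hides
        passive_deletes='all'))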

BTW, the example doesn't work with SQLite; apparently it automatically NULLs
invalid FKs even without specifying `ON DELETE SET NULL` on the FK.



[sqlalchemy] Custom (more restrictive) primaryjoin and deletion cascades

2017-04-19 Thread Adrian
I have this relationship which adds a `ContributionType.proposed_abstracts` 
backref that contains only abstracts not flagged as deleted.

contrib_type = relationship(
    'ContributionType', lazy=True, foreign_keys=contrib_type_id,
    backref=backref(
        'proposed_abstracts',
        primaryjoin='(Abstract.contrib_type_id == ContributionType.id) & ~Abstract.is_deleted',
        lazy=True)
)

This works perfectly fine, but unfortunately a
`session.delete(some_contribution_type)` now does not NULL out the
contrib_type_id of an abstract that has been flagged as deleted.

Is there any way to use different join criteria for deletion cascades and 
for just accessing the relationship? Or do I need to hook into the 
before_delete
event for this?
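In case the event route is the answer, here's a rough sketch of what I have
in mind with before_delete (the exact criteria are guesses on my part):

from sqlalchemy import event

@event.listens_for(ContributionType, 'before_delete')
def _null_deleted_abstracts(mapper, connection, target):
    # the relationship cascade only sees non-deleted abstracts, so NULL
    # out the FK of the soft-deleted ones manually
    connection.execute(
        Abstract.__table__.update()
        .where((Abstract.contrib_type_id == target.id) &
               Abstract.is_deleted)
        .values(contrib_type_id=None))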



Re: [sqlalchemy] Shortcut for session.query(somequery.exists()).scalar()

2016-12-01 Thread Adrian
`from_self().exists()` seems to produce an unnecessarily complex query 
(still containing all the original columns)



Re: [sqlalchemy] Shortcut for session.query(somequery.exists()).scalar()

2016-12-01 Thread Adrian
Would you be interested in a PR adding `Query.row_exists()` or even 
`Query.row_exists(disable_eagerloads=True)` which would also disable 
eagerloads by default?



Re: [sqlalchemy] Shortcut for session.query(somequery.exists()).scalar()

2016-12-01 Thread Adrian
> I would normally do session.query(Foo).count()

COUNT is somewhat expensive compared to just checking whether rows exist, 
especially if lots of rows match (2.2M rows in the example):

In [2]: %timeit -n1 -r1 EventPerson.query.count()
1 loop, best of 1: 135 ms per loop

In [3]: %timeit -n1 -r1 db.session.query(EventPerson.query.exists()).scalar()
1 loop, best of 1: 2.44 ms per loop


> how does this interact with filtering etc?
http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.exists
 
:P

It keeps all the filters present on the query object (in that example, 
`self` is the existing query object)

> this query is against column expressions, not entities, so eager loading
> is not involved.
That's what I first thought, but the related OUTER JOINs are still added. 
Maybe a bug?
https://gist.github.com/ThiefMaster/00f812a5be0ce321c91241de297dbdd0



[sqlalchemy] Shortcut for session.query(somequery.exists()).scalar()

2016-12-01 Thread Adrian
Is there any shorter/prettier way for this?

session.query(session.query(Foo).exists()).scalar()

It's not hard to add a custom method to the base query class that returns

self.session.query(self.exists()).scalar()

but it feels like something that should be part of SQLAlchemy.
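For reference, this is roughly what I mean by a custom query class (a
sketch; `engine` is assumed to exist, and it already folds in the
eager-load question below):

from sqlalchemy.orm import Query, sessionmaker

class BaseQuery(Query):
    def row_exists(self):
        # disabling eager loads shouldn't change the boolean result,
        # it just keeps the EXISTS subquery small
        return self.session.query(
            self.enable_eagerloads(False).exists()).scalar()

Session = sessionmaker(bind=engine, query_cls=BaseQuery)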

Also, is there any case where `.enable_eagerloads(False)` right before 
`.exists()` could cause problems / different results?
If not, wouldn't it make sense for exists() to do that automatically? 
Loading a relationship doesn't ever seem to make sense
when checking exists().



Re: [sqlalchemy] "CompileError: Multiple, unrelated CTEs found with the same name" when using column property

2016-06-10 Thread Adrian
Unless there'll be a release fixing this soon-ish: is there any workaround
that doesn't require patching sqlalchemy to avoid the issue?
Otherwise I'd probably go for a hack like this:

@contextmanager
def dirty_hack():
    orig = sqlalchemy.orm.properties._orm_full_deannotate
    sqlalchemy.orm.properties._orm_full_deannotate = lambda x: x
    try:
        yield
    finally:
        sqlalchemy.orm.properties._orm_full_deannotate = orig

and then wrap the creation of the column property in this contextmanager. 
But I don't know whether skipping the deannotate step might cause other 
issues...



Re: [sqlalchemy] "CompileError: Multiple, unrelated CTEs found with the same name" when using column property

2016-06-09 Thread Adrian
Yes, works fine with this change.

On Thursday, June 9, 2016 at 4:37:31 PM UTC+2, Mike Bayer wrote:
>
>
> if you can confirm the query is correct with this patch: 
>



Re: [sqlalchemy] "CompileError: Multiple, unrelated CTEs found with the same name" when using column property

2016-06-09 Thread Adrian
I've already tried not specifying a name - in that case it's `anon_2` in 
the error.

Here's an MCVE: 
https://gist.github.com/ThiefMaster/593e5a78f08d6323eb1b88270256baa7



[sqlalchemy] "CompileError: Multiple, unrelated CTEs found with the same name" when using column property

2016-06-09 Thread Adrian
I'm trying to add a `deep_children_count` column property to one of my
models.
As a regular property it works perfectly fine, but I'd like to make it a
column property so I don't have to spam extra queries when I need the
counts for multiple objects.

So I tried this:

cat_alias = db.aliased(Category)
cte_query = (select([cat_alias.id,
                     db.cast(array([]), ARRAY(db.Integer)).label('parents')])
             .where(cat_alias.parent_id.is_(None) & ~cat_alias.is_deleted)
             .cte('chains', recursive=True))
parent_query = (select([cat_alias.id,
                        cte_query.c.parents.op('||')(cat_alias.parent_id)])
                .where((cat_alias.parent_id == cte_query.c.id) &
                       ~cat_alias.is_deleted))
cte_query = cte_query.union_all(parent_query)
query = select([db.func.count()]).where(
    cte_query.c.parents.contains(array([Category.id])))
Category.deep_children_count = column_property(query)


Unfortunately this fails with an exception when loading one of the objects:
CompileError: Multiple, unrelated CTEs found with the same name: u'chains'

I'm not sure why I end up with *multiple* CTEs, and since it's a compile
error I cannot look at the SQL it tried to generate either...



Re: [sqlalchemy] Eager-load a CTE via relationship or column property

2016-06-03 Thread Adrian
I think there's a misunderstanding - I don't want to manually populate the 
relationship, I want to avoid spamming queries if I get e.g. 10 categories 
and need the parent chains for all of them.

Here's a pseudo-ish example of what I'd like to do (without queries in the 
loop):

categories = Category.query.filter(Category.id.in_([1, 2, 3, 4, 5, 6])).options(...)
for category in categories:
    print category, category.parent_list



[sqlalchemy] Eager-load a CTE via relationship or column property

2016-06-03 Thread Adrian
I have a Category model that has (among other things) `id` and `parent_id`
columns, since my categories are organized in a tree.

@property
def chain_query(self):
    """Get a query object for the category chain.

    The query retrieves the root category first and then all the
    intermediate categories up to (and including) this category.
    """
    cte_query = (select([Category.id, Category.parent_id,
                         literal(0).label('level')])
                 .where(Category.id == self.id)
                 .cte('category_chain', recursive=True))
    parent_query = (select([Category.id, Category.parent_id,
                            cte_query.c.level + 1])
                    .where(Category.id == cte_query.c.parent_id))
    cte_query = cte_query.union_all(parent_query)
    return (Category.query
            .join(cte_query, Category.id == cte_query.c.id)
            .order_by(cte_query.c.level.desc()))

This works fine, but now I'd like to fetch multiple categories and get
their parent chains too, as if they were in a relationship.
Is there any way to adapt what I currently have, using either `relationship`
(probably with some magic to use the Category model but fetch from the CTE)
or `column_property`, to achieve this?



Re: [sqlalchemy] Weird SELECT when assigning to one-to-one relationship

2016-02-19 Thread Adrian
I thought it was something like that... I did actually find that it's the
backref causing it, by stepping through tons of SA code ;)
So I guess setting it to None explicitly on creation is the correct way to
avoid the SELECT?



[sqlalchemy] Weird SELECT when assigning to one-to-one relationship

2016-02-19 Thread Adrian
Check this testcase: 
https://gist.github.com/ThiefMaster/913446490d0e4c31776d

When assigning an object to the relationship attribute a SELECT is sent, but
this does not happen when explicitly setting the attribute to None before 
assigning
the object to it.

If the SELECT being issued is not a bug, is initializing the attribute with 
an explicit
`None` the proper way to avoid it?
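Spelled out, the difference looks like this (a sketch with made-up models
A and B, based on the behavior in the gist):

a1 = A()
a1.b = B()    # triggers a SELECT for the old value of the one-to-one

a2 = A()
a2.b = None   # primes the attribute; there's nothing to look up
a2.b = B()    # no SELECT this time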



Re: [sqlalchemy] PostgreSQL table inheritance with "postgresql_inherits" does not take "add_is_dependent_on" into account

2015-11-23 Thread Adrian Schreyer
Actually I just found the problem; the tables are in fact created in the
right order - the problem is that the DDL contains INHERITS ( "parent" ).
Running the code in a GUI with the inherited table name quoted gives the
same error; without the quoting it works.

On Mon, Nov 23, 2015 at 6:55 PM Mike Bayer <mike...@zzzcomputing.com> wrote:

> this is not the case, that API is still taken into account.  I can
> remove the code that does so and the test which exercises this feature
> then fails, so it is also tested.
>
> Can you please provide a full reproducing test case?  It needs to be
> succinct, single file, and runnable by me, thanks.



Re: [sqlalchemy] PostgreSQL table inheritance with "postgresql_inherits" does not take "add_is_dependent_on" into account

2015-11-23 Thread Adrian
I attached a script that reproduces the problem. It actually only happens
if the metadata contains a schema; then the table name in the INHERITS()
clause gets quoted, which causes the problem.

On Monday, November 23, 2015 at 7:13:27 PM UTC+1, Adrian wrote:
> Actually I just found the problem; the tables are in fact created in the
> right order - the problem is that the DDL contains INHERITS ( "parent" ).

from sqlalchemy import create_engine
from sqlalchemy.types import String
from sqlalchemy.engine.url import URL
from sqlalchemy import MetaData, Table, Column

url = URL(drivername="postgresql+psycopg2", username="",
          password="", host="localhost", port=5432, database="")

engine = create_engine(url, echo=True)

metadata = MetaData(bind=engine, schema="myschema")

parent = Table(
    "parent", metadata,
    Column("foo", String(4), nullable=False),
    prefixes=["UNLOGGED"])

child = Table(
    "child", metadata,
    Column("bar", String(4), nullable=False),
    postgresql_inherits=parent.fullname,
    prefixes=["UNLOGGED"])

child.add_is_dependent_on(parent)

metadata.drop_all(checkfirst=True)
metadata.create_all(checkfirst=True)


Re: [sqlalchemy] PostgreSQL table inheritance with "postgresql_inherits" does not take "add_is_dependent_on" into account

2015-11-23 Thread Adrian Schreyer
That's true now that you say it - I had actually implemented it myself
before, using a simple @compiles with CreateTable.

On Mon, Nov 23, 2015 at 9:33 PM Mike Bayer <mike...@zzzcomputing.com> wrote:

> On 11/23/2015 03:15 PM, Adrian wrote:
> > I attached a script that reproduces the problem. It actually only
> > happens if the metadata contains a schema; then the table name in the
> > INHERITS() clause gets quoted, which causes the problem.
>
> postgresql_inherits was only added in 1.0.  How can this have "worked"
> in 0.9?

[sqlalchemy] PostgreSQL table inheritance with "postgresql_inherits" does not take "add_is_dependent_on" into account

2015-11-23 Thread Adrian
Hello,

I have the following problem: I recently upgraded from the 0.9 branch to
1.0+, and now PostgreSQL table inheritance does not work properly any
longer, because the tables that inherit from the master table are sometimes
(randomly) created before the actual table they inherit from, throwing
(psycopg2.ProgrammingError)
relation "" does not exist errors. With the 0.9 branch a simple
add_is_dependent_on was enough to solve this, but it does not seem to be
taken into account anymore. Is there something that changed from 0.9 to 1.0
that needs to be done to get it to work? metadata.sorted_tables returns the
proper table order (master table first, dependencies later), but tables are
not created in that order by metadata.create_all().

Thanks,

Adrian



Re: [sqlalchemy] PostgreSQL table inheritance with "postgresql_inherits" does not take "add_is_dependent_on" into account

2015-11-23 Thread Adrian Schreyer
That works and solves it, thanks!

On Mon, Nov 23, 2015 at 9:37 PM Mike Bayer <mike...@zzzcomputing.com> wrote:

> On 11/23/2015 03:15 PM, Adrian wrote:
> > I attached a script that reproduces the problem. It actually only
> > happens if the metadata contains a schema; then the table name in the
> > INHERITS() clause gets quoted, which causes the problem.
>
> anyway, there's no direct "postgresql_inherits_schema" feature as of
> yet, so to work around that you have to artificially place the
> schema name inside of the table name using direct quote control:
>
> from sqlalchemy.sql.elements import quoted_name
>
> child = Table(
>     "child", metadata,
>     Column("bar", String(4), nullable=False),
>     postgresql_inherits=quoted_name(parent.fullname, quote=False),
>     prefixes=["UNLOGGED"])

[sqlalchemy] Storing data on an instance that's deleted when the object is expired

2015-09-25 Thread Adrian
For some methods/properties on a model it might be useful to memoize the
result.
There are some common decorators such as cached_property from werkzeug
which simply add the
value to the object's __dict__ after retrieving it the first time (thus not
calling the property's getter again),
or you might end up using custom memoization code that stores the value
somewhere on the object.

Anyway, the problem with all of those is that I'm likely to cache things
for too long when they depend
on a column/relationship value, since expiring the object won't expire my
cached data.

Is there any place where I could store custom data associated with an 
instance of a mapped object that
is cleared when the object is expired?
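One idea I'm considering (assuming the instance-level 'expire' event can be
used for this): keep all memoized values in one dict and drop it whenever
the instance is expired. A sketch with a made-up MyModel:

from sqlalchemy import event

def cached_property(fn):
    # memoize in a single per-instance dict instead of __dict__ directly
    def getter(self):
        cache = self.__dict__.setdefault('_memo', {})
        if fn.__name__ not in cache:
            cache[fn.__name__] = fn(self)
        return cache[fn.__name__]
    return property(getter)

@event.listens_for(MyModel, 'expire')
def _clear_memo(target, attrs):
    # wipe the memoized data together with the expired column values
    target.__dict__.pop('_memo', None)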



[sqlalchemy] Using a backref-generated attribute in an event listener

2015-06-29 Thread Adrian
I tried this code:

@listens_for(AttachmentFolder.all_attachments, 'append')
def _attachment_added(target, value, *unused):
    target.modified_dt = now_utc()

However, AttachmentFolder.all_attachments is a backref, so it doesn't exist
at import time (I usually
register listeners right after the definition of the model class). So I
tried using the mapper_configured
event, but backrefs still don't exist at that point.

Besides flipping the relationship/backref around (which would work only 
because I don't have
any listeners on the relationship itself yet), is there any clean way to 
bind an event listener
to a backref?
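The only workaround I can think of (not necessarily the clean way I'm
asking for) is forcing mapper configuration so the backref exists before
registering the listener:

from sqlalchemy.orm import configure_mappers

configure_mappers()  # creates all pending backrefs

@listens_for(AttachmentFolder.all_attachments, 'append')
def _attachment_added(target, value, *unused):
    target.modified_dt = now_utc()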



Re: [sqlalchemy] How to do a revision table or How to get a 1:1 and 1:n relationship between the same tables to work properly

2015-06-18 Thread Adrian
I hadn't seen that part of the documentation - doing it that way works fine 
now!
I ended up using a signal to update `revisions` automatically when setting 
`current_revision`:
https://github.com/ThiefMaster/indico/blob/f300c3b9dc8d499b4d745dee74edceff53e7ffb4/indico/modules/events/notes/models/notes.py#L159-L164
Is there any better way to do this or is that the way to go?

> I'd definitely advise doing it that way, that's the supported way to do
> a favorite id approach and is more relationally correct (e.g. not
> possible to have multiple favorites). It seems like you read the docs at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_persistence.html#rows-that-point-to-themselves-mutually-dependent-rows,
> so I'd give that a revisit and feel free to share the errors from that
> case.





[sqlalchemy] How to do a revision table or How to get a 1:1 and 1:n relationship between the same tables to work properly

2015-06-17 Thread Adrian
I'm trying to store old versions of (some of) the data in one of my tables.
To do so, I'm thinking about models like this (not including anything not 
relevant to the case):

class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    latest_revision = db.relationship(
        'EventNoteRevision',
        lazy=False,
        uselist=False,
        primaryjoin=lambda: ((EventNote.id == EventNoteRevision.note_id) &
                             EventNoteRevision.is_latest),
        back_populates='note'
    )
    revisions = db.relationship(
        'EventNoteRevision',
        lazy=True,
        cascade='all, delete-orphan',
        primaryjoin=lambda: EventNote.id == EventNoteRevision.note_id,
        order_by=lambda: EventNoteRevision.created_dt.desc(),
        back_populates='note'
    )


class EventNoteRevision(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    note_id = db.Column(db.Integer, db.ForeignKey('events.notes.id'),
                        nullable=False, index=True)
    is_latest = db.Column(db.Boolean, nullable=False, default=False)
    # ...and some columns for the actual data of that revision
    note = db.relationship(
        'EventNote',
        lazy=False,
        back_populates='revisions'
    )



However, it always breaks somewhere (I was trying around with some
variations in the relationship configurations)...
These are the problems I've encountered so far:

- A newly created revision assigned to `latest_revision` is flushed with a
NULL `note_id`
- Assigning a new revision to `latest_revision` (i.e. with another revision
already existing) results in the old one being DELETEd or its note_id being
NULLed out (neither should happen)


I could really use some help on how to do this properly. The model posted
above can be changed in any way. For example, I wouldn't mind having a
`latest_revision_id` column in `EventNote`,
but when I tried that (including `use_alter` and `post_update`) I also
ended up with tons of different errors, including some that showed up every
other time I started my application (it seems like
something doesn't happen in a deterministic order during mapper
configuration).

One option to avoid all these problems could be using the revision table
only for OLD data, i.e. keeping all the latest data inside `EventNote` and
only adding a new revision when something changes.
That way I would avoid having two relationships, and all the problems would
go away. I know at least one big site doing it like this (Stack Overflow),
so maybe it's not the worst option... even though
they probably had other reasons to do it like this, since they aren't using
SQLAlchemy. But after having spent half the afternoon trying to get the
two-relationship solution working, I'm really tempted
to do it like this... Especially since I wouldn't have to worry about
allowing only one `is_latest` revision per `note_id` (easy with a
conditional unique index, but needing extra code to mark the old ones
as no longer being the latest).
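For reference, the conditional unique index I mean would look roughly like
this on PostgreSQL (a sketch using the models from this post):

from sqlalchemy import Index

Index('ix_uq_note_revisions_note_id',
      EventNoteRevision.__table__.c.note_id,
      unique=True,
      postgresql_where=EventNoteRevision.__table__.c.is_latest)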



[sqlalchemy] How to conditionally create an index in a clean way?

2015-06-08 Thread Adrian
Currently I have this code, which does the job but feels extremely dirty:
https://gist.github.com/ThiefMaster/f7a7f7651245ec97a256

My `has_extension` function executes SQL to check whether the given
postgres extension is installed.
Something like DDL's execute_if would be perfect, but from what I've seen
it's not available on the Index class.
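The best alternative I can come up with is creating the index manually from
an after_create hook on the table (a sketch; `my_table` and the trigram
index DDL are placeholders, `has_extension` is the helper described above):

from sqlalchemy import DDL, event

@event.listens_for(my_table, 'after_create')
def _create_conditional_index(target, connection, **kw):
    # only create the index when the required extension is available
    if has_extension(connection, 'pg_trgm'):
        connection.execute(DDL(
            'CREATE INDEX ix_my_table_name_trgm ON my_table '
            'USING gin (name gin_trgm_ops)'))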



[sqlalchemy] Merge table args from multiple mixins

2015-05-11 Thread Adrian
Is there any clean way to get indexes, constraints, etc. from multiple
mixin classes?

I've tried adding a declared_attr for __table_args__, but apparently it
follows the normal MRO and only uses one of those methods.
On Stack Overflow I've found http://stackoverflow.com/a/23429892/298479,
but that looks pretty hack-ish.
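For reference, the approach from that answer boils down to something like
this MRO-walking mixin (a sketch; the `_table_args` attribute name is my
own, and each mixin has to create fresh constraint objects since a
constraint can only be attached to one table):

from sqlalchemy.ext.declarative import declared_attr

class MergedTableArgsMixin(object):
    @declared_attr
    def __table_args__(cls):
        # collect the pieces contributed by every class in the MRO
        args = []
        for klass in reversed(cls.__mro__):
            args.extend(klass.__dict__.get('_table_args', ()))
        return tuple(args)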

Cheers
Adrian



[sqlalchemy] Can I run code when an object is added to a certain relationship?

2015-04-30 Thread Adrian
I have a User model with an association proxy referencing the Email model,
so I can access the user's email via user.email.
Since I'm soft-deleting users and require emails of non-deleted users to
be unique, I have a unique constraint on my email table with a `WHERE NOT
is_user_deleted` clause.
In the User model I have a property that automatically sets
email.is_user_deleted when User.is_deleted is set.

However, when setting user.email = 'something' for an already-deleted user,
the association proxy only runs my creator callable UserEmail(email=v) and
never sets is_user_deleted=True.
Since the user instance is not available within the creator function of the
association proxy, I wonder if there's any way I can run code whenever
something is added to the underlying relationship (User._email),
i.e. something like this:

@on_stuff_added(User, '_email')
def do_stuff(user, email):
    email.is_user_deleted = user.is_deleted



[sqlalchemy] Re: Can I run code when an object is added to a certain relationship?

2015-04-30 Thread Adrian
Nevermind. I had to use `set` instead of `append` in the attribute event.
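Spelled out, for the archives: a uselist=False relationship fires the 'set'
attribute event, which also has a different listener signature than
'append'. A sketch:

from sqlalchemy import event

@event.listens_for(User._email, 'set')
def _email_set(target, value, oldvalue, initiator):
    if value is not None:
        value.is_user_deleted = target.is_deleted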



Re: [sqlalchemy] How can i use LIKE with an association proxy?

2015-04-28 Thread Adrian
Ugh, somehow my reply sent by email never arrived here... here's my code:
https://gist.github.com/ThiefMaster/40cd1f91e2a792150496



[sqlalchemy] How can i use LIKE with an association proxy?

2015-04-27 Thread Adrian
In my User model I have an association proxy so I can access all email
addresses of the user via User.all_emails.
For a simple exact search I simply use
.filter(User.all_emails.contains('f...@example.com')).

Is it also possible to use e.g. a LIKE match (besides manually joining the
Emails table and using Email.email.like(...))?
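For comparison, the explicit-join variant from the parentheses would look
roughly like this (a sketch; the relationship/model names follow the rest
of this thread):

users = (User.query
         .join(User._all_emails)
         .filter(UserEmail.email.like('%foo%'))
         .all())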



Re: [sqlalchemy] How can i use LIKE with an association proxy?

2015-04-27 Thread Adrian
That's the first thing I've tried. Unfortunately it doesn't work...

----> 1 User.find_all(User.all_emails.any(UserEmail.email.like('%adrian%')))

/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/ext/associationproxy.pyc in any(self, criterion, **kwargs)
    367
    368
--> 369         if self._value_is_scalar:
    370             value_expr = getattr(
    371                 self.target_class, self.value_attr).has(criterion, **kwargs)

/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.pyc in __get__(self, obj, cls)
    723         if obj is None:
    724             return self
--> 725         obj.__dict__[self.__name__] = result = self.fget(obj)
    726         return result
    727

/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/ext/associationproxy.pyc in _value_is_scalar(self)
    231     def _value_is_scalar(self):
    232         return not self._get_property().\
--> 233             mapper.get_property(self.value_attr).uselist
    234
    235     @util.memoized_property

AttributeError: 'ColumnProperty' object has no attribute 'uselist'


My relationship and association proxy are defined like this:

_all_emails = db.relationship(
'UserEmail',
lazy=True,
viewonly=True,
primaryjoin='User.id == UserEmail.user_id',
collection_class=set,
backref=db.backref('user', lazy=False)
)



On Monday, April 27, 2015 at 5:28:49 PM UTC+2, Michael Bayer wrote:

> the has() / any() operators can allow this:
>
> User.all_emails.any(Email.email.like('%foo%'))
>
> it will produce an EXISTS subquery expression, which is not as efficient
> in SQL as a regular JOIN.




[sqlalchemy] before_create receives a list of tuples in the `tables` kwarg since 1.0

2015-04-27 Thread Adrian
I just tried updating from 0.9.9 to 1.0.2 and noticed that this code is now
broken ('tuple' object has no attribute 'schema'):

def _before_create(target, connection, **kw):
    schemas = {table.schema for table in kw['tables']}
    for schema in schemas:
        CreateSchema(schema).execute_if(
            callable_=_should_create_schema).execute(connection)

listen(db.Model.metadata, 'before_create', _before_create)



Is this change intentional? I couldn't find anything about it in the 1.0 
changelog.
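In the meantime I'm using a defensive version that handles both shapes,
under the assumption that the entries are now tuples with the Table first:

def _before_create(target, connection, **kw):
    schemas = set()
    for entry in kw['tables']:
        # 0.9 passed Table objects, 1.0 apparently passes tuples
        table = entry[0] if isinstance(entry, tuple) else entry
        schemas.add(table.schema)
    for schema in schemas:
        CreateSchema(schema).execute_if(
            callable_=_should_create_schema).execute(connection)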



Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-25 Thread Adrian
Is what I'm trying to do possible, assuming I cannot add any code to the
User model?
In the future there might be plugins in my application which contain
favorites, and while plugins can add their own models, they are never
allowed to directly modify a class in the application core.



Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-25 Thread Adrian
In case it's unclear what exactly I'm trying to do, here's the version that
works fine, with the relationship defined right in the User model.
I'd like to do this exact same thing, but somehow define the relationship
outside the User model - preferably by using the normal declarative syntax
to define the association table and defining the relationship there.

# in the User model:

favorite_users = db.relationship(
    'User',
    secondary=favorite_user_table,
    primaryjoin=id == favorite_user_table.c.user_id,
    secondaryjoin=(id == favorite_user_table.c.target_id) & ~is_deleted,
    lazy=True,
    backref=db.backref('favorite_of', lazy=True),
)

# the association table:
favorite_user_table = db.Table(
    'favorite_users',
    db.metadata,
    db.Column(
        'user_id',
        db.Integer,
        db.ForeignKey('users.users.id'),
        primary_key=True,
        nullable=False,
        index=True
    ),
    db.Column(
        'target_id',
        db.Integer,
        db.ForeignKey('users.users.id'),
        primary_key=True,
        nullable=False
    ),
    schema='users'
)



Re[2]: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-25 Thread Adrian
Sure, no problem with that.
I'll add a small self-contained example for it tomorrow.

- Adrian

On 25.03.2015 14:21, Michael Bayer wrote:
> I'm trying to avoid having to write a full example for you from scratch,
> so if you could provide everything in one example, both models and where
> you want the relationships, with all the columns, we can work from there,
> thanks.



Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-24 Thread Adrian
@declared_attr
def user(cls):
    """The user owning this favorite"""
    return db.relationship(
        'User',
        lazy=False,
        foreign_keys=lambda: [cls.user_id],
        backref=db.backref(
            '_favorite_users',
            lazy=True,
            cascade='all, delete-orphan',
            primaryjoin=lambda: '(User.id == user_id) & ~target.is_deleted'
        )
    )

I've added it on the backref since that's the relationship where I want the
filter to apply.
In the end I'd like to be able to do
User.query.get(123)._favorite_users, which would get me a list of all the
favorite users (I'll be using association_proxy, but for now I need to get
the relationships themselves working) besides those users who have
is_deleted=True (on the User, not the FavoriteUser).

But no matter what I put there (tried both lambdas and strings), I always
get this error (so I can't even try to figure out the correct criteria to
use, since it fails early, during mapper configuration):

sqlalchemy.exc.ArgumentError: Column-based expression object expected for
argument 'primaryjoin'; got: '(User.id == user_id) & ~target.is_deleted',
type <type 'unicode'>


Actually, looking at this code again... it's almost a standard
many-to-many relationship, so I should probably be using secondary and
secondaryjoin somewhere. Can I define this backref-like, i.e. from within
the FavoriteUser model? That way I don't have to spread things around so
much (which would be the case if I defined the relationship in the User
model).

-- Adrian



[sqlalchemy] Delete fails using automap with joined inheritance

2014-03-27 Thread Adrian Robert
I have been using the automap extension with postgres, with an inheritance 
structure using the joined inheritance pattern. I could not figure out a 
way to have this reflected from the DB so I define the classes for this 
part of my schema explicitly, and when automap initializes, these classes 
get used (and modified) for those tables and the rest get autogenerated.

It works fine until I try to delete an instance of an inheriting class: 
then I get a circular dependency error which seems to relate to 
relationships and backreferences created for the foreign key relationship 
underlying the joined inheritance.

The attached code demonstrates the issue.  The first run generates the DB 
schema from the classes, and works, any number of times.  On the second 
run, switch the two comments for Base (in two places) to use automap.  The 
output of the first:

% ./test.py
RELATIONSHIPS: []
Run completed successfully.

% ./test.py
RELATIONSHIPS: [('employee', <RelationshipProperty at 0x10263d310; 
employee>), ('engineer_collection', <RelationshipProperty at 0x102663210; 
engineer_collection>)]
Circular dependency detected. Cycles: {DeleteState(<Engineer at 
0x1026a9d10>)} all edges: {(DeleteState(<Engineer at 0x1026a9d10>), 
DeleteState(<Engineer at 0x1026a9d10>)), 
(ProcessState(OneToManyDP(Employee.engineer_collection), <Engineer at 
0x1026a9d10>, delete=True), DeleteState(<Engineer at 0x1026a9d10>))}

Notice that the mapper in the first case shows no relationships, despite 
the foreign key created for the inheritance.  On the other hand when the 
same structure is read from the DB by automap, we see forward and back 
relationships, which I guess is somehow causing the circular dependency.

Am I doing something wrong in my attempt to use joined inheritance in 
conjunction with automap, or is this a bug or something unsupported?

#!/usr/bin/env python

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import ForeignKey

# SWITCH commenting here and below to test
Base = declarative_base()  # Run 1
#Base = automap_base(declarative_base=declarative_base())  # Run 2

class Employee(Base):
__tablename__ = 'employee'
id = Column(Integer, primary_key=True)
name = Column(String(50))
type = Column(String(50))
__mapper_args__ = {
'polymorphic_identity':'employee', 'polymorphic_on':type
}

class Engineer(Employee):
__tablename__ = 'engineer'
id = Column(Integer, ForeignKey('employee.id'), primary_key=True)
engineer_name = Column(String(30))
__mapper_args__ = {
'polymorphic_identity':'engineer',
}

engine = create_engine("postgresql://user:pw@localhost/test")
Session = sessionmaker(bind=engine)

# SWITCH commenting to test
Base.metadata.create_all(engine)  # Run 1
#Base.prepare(engine, reflect=True)  # Run 2

# Test
session = Session()
engineer = Engineer(engineer_name='Eng 1', name='Emp 1')
print("RELATIONSHIPS: " + str(engineer.__mapper__.relationships.items()))
session.add(engineer)
session.commit()

session = Session()
engineer = session.query(Engineer).all()[0]
session.delete(engineer)

try:
    session.commit()
    print("Run completed successfully")
except Exception as e:
    print("EXCEPTION: " + str(e))


[sqlalchemy] compiles() no longer working with PrimaryKeyConstraint after update from 0.8 to 0.9.2

2014-02-18 Thread Adrian

Hi guys,

After updating to 0.9.2 from 0.8 my custom compiles() are not working any 
longer. The definition looks likes this:

@compiles(PrimaryKeyConstraint, "postgresql")
def add_pg_primary_key_options(constraint, compiler, **kwargs):
    ddl = compiler.visit_primary_key_constraint(constraint, **kwargs)

    if 'postgresql_fillfactor' in constraint.kwargs:
        fillfactor = constraint.kwargs.get('postgresql_fillfactor')
        pos = ddl.index(')') + 1
        ddl = ddl[:pos] + " WITH (FILLFACTOR=%s)" % fillfactor + ddl[pos:]

    return ddl

But now I get this error: 

sqlalchemy.exc.ArgumentError: Argument 'postgresql_fillfactor' is not 
accepted by dialect 'postgresql' on behalf of <class 
'sqlalchemy.sql.schema.PrimaryKeyConstraint'>

It seems the function itself is not executed at all. Are there any changes 
in 0.9+ I am not aware of that changed the compiles() behaviour?
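
(A sketch of the registration step that 0.9's new dialect-keyword 
validation seems to want, assuming a release that already ships 
DialectKWArgs.argument_for():)

from sqlalchemy.schema import PrimaryKeyConstraint

# register the custom argument so the postgresql dialect accepts it
PrimaryKeyConstraint.argument_for('postgresql', 'fillfactor', None)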

Thanks,

Adrian



Re: [sqlalchemy] Automap and naming of relationship attributes

2014-02-06 Thread Adrian Robert
Well, using the mapper event would be nicer, but in any case I was already 
iterating over Base.classes and adding them to my own module's namespace like 
so:

globals()[cls.__name__] = cls

It works for the rest of my application being able to see the classes by 
importing the module, but apparently not for this.  I'm not really expert at 
Python class and namespace innards, but from the error message as well as the 
default str() output it seems the automap-generated classes considers 
themselves to be in the sqlalchemy.ext.automap module but are not registered in 
that namespace.

Is there a way to tell the classes to use a different namespace from an 
instrument_class handler?  (And incidentally I'm already using my own base 
class through automap_base(declarative_base(cls=...)) but that doesn't make any 
difference.)
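
(A minimal sketch of the event-based approach, with 'myapp.models' 
standing in for your own module:)

import sys

from sqlalchemy import event
from sqlalchemy.orm import mapper

@event.listens_for(mapper, 'instrument_class')
def _export_class(mapper_, cls):
    # make automap-generated classes importable (and hence picklable)
    # from a real module instead of sqlalchemy.ext.automap
    module = sys.modules['myapp.models']
    cls.__module__ = module.__name__
    setattr(module, cls.__name__, cls)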




On 2014.2.6, at 15:59, Michael Bayer mike...@zzzcomputing.com wrote:

 Python pickle can't pickle class instances where the class isn't locatable as 
 module-level imports.  As automap necessarily creates classes on the fly, 
 these classes aren't part of any module.  to have them part of a module you'd 
 want to use an event to place them in the namespace of one of your own 
 modules, or you can implement a custom `__reduce__()` method on them (see the 
 Python docs for __reduce__()).
 
 a good event to use here might be instrument_class:
 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html#sqlalchemy.orm.events.MapperEvents.instrument_class
 
 
 
 On Feb 6, 2014, at 4:32 AM, Adrian Robert adrian.b.rob...@gmail.com wrote:
 
 One other point, I was trying out the dogpile cache example and ran into 
 (after I stuck a .encode('utf-8') into the key mangler since I'm using 
 Python-3 and pylibmc):
 
 _pickle.PicklingError: Can't pickle <class 'sqlalchemy.ext.automap.Person'>: 
 attribute lookup sqlalchemy.ext.automap.Person failed
 
 This was fixed by a hack
 
   sqlalchemy.ext.automap.__dict__[cls.__name__] = cls
 
 run over all the automap-created classes.  It might be I'm only having to do 
 this because I'm doing something wrong elsewhere, but I just thought I'd 
 mention it in case it comes up for someone.
 
 
 
 On 2014.2.2, at 14:22, Adrian Robert adrian.b.rob...@gmail.com wrote:
 
 Thanks, that works beautifully.
 
 I had noticed name_for_scalar_relationship parameter but I guess wasn't 
 confident enough that I understood what was going on to try it.  :-[
 
 


Re: [sqlalchemy] Automap and naming of relationship attributes

2014-02-02 Thread Adrian Robert
Thanks, that works beautifully.

I had noticed the name_for_scalar_relationship parameter, but I guess I wasn't 
confident enough that I understood what was going on to try it.  :-[
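
(For the archives, a sketch of the kind of hook that did the trick -- the 
naming scheme here is only an example:)

def name_for_scalar_relationship(base, local_cls, referred_cls, constraint):
    # e.g. class Person -> attribute 'person_ref' instead of 'person'
    return referred_cls.__name__.lower() + '_ref'

Base.prepare(engine, reflect=True,
             name_for_scalar_relationship=name_for_scalar_relationship)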



[sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian
Hi All,

I have a few partitioned tables in my PostgreSQL database but I do not know 
yet how to make the ORM relationship() work with partition constraint-exclusion 
(http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION)
on the instance level. Constraint-exclusion does not work with joins and 
requires scalar values - the problem is that I would need to add an 
additional WHERE clause to the primaryjoin (which adds the partition key) 
if the relationship is accessed from the *instance level*, e.g. 
user.addresses. Is there a mechanism in relationship() to distinguish 
between class-based joins (User.addresses) and instance-level access?



Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
Never mind,

the problem was that I specified the clause in a secondaryjoin and not in
the primaryjoin of the relationship().


On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.com wrote:

 Hi All,

 I have a few partitioned tables in my PostgreSQL database but I do not
 know yet how to make the ORM relationship() with partition
  constraint-exclusion 
  (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) on
 the instance level. Constraint-exclusion does not work with joins and
 requires scalar values - the problem is that I would need to add an
 additional WHERE clause to the primaryjoin (which adds the partition key)
 if the relationship is accessed from the* instance level*, e.g.
 user.addresses. Is there a mechanism in relationship() to distinguish
 between class-based joins (User.addresses) and instance-level access?



Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
Given the three mappings *First*, *Second* and *Partitioned*, I want to
declare a relationship between *First* and *Partitioned*. The problem is
that *Partitioned* is partitioned by partition_key which is a column in
*First* but not in *Second*. *Second* however contains the identifier that
actually links *First* to specific rows in the partitioned table.

So far the mapping looks like this mock example:

partitioned = relationship(
    Partitioned,
    secondary=Base.metadata.tables['schema.seconds'],
    primaryjoin=and_(First.first_id == Second.first_id,
                     First.partition_key == Partitioned.partition_key),
    secondaryjoin=Second.other_id == Partitioned.other_id,
    foreign_keys=[Second.first_id, Partitioned.partition_key,
                  Partitioned.other_id],
    uselist=True, innerjoin=True, lazy='dynamic')

It works, but it only interpolates First.first_id with the actual value,
which normally makes sense; to make the PostgreSQL constraint-exclusion
work, however, First.partition_key would need to be interpolated with the
proper value as well. Right now it is only rendered as
First.partition_key==Partitioned.partition_key.

Does that make sense? I am not sure if my relationship configuration is
wrong or if this kind of mapping is simply not supported.


On Thu, Dec 5, 2013 at 3:31 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Dec 5, 2013, at 6:57 AM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 Actually that was a bit too early but I tracked the problem down to the
 many-to-many relationship. Parameters are only interpolated (e.g.
 %(param_1)s) for the primaryjoin to the secondary table. Is there a
 technique to force relationship() to interpolate a parameter between the
 1st and 3rd table instead of using only table.column=table.column?


 there’s no reason why that would be the case can you provide more
 specifics?






 On Thu, Dec 5, 2013 at 10:58 AM, Adrian Schreyer 
 adrian.schre...@gmail.com wrote:

 Never mind,

 the problem was that I specified the clause in a secondaryjoin and not in
 the primaryjoin of the relationship().


 On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.comwrote:

 Hi All,

 I have a few partitioned tables in my PostgreSQL database but I do not
 know yet how to make the ORM relationship() with partition
  constraint-exclusion 
  (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) on
 the instance level. Constraint-exclusion does not work with joins and
 requires scalar values - the problem is that I would need to add an
 additional WHERE clause to the primaryjoin (which adds the partition key)
 if the relationship is accessed from the* instance level*, e.g.
 user.addresses. Is there a mechanism in relationship() to distinguish
 between class-based joins (User.addresses) and instance-level access?



Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
 screw around with this using
 exclude_properties for those cols you don’t need to refer to on the
 mapping, and perhaps primary_key if the mapper complains, such as:

 partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
  "id": j.c.partitioned_id,
  "other_id": [j.c.partitioned_other_id, j.c.second_other_id],
 }, exclude_properties=[j.c.second_id],
 primary_key=[j.c.partitioned_id, j.c.second_other_id])

 or you can just ignore those extra attributes on some of your Partitioned
 objects.








 On Dec 5, 2013, at 11:03 AM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 Given the three mappings *First*, *Second* and *Partitioned*, I want to
 declare a relationship between *First* and *Partitioned*. The problem is
 that *Partitioned* is partitioned by partition_key which is a column in
 *First* but not in *Second*. *Second* however contains the identifier
 that actually links *First* to specific rows in the partitioned table.

 So far the mapping looks like this mock example:

 partitioned = relationship(Partitioned,
 secondary=Base.metadata.tables['schema.seconds'],
 primaryjoin=and_(First.first_id==Second.first_id, 
 First.partition_key==Partitioned.partition_key),
 secondaryjoin=Second.other_id==Partitioned.other_id,
 foreign_keys=[Second.first_id, Partitioned.partition_key, 
 Partitioned.other_id],
 uselist=True, innerjoin=True, lazy='dynamic')

 It works, but it only interpolates the First.first_id with the actual
 value which normally makes sense but to make the PostgreSQL
 constraint-exclusion work the First.partition_key would need to be
 interpolated with the proper value as well. Right now it is only given as
 First.partition_key==Partitioned.partition_key.

 Does that make sense? I am not sure if my relationship configuration is
 wrong or if this kind of mapping is simply not supported.


 On Thu, Dec 5, 2013 at 3:31 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Dec 5, 2013, at 6:57 AM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 Actually that was a bit too early but I tracked the problem down to the
 many-to-many relationship. Parameters are only interpolated (e.g.
 %(param_1)s) for the primaryjoin to the secondary table. Is there a
 technique to force relationship() to interpolate a parameter between the
 1st and 3rd table instead of using only table.column=table.column?


 there’s no reason why that would be the case can you provide more
 specifics?






 On Thu, Dec 5, 2013 at 10:58 AM, Adrian Schreyer 
 adrian.schre...@gmail.com wrote:

 Never mind,

 the problem was that I specified the clause in a secondaryjoin and not
 in the primaryjoin of the relationship().


 On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.comwrote:

 Hi All,

 I have a few partitioned tables in my PostgreSQL database but I do not
 know yet how to make the ORM relationship() with partition
  constraint-exclusion 
  (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION) on
 the instance level. Constraint-exclusion does not work with joins and
 requires scalar values - the problem is that I would need to add an
 additional WHERE clause to the primaryjoin (which adds the partition key)
 if the relationship is accessed from the* instance level*, e.g.
 user.addresses. Is there a mechanism in relationship() to distinguish
 between class-based joins (User.addresses) and instance-level access?


Re: [sqlalchemy] ORM relationships and (PostgreSQL) partition constraint-exclusion

2013-12-05 Thread Adrian Schreyer
I will try this out then, thanks for your help! I assume this works in 0.9
only?
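
(The @property workaround I mentioned below, as a sketch with the mock 
First/Second/Partitioned names:)

from sqlalchemy.orm import object_session

class First(Base):
    # ...
    @property
    def partitioned(self):
        # pass the partition key as a literal value so that PostgreSQL
        # can apply constraint-exclusion
        return object_session(self).query(Partitioned).\
            join(Second, Second.other_id == Partitioned.other_id).\
            filter(Second.first_id == self.first_id,
                   Partitioned.partition_key == self.partition_key)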


On Thu, Dec 5, 2013 at 7:18 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 With the example I gave, when accessing .partitioned on a First instance,
 the lazy loader will convert all columns from “First” into a bound
 parameter, it emits this:


  SELECT partitioned.other_id AS partitioned_other_id, second.other_id AS
 second_other_id, partitioned.partition_key AS partitioned_partition_key,
 second.first_id AS second_first_id
 FROM partitioned JOIN second ON partitioned.other_id = second.other_id
 WHERE ? = partitioned.partition_key AND ? = second.first_id
 2013-12-05 14:14:42,689 INFO sqlalchemy.engine.base.Engine (u'p1', 2)


 “first.partition_key” is not in the query, it’s replaced by ‘p1’ in this
 case, the value that was assigned to that First instance.There is no
 “secondary” table per se in the example I gave.




 On Dec 5, 2013, at 1:55 PM, Adrian Schreyer adrian.schre...@gmail.com
 wrote:

 The partitioned relationship actually referred to the tertiary table in
 both the primary and secondary join - the problem for me was that in the
 primaryjoin

 primaryjoin=and_(First.first_id==Second.first_id, 
 First.partition_key==Partitioned.partition_key)

 only First.first_id will be interpolated with the actual value first_id of
 the instance in question whereas First.partition_key on the other hand
  will be interpolated as a column object. The problem is that in this case
 First.partition_key has to be interpolated with the actual value to get
 the constraint-exclusion to work. In a normal many-to-many relationship
 this would not be necessary and maybe that is why it only interpolates the
 values for the join on the secondary table.

 The partitioned relationship emits a query like this if the attribute is
 accessed:

 SELECT partitioned.*
   FROM partitioned, second, first
  WHERE %(param_1)s = second.first_id
AND first.partition_key = partitioned.partition_key
AND second.other_id = partitioned.other_id

 But I would need first.partitioned_key to be %(param_2)s.

 So far I used a @property around a query function to add the partition_key
 to query.filter() manually.


 On Thu, Dec 5, 2013 at 4:37 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 oh, you want to refer to the tertiary table in both the primary and
 secondary join.so right this pattern does not correspond to the
 A-secondary-B pattern and isn’t really a classic many-to-many.

 a quick way to map these are to use non primary mappers (was going to
 just paraphrase, but let me just try it out b.c. these are fun anyway, and
 I want to see the new joining behavior we have in 0.9…):

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

  class First(Base):
 __tablename__ = 'first'

 first_id = Column(Integer, primary_key=True)
 partition_key = Column(String)

 def __repr__(self):
 return (First(%s, %s) % (self.first_id, self.partition_key))

 class Second(Base):
 __tablename__ = 'second'

 id = Column(Integer, primary_key=True)
 first_id = Column(Integer)
 other_id = Column(Integer)

 class Partitioned(Base):
 __tablename__ = 'partitioned'

 id = Column(Integer, primary_key=True)
 partition_key = Column(String)
 other_id = Column(Integer)

 def __repr__(self):
 return (Partitioned(%s, %s) % (self.partition_key,
 self.other_id))


 j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)

 partitioned_second = mapper(Partitioned, j, non_primary=True, properties={
 # note we need to disambiguate columns here - the join()
 # will provide them as j.c.tablename_colname for access,
 # but they retain their real names in the mapping
 id: j.c.partitioned_id,
 other_id: [j.c.partitioned_other_id, j.c.second_other_id],
 secondary_id: j.c.second_id
 })

 First.partitioned = relationship(
 partitioned_second,
 primaryjoin=and_(
 First.partition_key ==
 partitioned_second.c.partition_key,
 First.first_id ==
 foreign(partitioned_second.c.first_id)
 ), innerjoin=True)

 e = create_engine(postgresql://scott:tiger@localhost/test, echo=True)
 Base.metadata.drop_all(e)
 Base.metadata.create_all(e)
 s = Session(e)
 s.add_all([
 First(first_id=1, partition_key='p1'),
 First(first_id=2, partition_key='p1'),
 First(first_id=3, partition_key='p2'),
 Second(first_id=1, other_id=1),
 Second(first_id=2, other_id=1),
 Second(first_id=3, other_id=2),
 Partitioned(partition_key='p1', other_id=1),
 Partitioned(partition_key='p1', other_id=2),
 Partitioned(partition_key='p2', other_id=2),
 ])
 s.commit()

 for row in s.query(First, Partitioned).join

Re: [sqlalchemy] Problem/bug with column_property on eagerloaded polymophic table

2011-10-31 Thread Adrian Tejn Kern
Thank you, very much. 

I actually did try to use the actual Column, but I couldn't figure out how 
to resolve my interdependencies since my column_property is actually a 
subselect, and apparently I didn't test it on my test case.
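
(For the archives, a sketch of the direction that worked -- referring to 
the table column instead of the mapped attribute:)

A.anything = orm.column_property(A.__table__.c.id + 1000)

# or, for the subselect variant from the original post:
A.anything = orm.column_property(
    select([func.sum(subA.value)], subA.a_id == A.__table__.c.id))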




[sqlalchemy] Problem/bug with column_property on eagerloaded polymophic table

2011-10-30 Thread Adrian Tejn Kern
I have a column_property on a polymorphic base class. When I 
joinedload/subqueryload a derived class the column_property makes the query 
fail.

class A(Base):
    __tablename__ = "a"
    id   = Column(Integer, primary_key=True)
    type = Column(String(40), nullable=False)
    __mapper_args__ = {'polymorphic_on': type}

A.anything = orm.column_property(A.id + 1000)

class B(A):
    __tablename__ = "b"
    account_id = Column(Integer, ForeignKey('a.id'), primary_key=True)
    x_id       = Column(Integer, ForeignKey('x.id'), nullable=False)
    __mapper_args__ = {'polymorphic_identity': 'named'}

class X(Base):
    __tablename__ = "x"
    id = Column(Integer, primary_key=True)
    b = orm.relationship(B)


Calling:
print Session.query(X).options(joinedload("b"))

produces:

SELECT x.id AS x_id,
       anon_1.a_id AS anon_1_a_id,
       anon_1.a_type AS anon_1_a_type,
       a.id + %(id_1)s AS anon_2,
       anon_1.b_account_id AS anon_1_b_account_id,
       anon_1.b_x_id AS anon_1_b_x_id
FROM a,
     x LEFT OUTER JOIN
       (SELECT a.id AS a_id,
               a.type AS a_type,
               b.account_id AS b_account_id,
               b.x_id AS b_x_id
        FROM a
        JOIN b ON a.id = b.account_id) AS anon_1 ON x.id = anon_1.b_x_id


It seems that the "a.id + %(id_1)s" should be changed to "anon_1.a_id" and 
"a" removed from the FROM clause - or better, "a.id + %(id_1)s" should be 
moved into the subselect named "anon_1". This is probably what you want if 
the column_property was actually a subselect itself (which is what I'm 
actually trying to do).

Am I correct in thinking that this corner case simply isn't supported yet? 
Or is it a bug? Or am I doing something wrong?

Actually the above query doesn't fail outright, although it does create an 
unexpected join. But if the column_property instead was something like

class subA(Base):
    __tablename__ = "subA"
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('a.id'), nullable=False)
    value = Column(Integer, nullable=False)

A.anything = orm.column_property(
    select([func.sum(subA.value)], subA.a_id==A.id))

Then the sql would be:

SELECT x.id AS x_id,
       anon_1.a_id AS anon_1_a_id,
       anon_1.a_type AS anon_1_a_type,
       (SELECT sum(subA.value) AS sum_1
        FROM subA
        WHERE subA.a_id = a.id) AS anon_2,
       anon_1.b_account_id AS anon_1_b_account_id,
       anon_1.b_x_id AS anon_1_b_x_id
FROM x LEFT OUTER JOIN
     (SELECT a.id AS a_id,
             a.type AS a_type,
             b.account_id AS b_account_id,
             b.x_id AS b_x_id
      FROM a
      JOIN b ON a.id = b.account_id) AS anon_1 ON x.id = anon_1.b_x_id


Which naturally doesn't work at all, since "a.id" inside the first 
subselect doesn't refer to anything.


PS: I have no idea how this email is going to get formatted, please let me 
know if it is impossible to read.




[sqlalchemy] mapped attribute throws AttributeError: 'symbol' object has no attribute

2011-10-12 Thread Adrian
When I try to access a specific mapped attribute the error below is thrown 
but only if I use a custom __eq__(). 

def __eq__(self, other):
'''
'''
return self.aromatic_ring_id == other.aromatic_ring_id

When I print the type of both, the 'other' class is 'symbol': <class 
'credoscript.models.aromaticring.AromaticRing'> <class 
'sqlalchemy.util.langhelpers.symbol'>

Any ideas what went wrong?

---
AttributeErrorTraceback (most recent call last)

/home/adrian/<ipython console> in <module>()

/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/attributes.pyc
 
in __get__(self, instance, owner)
166 return dict_[self.key]
167 else:
--> 168 return self.impl.get(instance_state(instance),dict_)
169 
170 def create_proxied_attribute(descriptor):

/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/attributes.pyc
 
in get(self, state, dict_, passive)
422 value = ATTR_EMPTY
423 
--> 424 if value in (PASSIVE_NO_RESULT, NEVER_SET):
425 return value
426 elif value is ATTR_WAS_SET:

/home/adrian/Software/credoscript/models/aromaticring.py in __eq__(self, 
other)
 48 '''
 49 print type(self), type(other)
---> 50 return self.aromatic_ring_id == other.aromatic_ring_id
 51 
 52 def __ne__(self, other):

AttributeError: 'symbol' object has no attribute 'aromatic_ring_id'
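
(The usual guard for this, as a sketch -- attribute instrumentation can 
compare against internal symbol objects, so __eq__ has to tolerate 
arbitrary operands:)

def __eq__(self, other):
    if not isinstance(other, AromaticRing):
        # lets Python fall back to its default handling for symbols etc.
        return NotImplemented
    return self.aromatic_ring_id == other.aromatic_ring_id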




Re: [sqlalchemy] Selecting more than one entity from a subquery

2011-09-17 Thread Adrian
I created a gist with example code https://gist.github.com/1223926

query.sql shows you the basic SQL query of what I am trying to do - fetching 
the Residue as an entity and the 12 summed values from the subquery. 
orm-code.py is the orm code for the upper part of the query (the part I am 
struggling with at the moment). The tricky part for me is how to select 
the Residue entity *and* the result of the aggregates.
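
(A sketch of the shape I am aiming for, with made-up column names -- 
joining the entity against the aggregating subquery and selecting both:)

from sqlalchemy import func

sums = session.query(
    Contact.residue_id,
    func.sum(Contact.value).label('total')
).group_by(Contact.residue_id).subquery()

query = session.query(Residue, sums.c.total).\
    join(sums, Residue.residue_id == sums.c.residue_id)

for residue, total in query:
    print residue, total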




[sqlalchemy] Selecting more than one entity from a subquery

2011-09-16 Thread Adrian
I have seen that it is possible to get an entity from a subquery with the 
aliased(entity, statement) construct. Is there also a way to get more than 
one entity from a subquery, for example 2?
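
(What I mean, as a sketch with generic A/B placeholder classes:)

from sqlalchemy.orm import aliased

subq = session.query(A, B).join(B).subquery()

# both entities can be aliased against the same subquery,
# since it contains the columns of both
a_alias = aliased(A, subq)
b_alias = aliased(B, subq)

for a, b in session.query(a_alias, b_alias):
    print a, b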

Cheers

Adrian




[sqlalchemy] Re: Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-07-04 Thread Adrian
Yes, the __eq__() and __hash__() functions are overridden to compare
the primary keys (e.g. self.id==other.id, or hash(self.id)). The orm
query works as expected once I remove them from the Entity class
definition.

On Jul 1, 4:03 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 does your Entity class have some overridden __eq__(), __cmp__(), __hash__() 
 on it ?   I think there might be an issue here but I need a lot more 
 specifics.

 On Jul 1, 2011, at 6:34 AM, Adrian wrote:







  I just tested it and session.execute(query.statement) returns the
  proper resultset. The 'similarity' functions returns REAL.

  ---

  In [13]: query.all()
  ---
  TypeError                                 Traceback (most recent call
  last)

  /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
  all(self)
    1675
    1676         
  - 1677         return list(self)
    1678
    1679     @_generative(_no_clauseelement_condition)

  /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
  instances(self, cursor, _Query__context)
    1916
    1917             if filter:
  - 1918                 rows = filter(rows)
    1919
    1920             if context.refresh_state and self._only_load_props
  \

  /usr/local/lib/python2.7/dist-packages/sqlalchemy/util/
  _collections.pyc in unique_list(seq, hashfunc)
     594     if not hashfunc:
     595     return [x for x in seq
  -- 596                 if x not in seen
     597                 and not seen.__setitem__(x, True)]
     598     else:

  TypeError: an integer is required

  # the last value in the row is the similarity value
  In [14]: session.execute(query).fetchall()
  Out[14]:
  [(10581, u'STI', u'STI', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
  [4-methyl-3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide',
  None, None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4ccc(cc4)C[NH
  +]5CC[NH+](CC5)C', 495.619, 37, 29, 8, 0, 0.275862, 2, 2, 0, 41, 8, 5,
  4, 0, 0, 88.68, 2.588, 0.241379, True, False, False, False, False,
  False, False, False, False, False, 1.0),
  (8099, u'MPZ', u'MPZ', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
  [3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide', None, None,
  u'C[NH+]1CC[NH+](CC1)Cc2ccc(cc2)C(=O)Nc3(c3)Nc4nccc(n4)c5cccnc5',
  481.592, 36, 28, 8, 0, 0.285714, 2, 2, 0, 40, 8, 5, 4, 0, 0, 88.68,
  2.292, 0.214286, True, False, False, False, False, False, False,
  False, False, False, 0.811594202898551),
  (9593, u'PRC', u'PRC', None, u'N-[4-methyl-3-[[4-(3-
  pyridyl)pyrimidin-2-yl]amino]phenyl]pyridine-3-carboxamide', None,
  None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4cccnc4', 382.418, 29,
  22, 7, 0, 0.318182, 0, 0, 0, 32, 6, 4, 4, 0, 0, 92.69, 2.143,
  0.0454545, True, False, False, False, False, False, False, False,
  False, False, 0.691176470588235),
  (5653, u'G6G', u'G6G', None, u'N-[3-[[3-[4-(4-methoxyanilino)-1,3,5-
  triazin-2-yl]-2-pyridyl]amino]-4-methyl-phenyl]-4-[(4-
  methylpiperazin-1-yl)methyl]benzamide', None, None,
  u'Cc1ccc(cc1Nc2c(cccn2)c3ncnc(n3)Nc4ccc(cc4)OC)NC(=O)c5ccc(cc5)C[NH
  +]6CC[NH+](CC6)C', 617.743, 46, 35, 11, 0, 0.314286, 2, 2, 0, 51, 11,
  6, 5, 0, 0, 122.83, 3.668, 0.228571, True, False, False, False, False,
  False, False, False, False, False, 0.619047619047619),
  (1153, u'406', u'406', None, u'4-[[(1R,3R)-3-
  (dimethylamino)pyrrolidin-1-yl]methyl]-N-[4-methyl-3-[(4-pyrimidin-5-
  ylpyrimidin-2-yl)amino]phenyl]-3-(trifluoromethyl)benzamide', None,
  None, u'Cc1ccc(cc1Nc2nccc(n2)c3cncnc3)NC(=O)c4ccc(c(c4)C(F)(F)F)C[N@@H
  +]5CC[C@@H](C5)[NH+](C)C', 578.631, 42, 30, 12, 3, 0.4, 2, 2, 0, 46,
  10, 5, 4, 0, 0, 101.57, 3.187, 0.3, True, False, False, False, False,
  False, False, False, False, False, 0.526881720430108),
  (8552, u'NIL', u'NIL', None, u'4-methyl-N-[3-(4-methylimidazol-1-
  yl)-5-(trifluoromethyl)phenyl]-3-[[4-(3-pyridyl)pyrimidin-2-
  yl]amino]benzamide', None, None,
  u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)C(=O)Nc4cc(cc(c4)n5cc(nc5)C)C(F)(F)F',
  529.516, 39, 28, 11, 3, 0.392857, 0, 0, 0, 43, 8, 5, 5, 0, 0, 97.62,
  3.771, 0.107143, True, False, False, False, False, False, False,
  False, False, False, 0.50561797752809)]

  On Jun 30, 3:27 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Jun 30, 2011, at 9:23 AM, Adrian wrote:

  SQAlchemy 0.7.1 / pyscopg 2.2.1 / PostgreSQL 9.0
  ---

  I have a weird problem with orm queries that contain custom functions,
  in this case from postgres contrib modules. When I do a query like
  this

  session.query(Entity, func.similarity(Entity.string,
  'querystring')).all() # postgres pg_trgm extension

  I will get the error below. However, when I specify one or all the
  columns of the Entity individually it works. It also works if the
  function is in the .order_by() clause.

  Any ideas where the problem could come from?

  this seems like it has to do

[sqlalchemy] Re: Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-07-04 Thread Adrian
I found the problem now - the __hash__() function I had did not return
an integer, it returned a tuple of the composite primary key. I
changed it now and it works, thanks for your help!
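
(Concretely, for a composite primary key -- a sketch with placeholder 
column names:)

def __eq__(self, other):
    return (self.key_a, self.key_b) == (other.key_a, other.key_b)

def __hash__(self):
    # __hash__ must return an integer - hash the key tuple
    # instead of returning it
    return hash((self.key_a, self.key_b))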

On Jul 4, 8:50 am, Adrian adr...@schreyer.me wrote:
 Yes, the __eq__() and __hash__() functions are overridden to compare
 the primary keys (e.g. self.id==other.id, or hash(self.id)). The orm
 query works as expected once I remove them from the Entity class
 definition.

 On Jul 1, 4:03 pm, Michael Bayer mike...@zzzcomputing.com wrote:







  does your Entity class have some overridden __eq__(), __cmp__(), __hash__() 
  on it ?   I think there might be an issue here but I need a lot more 
  specifics.

  On Jul 1, 2011, at 6:34 AM, Adrian wrote:

   I just tested it and session.execute(query.statement) returns the
   proper resultset. The 'similarity' functions returns REAL.

   ---

   In [13]: query.all()
   ---
   TypeError                                 Traceback (most recent call
   last)

   /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
   all(self)
     1675
     1676         
   - 1677         return list(self)
     1678
     1679     @_generative(_no_clauseelement_condition)

   /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
   instances(self, cursor, _Query__context)
     1916
     1917             if filter:
   - 1918                 rows = filter(rows)
     1919
     1920             if context.refresh_state and self._only_load_props
   \

   /usr/local/lib/python2.7/dist-packages/sqlalchemy/util/
   _collections.pyc in unique_list(seq, hashfunc)
      594     if not hashfunc:
      595     return [x for x in seq
   -- 596                 if x not in seen
      597                 and not seen.__setitem__(x, True)]
      598     else:

   TypeError: an integer is required

   # the last value in the row is the similarity value
   In [14]: session.execute(query).fetchall()
   Out[14]:
   [(10581, u'STI', u'STI', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
   [4-methyl-3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide',
   None, None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4ccc(cc4)C[NH
   +]5CC[NH+](CC5)C', 495.619, 37, 29, 8, 0, 0.275862, 2, 2, 0, 41, 8, 5,
   4, 0, 0, 88.68, 2.588, 0.241379, True, False, False, False, False,
   False, False, False, False, False, 1.0),
   (8099, u'MPZ', u'MPZ', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
   [3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide', None, None,
   u'C[NH+]1CC[NH+](CC1)Cc2ccc(cc2)C(=O)Nc3(c3)Nc4nccc(n4)c5cccnc5',
   481.592, 36, 28, 8, 0, 0.285714, 2, 2, 0, 40, 8, 5, 4, 0, 0, 88.68,
   2.292, 0.214286, True, False, False, False, False, False, False,
   False, False, False, 0.811594202898551),
   (9593, u'PRC', u'PRC', None, u'N-[4-methyl-3-[[4-(3-
   pyridyl)pyrimidin-2-yl]amino]phenyl]pyridine-3-carboxamide', None,
   None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4cccnc4', 382.418, 29,
   22, 7, 0, 0.318182, 0, 0, 0, 32, 6, 4, 4, 0, 0, 92.69, 2.143,
   0.0454545, True, False, False, False, False, False, False, False,
   False, False, 0.691176470588235),
   (5653, u'G6G', u'G6G', None, u'N-[3-[[3-[4-(4-methoxyanilino)-1,3,5-
   triazin-2-yl]-2-pyridyl]amino]-4-methyl-phenyl]-4-[(4-
   methylpiperazin-1-yl)methyl]benzamide', None, None,
   u'Cc1ccc(cc1Nc2c(cccn2)c3ncnc(n3)Nc4ccc(cc4)OC)NC(=O)c5ccc(cc5)C[NH
   +]6CC[NH+](CC6)C', 617.743, 46, 35, 11, 0, 0.314286, 2, 2, 0, 51, 11,
   6, 5, 0, 0, 122.83, 3.668, 0.228571, True, False, False, False, False,
   False, False, False, False, False, 0.619047619047619),
   (1153, u'406', u'406', None, u'4-[[(1R,3R)-3-
   (dimethylamino)pyrrolidin-1-yl]methyl]-N-[4-methyl-3-[(4-pyrimidin-5-
   ylpyrimidin-2-yl)amino]phenyl]-3-(trifluoromethyl)benzamide', None,
   None, u'Cc1ccc(cc1Nc2nccc(n2)c3cncnc3)NC(=O)c4ccc(c(c4)C(F)(F)F)C[N@@H
   +]5CC[C@@H](C5)[NH+](C)C', 578.631, 42, 30, 12, 3, 0.4, 2, 2, 0, 46,
   10, 5, 4, 0, 0, 101.57, 3.187, 0.3, True, False, False, False, False,
   False, False, False, False, False, 0.526881720430108),
   (8552, u'NIL', u'NIL', None, u'4-methyl-N-[3-(4-methylimidazol-1-
   yl)-5-(trifluoromethyl)phenyl]-3-[[4-(3-pyridyl)pyrimidin-2-
   yl]amino]benzamide', None, None,
   u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)C(=O)Nc4cc(cc(c4)n5cc(nc5)C)C(F)(F)F',
   529.516, 39, 28, 11, 3, 0.392857, 0, 0, 0, 43, 8, 5, 5, 0, 0, 97.62,
   3.771, 0.107143, True, False, False, False, False, False, False,
   False, False, False, 0.50561797752809)]

   On Jun 30, 3:27 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   On Jun 30, 2011, at 9:23 AM, Adrian wrote:

   SQAlchemy 0.7.1 / pyscopg 2.2.1 / PostgreSQL 9.0
   ---

   I have a weird problem with orm queries that contain custom functions,
   in this case from postgres contrib modules. When I do a query like
   this

   session.query(Entity, func.similarity

[sqlalchemy] Re: Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-07-01 Thread Adrian
I just tested it and session.execute(query.statement) returns the
proper resultset. The 'similarity' functions returns REAL.

---

In [13]: query.all()
---
TypeError Traceback (most recent call
last)


/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
all(self)
   1675
   1676 
-> 1677 return list(self)
   1678
   1679 @_generative(_no_clauseelement_condition)

/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
instances(self, cursor, _Query__context)
   1916
   1917 if filter:
-> 1918 rows = filter(rows)
   1919
   1920 if context.refresh_state and self._only_load_props
\

/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/
_collections.pyc in unique_list(seq, hashfunc)
594 if not hashfunc:
595 return [x for x in seq
--> 596 if x not in seen
597 and not seen.__setitem__(x, True)]
598 else:

TypeError: an integer is required



# the last value in the row is the similarity value
In [14]: session.execute(query).fetchall()
Out[14]:
[(10581, u'STI', u'STI', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
[4-methyl-3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide',
None, None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4ccc(cc4)C[NH
+]5CC[NH+](CC5)C', 495.619, 37, 29, 8, 0, 0.275862, 2, 2, 0, 41, 8, 5,
4, 0, 0, 88.68, 2.588, 0.241379, True, False, False, False, False,
False, False, False, False, False, 1.0),
 (8099, u'MPZ', u'MPZ', None, u'4-[(4-methylpiperazin-1-yl)methyl]-N-
[3-[[4-(3-pyridyl)pyrimidin-2-yl]amino]phenyl]benzamide', None, None,
u'C[NH+]1CC[NH+](CC1)Cc2ccc(cc2)C(=O)Nc3(c3)Nc4nccc(n4)c5cccnc5',
481.592, 36, 28, 8, 0, 0.285714, 2, 2, 0, 40, 8, 5, 4, 0, 0, 88.68,
2.292, 0.214286, True, False, False, False, False, False, False,
False, False, False, 0.811594202898551),
 (9593, u'PRC', u'PRC', None, u'N-[4-methyl-3-[[4-(3-
pyridyl)pyrimidin-2-yl]amino]phenyl]pyridine-3-carboxamide', None,
None, u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)NC(=O)c4cccnc4', 382.418, 29,
22, 7, 0, 0.318182, 0, 0, 0, 32, 6, 4, 4, 0, 0, 92.69, 2.143,
0.0454545, True, False, False, False, False, False, False, False,
False, False, 0.691176470588235),
 (5653, u'G6G', u'G6G', None, u'N-[3-[[3-[4-(4-methoxyanilino)-1,3,5-
triazin-2-yl]-2-pyridyl]amino]-4-methyl-phenyl]-4-[(4-
methylpiperazin-1-yl)methyl]benzamide', None, None,
u'Cc1ccc(cc1Nc2c(cccn2)c3ncnc(n3)Nc4ccc(cc4)OC)NC(=O)c5ccc(cc5)C[NH
+]6CC[NH+](CC6)C', 617.743, 46, 35, 11, 0, 0.314286, 2, 2, 0, 51, 11,
6, 5, 0, 0, 122.83, 3.668, 0.228571, True, False, False, False, False,
False, False, False, False, False, 0.619047619047619),
 (1153, u'406', u'406', None, u'4-[[(1R,3R)-3-
(dimethylamino)pyrrolidin-1-yl]methyl]-N-[4-methyl-3-[(4-pyrimidin-5-
ylpyrimidin-2-yl)amino]phenyl]-3-(trifluoromethyl)benzamide', None,
None, u'Cc1ccc(cc1Nc2nccc(n2)c3cncnc3)NC(=O)c4ccc(c(c4)C(F)(F)F)C[N@@H
+]5CC[C@@H](C5)[NH+](C)C', 578.631, 42, 30, 12, 3, 0.4, 2, 2, 0, 46,
10, 5, 4, 0, 0, 101.57, 3.187, 0.3, True, False, False, False, False,
False, False, False, False, False, 0.526881720430108),
 (8552, u'NIL', u'NIL', None, u'4-methyl-N-[3-(4-methylimidazol-1-
yl)-5-(trifluoromethyl)phenyl]-3-[[4-(3-pyridyl)pyrimidin-2-
yl]amino]benzamide', None, None,
u'Cc1ccc(cc1Nc2nccc(n2)c3cccnc3)C(=O)Nc4cc(cc(c4)n5cc(nc5)C)C(F)(F)F',
529.516, 39, 28, 11, 3, 0.392857, 0, 0, 0, 43, 8, 5, 5, 0, 0, 97.62,
3.771, 0.107143, True, False, False, False, False, False, False,
False, False, False, 0.50561797752809)]



On Jun 30, 3:27 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 30, 2011, at 9:23 AM, Adrian wrote:

  SQAlchemy 0.7.1 / pyscopg 2.2.1 / PostgreSQL 9.0
  ---

  I have a weird problem with orm queries that contain custom functions,
  in this case from postgres contrib modules. When I do a query like
  this

  session.query(Entity, func.similarity(Entity.string,
  'querystring')).all() # postgres pg_trgm extension

  I will get the error below. However, when I specify one or all the
  columns of the Entity individually it works. It also works if the
  function is in the .order_by() clause.

  Any ideas where the problem could come from?

 this seems like it has to do with the type of object being returned from 
 psycopg2, as the Query runs the rows through a uniquing function that uses 
 sets, maybe a comparison is emitting that TypeError.   The stack trace 
 doesn't quite make it clear.   It would be interesting to see what 
 session.execute(myquery.statement) sends back in the result rows.   (I 
 haven't looked up the SIMILARITY function in the PG docs yet to see what it 
 returns).









  -

  /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
  all(self)
    1675
    1676         
  - 1677         return list(self)
    1678
    1679

[sqlalchemy] Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-06-30 Thread Adrian
SQLAlchemy 0.7.1 / psycopg2 2.2.1 / PostgreSQL 9.0
---

I have a weird problem with orm queries that contain custom functions,
in this case from postgres contrib modules. When I do a query like
this

session.query(Entity, func.similarity(Entity.string,
'querystring')).all() # postgres pg_trgm extension

I will get the error below. However, when I specify one or all the
columns of the Entity individually it works. It also works if the
function is in the .order_by() clause.

Any ideas where the problem could come from?

-

/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
all(self)
   1675
   1676 
-> 1677 return list(self)
   1678
   1679 @_generative(_no_clauseelement_condition)

/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
instances(self, cursor, _Query__context)
   1916
   1917 if filter:
-> 1918 rows = filter(rows)
   1919
   1920 if context.refresh_state and self._only_load_props
\

/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/
_collections.pyc in unique_list(seq, hashfunc)
594 if not hashfunc:
595 return [x for x in seq
--> 596 if x not in seen
597 and not seen.__setitem__(x, True)]
598 else:

TypeError: an integer is required




[sqlalchemy] Conflicting state in identity map?

2011-05-20 Thread Adrian
I have a Turbogears server that uses sqlalchemy to interface with a
postgres database. Today, I noticed the server was down, so I tried
restarting it. Now my turbogears log is full of errors like:
AssertionError: A conflicting state is already present in the identity
map for key (<class 'dr8db.ModelClasses.FITSHeaderKeyword'>, (1045,))
and
Exception KeyError: KeyError((<class
'dr8db.ModelClasses.MaskbitsType'>, (61,)),) in <bound method
InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState object
at 0x4445c90>> ignored

I tried googling this stuff, but found nothing...

Basically it lets me start the paster (Turbogears) server, but after
~5-10 minutes the server dies and there are hundreds of these errors
in the log -- help!! I need to get this server back up ASAP!

Thanks,
Adrian




[sqlalchemy] Re: Conflicting state in identity map?

2011-05-20 Thread Adrian
So if it is the latter, that the Session is being shared amongst
threads, what is the correct way to handle the sessions from within
Turbogears? What I do now is create a scoped_session in the model, and
import it into each controller. For a while, I made use of special
functions _before() and _after() to create the session in _before the
page loads, and close it in _after, but then I was getting
DetachedInstanceErrors when I tried to access columns from objects
returned to the template.

I'm not sure how familiar you are with Turbogears, so I apologize if
this is too much of a TG question...but I asked this on their mailing
list and their answer was that what I was doing was correct --
obviously it's not if I'm getting these AssertionErrors...

Thanks for the quick reply, as usual!
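
(For reference, the setup I have now, as a sketch -- expire_on_commit=False 
is one way to keep already-loaded attributes readable after the session is 
removed:)

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine, expire_on_commit=False))

# in each controller: Session.query(...), Session.add(...), etc.
# at the very end of the request (e.g. in _after, once rendering is done):
Session.remove()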

On May 20, 9:59 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 20, 2011, at 9:45 AM, Adrian wrote:





  I have a Turbogears server that uses sqlalchemy to interface with a
  postgres database. Today, I noticed the server was down, so I tried
  restarting it. Now my turbogears log is full of errors like:
  AssertionError: A conflicting state is already present in the identity
  map for key (class 'dr8db.ModelClasses.FITSHeaderKeyword', (1045,))
  and
  Exception KeyError: KeyError((class
  'dr8db.ModelClasses.MaskbitsType', (61,)),) in bound method
  InstanceState._cleanup of sqlalchemy.orm.state.InstanceState object
  at 0x4445c90 ignored

  I tried googling this stuff, but found nothing...

  Basically it lets me start the paster (Turbogears) server, but after
  ~5-10 minutes the server dies and there are hundreds of these errors
  in the log -- help!! I need to get this server back up ASAP!

 That's an assertion that is generally unreachable from within the Session.   
 The only ways I think you could get there would be via direct manipulation of 
 session.identity_map, or if the Session is being shared among concurrent 
 threads, which is not supported.

 The main thing you'd be looking for here is, at what point did this server 
 begin to fail and what event precluded that happening ?    Either a code 
 update, or perhaps the app was never tested against its current load, are the 
 two possibilities.




[sqlalchemy] Re: Conflicting state in identity map?

2011-05-20 Thread Adrian
Ok, I'll definitely do some quality debugging...

Just to be clear -- I **don't** have to worry about closing my
sessions in each controller?

On May 20, 6:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 20, 2011, at 4:12 PM, Adrian wrote:

  So if it is the latter, that the Session is being shared amongst
  threads, what is the correct way to handle the sessions from within
  Turbogears? What I do now is create a scoped_session in the model, and
  import it into each controller. For a while, I made use of special
  functions _before() and _after() to create the session in _before the
  page loads, and close it in _after, but then I was getting
  DetatchedInstanceErrors when I tried to access columns from objects
  returned to the template.

  I'm not sure how familiar you are with Turbogears, so I apologize if
  this is too much of a TG question...but I asked this on their mailing
  list and their answer was that what I was doing was correct --
  obviously it's not if I'm getting these AssertionErrors...

  Thanks for the quick reply, as usual!

 So scoped_session() will ensure that sessions are associated with threads.  
 Its not a total guarantee of thread safety if for example you're placing 
 objects in some kind of in-memory cache, then using them in other threads 
 without detaching them first from their original session.

 You really need to figure out what the catalyst for the issue was - short of 
 locating the actual cause of the bug, that would produce the most clues 
 towards what it is.   Or, a really harsh approach that I had to take once, 
 when a deeply nested call to a defunct Google service started crashing the 
 site: I had to disable all pages on the site, then slowly turn one page on 
 after the next to isolate which one was the cause of the issue.  Clearly 
 that isn't an option in lots of cases, but it depends on whether you can 
 reproduce the issue locally, perhaps when load testing with Apache ab and such.

 If it were something I had seen before that would help, but I've never seen 
 anyone hit that assertion before.
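
As a minimal sketch of that per-thread pattern (the engine URL and the
request-handler name are hypothetical, not from this thread):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine('postgresql:///mydb')
    Session = scoped_session(sessionmaker(bind=engine))

    def handle_request():
        session = Session()          # each thread gets its own session
        try:
            # ... ORM work for this request only ...
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            Session.remove()         # discard the thread-local session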





  On May 20, 9:59 am, Michael Bayer mike...@zzzcomputing.com wrote:
  On May 20, 2011, at 9:45 AM, Adrian wrote:

  I have a Turbogears server that uses sqlalchemy to interface with a
  postgres database. Today, I noticed the server was down, so I tried
  restarting it. Now my turbogears log is full of errors like:
  AssertionError: A conflicting state is already present in the identity
  map for key (<class 'dr8db.ModelClasses.FITSHeaderKeyword'>, (1045,))
  and
  Exception KeyError: KeyError((<class
  'dr8db.ModelClasses.MaskbitsType'>, (61,)),) in <bound method
  InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState object
  at 0x4445c90>> ignored

  I tried googling this stuff, but found nothing...

  Basically it lets me start the paster (Turbogears) server, but after
  ~5-10 minutes the server dies and there are hundreds of these errors
  in the log -- help!! I need to get this server back up ASAP!

  That's an assertion that is generally unreachable from within the Session. 
    The only ways I think you could get there would be via direct 
  manipulation of session.identity_map, or if the Session is being shared 
  among concurrent threads, which is not supported.

  The main thing you'd be looking for here is, at what point did this server 
  begin to fail, and what event preceded that happening?    Either a code 
  update, or perhaps the app was never tested against its current load, are 
  the two possibilities.





[sqlalchemy] sqlalchemy speed help

2011-05-02 Thread Adrian
I'm facing some interesting speed issues with my database that only
seem to crop up within sqlalchemy. I have 2 tables in a one-to-one
relationship, with about ~1 million rows each. From Python, I grab an
object from one table (table A):
rowA = session.query(A).limit(1).one()

And then access a row in table B:
blah = rowA.b.someRow

I noticed that for ~500 of these, it started taking a long time - so I
wrapped it in time() calls:
time1 = time.time()
blah = rowA.b.someRow
print time.time() - time1

What I find is that when I run this in Python, it takes ~0.01 seconds
per object, but when I execute it directly in the database:
EXPLAIN ANALYZE SELECT *
FROM a
WHERE a.b_pk = 20;

It only takes ~0.01 milliseconds! Is there some optimization I can do
from within the Model Class definitions, or do I really need to
execute raw sql from my code to get raw speed?

Thanks,
Adrian




[sqlalchemy] Re: sqlalchemy speed help

2011-05-02 Thread Adrian
Awesome, I'll work through these suggestions -- thanks for the speedy
reply!

On May 2, 11:29 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 2, 2011, at 11:14 AM, Adrian wrote:





  I'm facing some interesting speed issues with my database that only
  seem to crop up within sqlalchemy. I have 2 tables in a one-to-one
  relationship, with about ~1 million rows each. From Python, I grab an
  object from one table (table A):
     rowA = session.query(A).limit(1).one()

  And then access a row in table B:
     blah = rowA.b.someRow

  I noticed that for ~500 of these, it started taking a long time - so I
  wrapped it in time() calls:
     time1 = time.time()
     blah = rowA.b.someRow
     print time.time() - time1

  What I find is that when I run this in Python, it takes ~0.01 seconds
  per object, but when I execute it directly in the database:
     EXPLAIN ANALYZE SELECT *
     FROM a
     WHERE a.b_pk = 20;

  It only takes ~0.01 milliseconds! Is there some optimization I can do
  from within the Model Class definitions, or do I really need to
  execute raw sql from my code to get raw speed?

 a. using EXPLAIN ANALYZE does not take time spent fetching rows, network 
 overhead,  or the communication between the DBAPI and the database into 
 account .   A more apt comparison would be against raw DBAPI access using 
 connection.cursor() where all rows are fetched.
 b. turn on echo=True, see what SQL is being emitted.
 c. assuming every rowA.b.someRow is a lazyload of the related row, you are 
 executing 2x as many queries.   Use eager loading:  
 http://www.sqlalchemy.org/docs/orm/loading.html
 d. it is faster to load individual columns, that is sess.query(cls.id, 
 cls.name), rather than the whole object, i.e. sess.query(cls).  The latter 
 has a lot more bookkeeping to perform.
 e.  Emitting hundreds of single object queries is by far the least efficient 
 way to get data back, it would be much better if you could emit fewer queries 
 to load records in batches of 1000 or so.
 f. ORMs add lots of overhead as the cost of automation and state management, 
 not unlike Python itself does at the cost of high level, easy to use objects 
 instead of writing raw C code.   Some perspective on this specific to 
 SQLAlchemy is described here:  
 http://techspot.zzzeek.org/2010/12/12/a-tale-of-three-profiles/
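
A short sketch of points (c) and (d) against the A/B models from the
question (attribute names assumed; joinedload() is the 0.6+ name, earlier
releases called it eagerload()):

    from sqlalchemy.orm import joinedload

    # (c) load each A together with its related B in one query,
    # instead of one lazyload per row
    rows = session.query(A).options(joinedload(A.b)).limit(1000).all()

    # (d) fetch bare column tuples when full objects aren't needed;
    # this skips most of the ORM's per-object bookkeeping
    pairs = session.query(A.id, A.b_pk).limit(1000).all()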





  Thanks,
  Adrian





[sqlalchemy] Database Views

2010-11-08 Thread Adrian
Hi all,

This is a topic that has been discussed before, but I haven't been
able to successfully implement any of the proposed solutions in my own
code. I've created a few Views in my postgres database, and I'm
looking for a way to simply query them from sqlalchemy. I tried just
treating them as tables, but this failed when the mapper wasn't able
to find a primary key (makes sense).

The code I tried was just a naive attempt:

class SpectrumView(object):
pass

spectrum_view = Table('spectrum_view', metadata, autoload=True)
mapper(SpectrumView, spectrum_view)

So I read in some message that it might be possible to specify a
primary key to the mapper, like
mapper(SpectrumView, spectrum_view, primary_key=[..]), but the problem
is I'm not defining my tables in python - they are already created, so
I don't have any column objects to pass to primary_key.

Anyway, I'm just curious to see if anyone has had success with an
implementation of database views in sqlalchemy, and possibly examples
of those cases.

Thanks,
Adrian




[sqlalchemy] Re: Database Views

2010-11-08 Thread Adrian
Thanks for the quick reply, this is exactly what I was looking for!

Thanks again,
Adrian

On Nov 8, 2:29 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 8, 2010, at 1:16 PM, Adrian wrote:





  Hi all,

  This is a topic that has been discussed before, but I haven't been
  able to successfully implement any of the proposed solutions in my own
  code. I've created a few Views in my postgres database, and I'm
  looking for a way to simply query them from sqlalchemy. I tried just
  treating them as tables, but this failed when the mapper wasn't able
  to find a primary key (makes sense).

  The code I tried was just a naive attempt:

  class SpectrumView(object):
     pass

  spectrum_view = Table('spectrum_view', metadata, autoload=True)
  mapper(SpectrumView, spectrum_view)

  So I read in some message that it might be possible to specify a
  primary key to the mapper, like
  mapper(SpectrumView, spectrum_view, primary_key=[..]), but the problem
  is I'm not defining my tables in python - they are already created, so
  I don't have any column objects to pass to primary_key.

 The column objects are always available from table.c regardless of whether 
 autoload was used or not:

 mapper(SpectrumView, spectrum_view, primary_key=[spectrum_view.c.somecolumn, 
 spectrum_view.c.someothercolumn])
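
Putting it together, a self-contained sketch (the pk column name and the
database URL are assumptions, not from this thread):

    from sqlalchemy import MetaData, Table, create_engine
    from sqlalchemy.orm import mapper, sessionmaker

    engine = create_engine('postgresql:///mydb')
    metadata = MetaData()

    class SpectrumView(object):
        pass

    # a view reflects like a table but reports no primary key,
    # so tell the mapper which column(s) identify a row
    spectrum_view = Table('spectrum_view', metadata,
                          autoload=True, autoload_with=engine)
    mapper(SpectrumView, spectrum_view,
           primary_key=[spectrum_view.c.pk])

    session = sessionmaker(bind=engine)()
    first = session.query(SpectrumView).first()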





  Anyway, I'm just curious to see if anyone has had success with an
  implementation of database views in sqlalchemy, and possibly examples
  of those cases.

  Thanks,
  Adrian





[sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Adrian Price-Whelan
Hi,

I was just wondering at first whether there was a known issue with
ARRAY types, but here is the query:
spectra =
session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename
== spPlateFilename).all()

It should return ~1000 objects equaling about 120MB of data. In
Python, this query takes 10 minutes to complete,  but as a SQL query
(copying and pasting the echo'd command) it takes a few seconds:
SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra,
spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values,
spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk,
spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS
spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask,
spectrum.or_mask AS spectrum_or_mask FROM spectrum JOIN
spectrum_header ON spectrum_header.pk = spectrum.spectrum_header_pk
WHERE spectrum_header.filename = 'spPlate-3586-55181.fits'

autoflush and autocommit are both set to False.

It seems like a straightforward query so I'm confused as to what could
be getting hung up.

Thanks for any insight,

Adrian

On Jul 16, 10:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 You absolutely need to turn on echoing and locate the specific SQL query 
 which causes the issue.   Queries can take excessive time for a very wide 
 variety of reasons.

 On Jul 16, 2010, at 12:56 PM, Adrian Price-Whelan wrote:



  Hello --

  I'm working with a database populated with data originally from a file 
  structure of files that are ~150MB each. We are dealing with a lot of data 
  that is being stored in the database using the 'ARRAY' datatype, 
  specifically numeric[]. After loading some of the data into the database I 
  tried performing a query to get back some data, and comparing it with code 
  that reads directly from the file system - but the database query took ~50 
  times longer to complete. For instance, to retrieve 100 records that 
  contain a few 4000 element arrays each using the code that reads the 
  filesystem it took less than a second, but the query on the database took 
  around 25 seconds to complete. Has anyone else had issues with array types 
  slowing down queries or does this sound more like another issue?

  Thanks!
  Adrian





[sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Adrian Price-Whelan
 does it take a few seconds to fully fetch all the results and it only gets 
 1000 rows ?  or is that just to get the initial result?

I'm not sure what you mean by this - the query does return 1000 rows.

 also if any of the individual columns are very large BLOBs or perhaps very 
 large PG arrays that would add to the overhead of a fetch.

There definitely are columns of PG arrays ~4000 elements each, so back
to my first email it seems like the culprit here could be the ARRAYs

Thanks for your help,

Adrian

On Jul 19, 10:10 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jul 19, 2010, at 9:52 AM, Adrian Price-Whelan wrote:





  Hi,

  I was just wondering at first whether there was a known issue with
  ARRAY types, but here is the query:
  spectra =
  session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename
  == spPlateFilename).all()

  It should return ~1000 objects equaling about 120MB of data. In
  Python, this query takes 10 minutes to complete,  but as a SQL query
  (copying and pasting the echo'd command) it takes a few seconds:
  SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra,
  spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values,
  spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk,
  spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS
  spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask,
  spectrum.or_mask AS spectrum_or_mask FROM spectrum JOIN
  spectrum_header ON spectrum_header.pk = spectrum.spectrum_header_pk
  WHERE spectrum_header.filename = 'spPlate-3586-55181.fits'

  autoflush and autocommit are both set to False.

  It seems like a straightforward query so I'm confused as to what could
  be getting hung up.

 does it take a few seconds to fully fetch all the results and it only gets 
 1000 rows ?  or is that just to get the initial result? these are different 
 things.   also if any of the individual columns are very large BLOBs or 
 perhaps very large PG arrays that would add to the overhead of a fetch.   You 
 can also try writing a DBAPI-only script that runs the query, as well as 
 running engine.execute(myquery.statement) and fetching rows that way to see 
 if some in-object process is the factor (which is unlikely).
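
A sketch of those two comparisons (column list abbreviated; the %s
paramstyle assumes the psycopg2 DBAPI):

    import time

    sql = """SELECT spectrum.pk, spectrum.values, spectrum.inv_var
             FROM spectrum JOIN spectrum_header
               ON spectrum_header.pk = spectrum.spectrum_header_pk
             WHERE spectrum_header.filename = %s"""

    # Core level: SQLAlchemy result processing, no ORM identity-map work
    t0 = time.time()
    rows = engine.execute(sql, 'spPlate-3586-55181.fits').fetchall()
    print 'engine.execute + fetchall():', time.time() - t0

    # raw DBAPI: no SQLAlchemy result processing at all
    cursor = engine.raw_connection().cursor()
    t0 = time.time()
    cursor.execute(sql, ('spPlate-3586-55181.fits',))
    raw = cursor.fetchall()
    print 'raw DBAPI cursor:', time.time() - t0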





  Thanks for any insight,

  Adrian

  On Jul 16, 10:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  You absolutely need to turn on echoing and locate the specific SQL query 
  which causes the issue.   Queries can take excessive time for a very wide 
  variety of reasons.

  On Jul 16, 2010, at 12:56 PM, Adrian Price-Whelan wrote:

  Hello --

  I'm working with a database populated with data originally from a file 
  structure of files that are ~150MB each. We are dealing with a lot of 
  data that is being stored in the database using the 'ARRAY' datatype, 
  specifically numeric[]. After loading some of the data into the database 
  I tried performing a query to get back some data, and comparing it with 
  code that reads directly from the file system - but the database query 
  took ~50 times longer to complete. For instance, to retrieve 100 records 
  that contain a few 4000 element arrays each using the code that reads the 
  filesystem it took less than a second, but the query on the database took 
  around 25 seconds to complete. Has anyone else had issues with array 
  types slowing down queries or does this sound more like another issue?

  Thanks!
  Adrian





Re: [sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Adrian Price-Whelan
Here is some more detailed information trying the query multiple ways:

Piping the command into psql and writing to a tmp file takes 12 seconds (tmp 
file is 241MB):

time echo "SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra, 
spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values, 
spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask AS 
spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk = 
spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
'spPlate-3586-55181.fits';" | psql spectradb -U postgres > tmp

real0m12.052s
user0m2.501s
sys 0m0.274s

engine.execute on the same query took ~6 seconds:
spectra = engine.execute("SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS 
spectrum_ra, spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values, 
spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask AS 
spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk = 
spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
'spPlate-3586-55181.fits';")

spectra = 
session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename == 
"spPlate-3665-55247.fits").all() clocked in at 489 seconds

Thanks, 
Adrian

On Jul 19, 2010, at 12:24 PM, David Gardner wrote:

 Try running that query directly against the database and see how long that 
 takes. Also try running explain on that query to make sure it is using your 
 indexes properly. Since you are only using a single filter, make sure that 
 spectrum_header.filename has an index, and make sure your foreign key column 
 spectrum.spectrum_header_pk is indexed as well.
 
 On 07/19/2010 08:46 AM, Adrian Price-Whelan wrote:
 does it take a few seconds to fully fetch all the results and it only gets 
 1000 rows ?  or is that just to get the initial result?
 
 I'm not sure what you mean by this - the query does return 1000 rows.
 
   
 also if any of the individual columns are very large BLOBs or perhaps very 
 large PG arrays that would add to the overhead of a fetch.
 
 There definitely are columns of PG arrays ~4000 elements each, so back
  to my first email it seems like the culprit here could be the ARRAYs
 
 Thanks for your help,
 
 Adrian
 
  On Jul 19, 10:10 am, Michael Bayer mike...@zzzcomputing.com wrote:
   
 On Jul 19, 2010, at 9:52 AM, Adrian Price-Whelan wrote:
 
 
 
 
 
 
 Hi,
   
 
 I was just wondering at first whether there was a known issue with
 ARRAY types, but here is the query:
 spectra =
 session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename
 == spPlateFilename).all()
   
 
 It should return ~1000 objects equaling about 120MB of data. In
  Python, this query takes 10 minutes to complete,  but as a SQL query
 (copying and pasting the echo'd command) it takes a few seconds:
 SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra,
 spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values,
 spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk,
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask,
 spectrum.or_mask AS spectrum_or_mask FROM spectrum JOIN
 spectrum_header ON spectrum_header.pk = spectrum.spectrum_header_pk
 WHERE spectrum_header.filename = 'spPlate-3586-55181.fits'
   
 
 autoflush and autocommit are both set to False.
   
 
 It seems like a straightforward query so I'm confused as to what could
 be getting hung up.
   
 does it take a few seconds to fully fetch all the results and it only gets 
 1000 rows ?  or is that just to get the initial result? these are different 
 things.   also if any of the individual columns are very large BLOBs or 
 perhaps very large PG arrays that would add to the overhead of a fetch.   
 You can also try writing a DBAPI-only script that runs the query, as well 
 as running engine.execute(myquery.statement) and fetching rows that way to 
 see if some in-object process is the factor (which is unlikely).
 
 
 
 
 
 
 Thanks for any insight,
   
 
 Adrian
   
 
  On Jul 16, 10:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   
  You absolutely need to turn on echoing and locate the specific SQL query 
 which causes the issue.   Queries can take excessive time for a very wide 
 variety of reasons.
 
 
 On Jul 16, 2010, at 12:56 PM, Adrian Price-Whelan wrote:
 
 
 Hello --
   
 
 I'm working with a database populated with data originally from a file 
 structure of files that are ~150MB each. We are dealing with a lot of 
 data that is being stored in the database using the 'ARRAY' datatype

[sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Adrian Price-Whelan
Also, as a follow-up - inserting these ARRAYs into the database is
very slow as well, slower than expected that is. Granted, it is
looping over 1000 objects and inserting a few 4000 element arrays for
each object, but doing one big SQL query takes considerably less time
than session.add(object), session.commit().

I thought that might be another clue, thanks!
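
For the insert side, one Core executemany usually beats per-object
add()/commit(); a sketch (spectrum_table and parsed_spectra are
hypothetical names for the mapped Table and the parsed file data):

    rows = [{'ra': s.ra, 'dec': s.dec, 'values': s.values,
             'spectrum_header_pk': header_pk}
            for s in parsed_spectra]
    # one executemany round trip instead of ~1000 ORM flushes
    engine.execute(spectrum_table.insert(), rows)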

On Jul 19, 1:53 pm, Adrian Price-Whelan adrian@gmail.com wrote:
 Here is some more detailed information trying the query multiple ways:

 Piping the command into psql and writing to a tmp file takes 12 seconds (tmp 
 file is 241MB):

 time echo "SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra, 
 spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values, 
 spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask AS 
 spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk = 
 spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
 'spPlate-3586-55181.fits';" | psql spectradb -U postgres > tmp

 real    0m12.052s
 user    0m2.501s
 sys     0m0.274s

 engine.execute on the same query took ~6 seconds:
 spectra = engine.execute("SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS 
 spectrum_ra, spectrum.dec AS spectrum_dec, spectrum.values AS 
 spectrum_values, spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask AS 
 spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk = 
 spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
 'spPlate-3586-55181.fits';")

 spectra = 
 session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename 
 == "spPlate-3665-55247.fits").all() clocked in at 489 seconds

 Thanks,
 Adrian

 On Jul 19, 2010, at 12:24 PM, David Gardner wrote:



  Try running that query directly against the database and see how long that 
  takes. Also try running explain on that query to make sure it is using your 
  indexes properly. Since you are only using a single filter, make sure that 
  spectrum_header.filename has an index, and make sure your foreign key column 
  spectrum.spectrum_header_pk is indexed as well.

  On 07/19/2010 08:46 AM, Adrian Price-Whelan wrote:
  does it take a few seconds to fully fetch all the results and it only 
  gets 1000 rows ?  or is that just to get the initial result?

  I'm not sure what you mean by this - the query does return 1000 rows.

  also if any of the individual columns are very large BLOBs or perhaps 
  very large PG arrays that would add to the overhead of a fetch.

  There definitely are columns of PG arrays ~4000 elements each, so back
  to my first email it seems like the culprit here could be the ARRAYs

  Thanks for your help,

  Adrian

  On Jul 19, 10:10 am, Michael Bayer mike...@zzzcomputing.com wrote:

  On Jul 19, 2010, at 9:52 AM, Adrian Price-Whelan wrote:

  Hi,

  I was just wondering at first whether there was a known issue with
  ARRAY types, but here is the query:
  spectra =
  session.query(Spectrum).join(SpectrumHeader).filter(SpectrumHeader.filename
  == spPlateFilename).all()

  It should return ~1000 objects equaling about 120MB of data. In
  Python, this query takes 10 minutes to complete,  but as a SQL query
  (copying and pasting the echo'd command) it takes a few seconds:
  SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra,
  spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values,
  spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk,
  spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS
  spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask,
  spectrum.or_mask AS spectrum_or_mask FROM spectrum JOIN
  spectrum_header ON spectrum_header.pk = spectrum.spectrum_header_pk
  WHERE spectrum_header.filename = 'spPlate-3586-55181.fits'

  autoflush and autocommit are both set to False.

  It seems like a straightforward query so I'm confused as to what could
  be getting hung up.

  does it take a few seconds to fully fetch all the results and it only 
  gets 1000 rows ?  or is that just to get the initial result? these are 
  different things.   also if any of the individual columns are very large 
  BLOBs or perhaps very large PG arrays that would add to the overhead of a 
  fetch.   You can also try writing a DBAPI-only script that runs the 
  query, as well as running engine.execute(myquery.statement) and fetching 
  rows that way to see if some in-object process is the factor (which is 
  unlikely).

  Thanks for any insight,

  Adrian

  On Jul 16, 10:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  You absolutely need to turn on echoing and locate the specific SQL 
  query which causes the issue.   Queries can take excessive time

Re: [sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Adrian Price-Whelan
First off, thanks for your quick replies!

I will look into this, but I can tell you that the arrays are strictly numbers 
and the array columns are type numeric[]

Thanks again,
Adrian

On Jul 19, 2010, at 3:47 PM, Michael Bayer wrote:

 
 On Jul 19, 2010, at 1:53 PM, Adrian Price-Whelan wrote:
 
 Here is some more detailed information trying the query multiple ways:
 
 Piping the command into psql and writing to a tmp file takes 12 seconds (tmp 
 file is 241MB):
 
  time echo "SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra, 
 spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values, 
 spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask 
 AS spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk 
 = spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
  'spPlate-3586-55181.fits';" | psql spectradb -U postgres > tmp
 
 real 0m12.052s
 user 0m2.501s
 sys  0m0.274s
 
 engine.execute on the same query took ~6 seconds:
  spectra = engine.execute("SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS 
 spectrum_ra, spectrum.dec AS spectrum_dec, spectrum.values AS 
 spectrum_values, spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask 
 AS spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk 
 = spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
  'spPlate-3586-55181.fits';")
 
  Call fetchall() on the result to get a better idea what's going on.   Here's 
 the source of the ARRAY type:
 
    def result_processor(self, dialect, coltype):
        item_proc = self.item_type.result_processor(dialect, coltype)
        if item_proc:
            def convert_item(item):
                if isinstance(item, list):
                    return [convert_item(child) for child in item]
                else:
                    return item_proc(item)
        else:
            def convert_item(item):
                if isinstance(item, list):
                    return [convert_item(child) for child in item]
                else:
                    return item
        def process(value):
            if value is None:
                return value
            return [convert_item(item) for item in value]
        return process
 
 as you can see, if your ARRAY is of a Unicode type or similar, convert_item() must 
 be called on each item (only happens during fetch).  This is a likely cause 
 of the slowdown and you should consider what kind of converters you're using 
 on your ARRAY members.
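
To illustrate the difference (import path per the 0.6 series; whether an
item type installs a converter also depends on the DBAPI in use):

    from sqlalchemy import Column, Float, Unicode
    from sqlalchemy.dialects.postgresql import ARRAY

    values = Column(ARRAY(Float))    # no per-item result processor: cheap fetch
    labels = Column(ARRAY(Unicode))  # convert_item() runs on every element
                                     # of every fetched row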
 
 




[sqlalchemy] Unknown Issue Causing Extremely Slow Query

2010-07-16 Thread Adrian Price-Whelan
Hello --

I'm working with a database populated with data originally from a file 
structure of files that are ~150MB each. We are dealing with a lot of data that 
is being stored in the database using the 'ARRAY' datatype, specifically 
numeric[]. After loading some of the data into the database I tried performing 
a query to get back some data, and comparing it with code that reads directly 
from the file system - but the database query took ~50 times longer to 
complete. For instance, to retrieve 100 records that contain a few 4000 element 
arrays each using the code that reads the filesystem it took less than a 
second, but the query on the database took around 25 seconds to complete. Has 
anyone else had issues with array types slowing down queries or does this sound 
more like another issue?

Thanks!
Adrian




[sqlalchemy] Cascading orphan deletes in self-referential table

2010-04-25 Thread Adrian
Hi,

I'm using the following table (shortened, removed unnecessary columns)
to store a menu tree.

class MenuNode(Base):
    __tablename__ = 'menu'
    id = Column(Integer, primary_key=True, nullable=False, unique=True)
    parent_id = Column(Integer,
                       ForeignKey('menu.id', onupdate='CASCADE', ondelete='CASCADE'),
                       nullable=True, index=True)
    name = Column(String(64), nullable=False)

    parent = relationship('MenuNode', remote_side='MenuNode.id',
                          cascade='all, delete', passive_deletes=True,
                          backref=backref('children', order_by=position))

Top-level menu nodes have parent_id=NULL.
When deleting a node via session.delete(node) sqlalchemy issues UPDATE
statements to set all children's parent_id fields to NULL instead of
deleting the children.
How can i tell SA that I want to delete any children?




Re: [sqlalchemy] Cascading orphan deletes in self-referential table

2010-04-25 Thread Adrian
 Does specifying cascade='all, delete-orphan' on the parent
 relationship accomplish what you're after?
delete-orphan doesn't work in self-referential relationships (there are
always some nodes without a parent).
However, moving passive_deletes=True into the backref() fixed it.
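
A sketch of the working arrangement, condensed from the question's model:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import backref, relationship

    class MenuNode(Base):
        __tablename__ = 'menu'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer,
                           ForeignKey('menu.id', ondelete='CASCADE'),
                           index=True)
        name = Column(String(64), nullable=False)

        # passive_deletes on the children side lets the database's
        # ON DELETE CASCADE remove the subtree, instead of SQLAlchemy
        # first nulling out the children's parent_id
        parent = relationship('MenuNode', remote_side='MenuNode.id',
                              backref=backref('children',
                                              passive_deletes=True))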

-- 
Adrian




[sqlalchemy] Interface to (very old) Sybase?

2010-01-27 Thread Adrian von Bidder
Heyho!

Has anybody worked with a Sybase Anywhere (ASA 9 -- yes, very old ...) 
database?  I may need to build a simple CRUD (actually only R and U ;-) 
frontend to some legacy application. (I probably will give TurboGears a try 
for this.)

I do have a JDBC driver, and I *think* ODBC should work (when I tried it 
some time ago), but I haven't worked (much) with either.  (I think I 
remember having seen a jdbc bridge for either Perl or Python, but I'm not 
sure anymore, and at least I can't find Debian packages right now.)

Thanks in advance
-- vbi

-- 
I haven't often attended sperm races, but I have given a lot of starts.
-+- Olivier de Kersauson -+-




[sqlalchemy] Apply function to object(s) returned by association_proxy

2010-01-17 Thread Adrian
Hi,

is there an easy way to apply a function to the items returned by
association_proxy? Currently, I have a setup like this: A-B-C, both
one-to-many relations; A.C (association_proxy('B','C')) returns a list
of lists but I would like to apply a function (list(chain.from_iterable
(x))) that flattens it to a simple list. What would be the best way to
implement this?

Cheers,

Adrian




[sqlalchemy] Re: Apply function to object(s) returned by association_proxy

2010-01-17 Thread Adrian
The read-only version was all I needed, thanks.

On Jan 17, 3:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 17, 2010, at 9:20 AM, Adrian wrote:

  Hi,

  is there an easy way to apply a function to the items returned by
  association_proxy? Currently, I have a setup like this: A-B-C, both
  one-to-many relations; A.C (association_proxy('B','C')) returns a list
  of lists but I would like to apply a function (list(chain.from_iterable
  (x))) that flattens it to a simple list. What what be the best way to
  implement this?

 you should be able to pass proxy_factory to your association_proxy:

 foo = association_proxy('B', 'C', proxy_factory=lambda coll, creator, value: 
 list(chain.from_iterable(coll)))

 that returned list though won't interact in the other direction, i.e. when 
 you append or remove items from it no events will propagate up to the C or B 
 since above we're not adding handlers for that (I'm assuming this isn't 
 needed since you can't determine that from a flattened list anyway).

 In that case association_proxy here wouldn't even be needed, a read-only 
 version is just:

 class A(object):
     @property
     def foo(self):
         return list(chain.from_iterable(b.C for b in self.B))



  Cheers,

  Adrian




Re: [sqlalchemy] Multi-column primary key with autoincrement?

2009-12-16 Thread Adrian von Bidder
Heyho!

[multi-column primary key where one column is autoincrement int]

On Wednesday 16 December 2009 05.29:54 Daniel Falk wrote:
 The true problem here 
 is with sqlite, which tries to make a smart choice about whether to 
 autoincrement or not.  And it gets it wrong.  SQLAlchemy is correct to 
 not specify the id field in the INSERT statement.  That's the cue to the 
 db that it needs to supply that value on its own.

Hmm.

Closer inspection shows that 
 * sqlite doesn't even support the scenario I want (autoincrement on one 
column with multi column primary key) and
 * sqlalchemy doesn't notice this and just creates a two column primary key 
without autoincrement at all.

Is there any hope that sqlalchemy will start to simulate the feature on 
sqlite? (I'm using the python-pysqlite2 2.5.5-3 / libsqlite3-0 3.6.20-1 / 
python-sqlalchemy 0.5.6-1 packages from Debian squeeze, btw)

I've opened #1642 now.

PostgreSQL handles this in just the way I was expecting (no surprise since 
my expectation on autoincrement columns is derived from the way pg builds 
its serial data type :-), so I'll have to test if sqlalchemy will do the 
right thing here.  Then I can at least use pg (I was planning to do so in 
production anyway; sqlite is convenient for development though.)
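
For reference, a sketch of the explicit-sequence approach that works on
PostgreSQL even with a composite primary key (sequence and table names are
assumptions, not from this thread):

    from sqlalchemy import Column, Integer, Sequence

    class Entry(DeclarativeBase):
        __tablename__ = 'entries'
        # an explicit Sequence keeps id generated server-side even though
        # the composite primary key disables implicit autoincrement
        id = Column(Integer, Sequence('entry_id_seq'), primary_key=True)
        version = Column(Integer, primary_key=True, default=0)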


cheers
-- vbi

-- 
featured product: GNU Privacy Guard - http://gnupg.org




Re: [sqlalchemy] Multi-column primary key with autoincrement?

2009-12-16 Thread Adrian von Bidder
Heyho!

On Wednesday 16 December 2009 16:36:10 Michael Bayer wrote:
 You need to either use the default keyword and specify a
 function or SQL expression that will generate new identifiers, or just set
 up the PK attributes on your new objects before adding them to the
 session.

... or just switch to pg for testing.  I don't care to support other db 
anyway.

Ok, thanks for clearing this up.

cheers
-- vbi


-- 
If we can capitalize on something that did or did not happen in 1947
then it can help the entire state.
-- Rep. Dan Foley on inventing the Extraterrestrial Culture Day




[sqlalchemy] Multi-column primary key with autoincrement?

2009-11-30 Thread Adrian von Bidder
Heyho!

My small blog-style web site should support article versioning, so:

class Entry(DeclarativeBase):
    id = Column(Integer, autoincrement=True, primary_key=True)
    version = Column(Integer, primary_key=True, default=0)
    # ... and more stuff (content, author, ...)

it seems autoincrement is not supported in this case.

(I'm working in a TurboGears 2 environment with SQLite; final deployment 
will probably be with PostgreSQL)

At least, when trying to populate a db from TurboGears' paster setup, I 
get an IntegrityError (id may not be NULL) and see that id is not set in 
the INSERT statement.  Just removing the version column from the Entry class 
suffices to let it work as before.

Additionally: to keep the queries simple and fast, I'll introduce a 
current flag which should be set to True for the latest version of any id.  
Are there hooks I can use to manage this?  (something like: execute UPDATE 
... set current = False where version < myversion whenever an Entry is being 
added to the db with a version > 0)

And a third question: what is the easiest way to create a new version? Copy 
my 'Entry' object element by element to a new instance, or does sqla provide 
a cloning mechanism?

thanks a lot!
cheers
-- vbi


-- 
SCO's lawsuit is a lost cause. The implications for Linux users are
rather like the implications for passengers on an ocean liner of a
seagull diving into the water nearby.
-- Thomas Carey, Bromberg  Sunstein, LLP, attorney




[sqlalchemy] Re: executemany + postgresql

2009-11-06 Thread Adrian von Bidder
Heyho!

On Friday 06 November 2009 02.46:11 Jon Nelson wrote:
 ... was performing an individual
 INSERT for every single row.

Don't know sqlalchemy good enough, but for big bulk imports on the SQL side, 
shouldn't COPY be used?  Which is as far as I know pg-specific / non-SQL 
standard.
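
A sketch of driving COPY through the psycopg2 connection underneath a
SQLAlchemy engine (table name and data are hypothetical):

    from StringIO import StringIO  # Python 2, as in this thread's era

    raw = engine.raw_connection()        # the DBAPI connection itself
    cur = raw.cursor()
    buf = StringIO('1\t2.5\n2\t3.7\n')   # tab-separated rows
    cur.copy_from(buf, 'mytable', columns=('id', 'value'))
    raw.commit()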


cheers
-- vbi

-- 
Lo-lan-do モインさん?
nobse Lo-lan-do: Gesundheit.
-- #debian-devel




[sqlalchemy] Re: Portable Enum Columns

2009-10-06 Thread Adrian von Bidder
On Tuesday 06 October 2009 14.45:33 Yannick Gingras wrote:
[...]
 Is there another way to do it?  Something that would be portable and
 to both MySQL and Postgres would be great.

Since both pg and mysql have a native enum type, it's only a matter of 
writing the appropriate code in the SQL dialects.

This came up just recently, search the list archive.  I can't remember if 
the answer last time was "it would be quite easy to do" or "somebody is 
working on it", though.

cheers
-- vbi

-- 
featured product: PostgreSQL - http://postgresql.org




[sqlalchemy] vertical partitioning

2009-09-30 Thread Adrian von Bidder
Heyho!

Is there a tutorial on vertical partitioning?

I have a table Entry and a table EntryFlags (1:1 relation from 
EntryFlags to Entry).  The idea is that while there is a large number of 
Entry rows only a small number has flags set (and thus needs an entry in 
EntryFlags; note that they don't need to be booleans, despite the name).

So having a separate table for the EntryFlags keeps the rows of Entry 
smaller and should also speed up if I select by certain flags.

How do I represent this in sqlalchemy?

Obviously I can trivially do the relation stuff to get entry.flags.myflag, 
but nicer would be having the columns from the EntryFlags table appear in 
Entry as if they were inline, returning a default value where an EntryFlags 
column doesn't exist.  (Bonus for removing the EntryFlags row if all values 
are back to default ;-)

Thanks in advance.

-- vbi


-- 
The use of COBOL cripples the mind; its teaching should, therefore, be
regarded as a criminal offence.
-- E. Dijkstra, 1975




[sqlalchemy] Re: vertical partitioning

2009-09-30 Thread Adrian von Bidder
On Wednesday 30 September 2009 21.58:55 Kevin Horn wrote:

  I have a table Entry and a table EntryFlags (1:1 relation from
  EntryFlags to Entry).  The idea is that while there is a large number
  of Entry rows only a small number has flags set (and thus needs an
  entry in EntryFlags; note that they don't need to be booleans, despite
  the name).
 
  So having a separate table for the EntryFlags keeps the rows of Entry
  smaller and should also speed up if I select by certain flags.
 
  How do I represent this in sqlalchemy?
 
[...]
  the columns from the
  EntryFlags table appear in Entry as if they were inline, returning a
  default value where an EntryFlags column doesn't exist.  (Bonus for
  removing the EntryFlags row if all values are back to default ;-)

 Check out the Customizing Column Properties section in the Mapper 
 Configuration docs:
 http://www.sqlalchemy.org/docs/05/mappers.html#customizing-column-properties
 
 If I understand correctly, this should do what you want.
 (Disclaimer: haven't done it myself, YMMV)

Hmm.  Mapping an outer join to my actual model class would be part of it.  
Not sure how to easily assign default values for the part of the row not 
backed by a real row on the EntryFlags side (they shouldn't just be NULL; 
default values on the EntryFlags table wouldn't be of much use since they'd 
be applied only when the row would be generated and not for access to the 
outer join, afaict)
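
For the first part, a sketch of mapping a class over the outer join (table
and column names are assumptions; the defaults-for-missing-rows problem
remains exactly as described above):

    from sqlalchemy import outerjoin
    from sqlalchemy.orm import mapper

    # EntryFlags columns read as plain attributes on the mapped class;
    # they are simply None when no EntryFlags row exists for the Entry
    entry_join = outerjoin(entry_table, entryflags_table,
                           entry_table.c.id == entryflags_table.c.entry_id)

    class EntryWithFlags(object):
        pass

    mapper(EntryWithFlags, entry_join)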

(And as an aside: this has become a bit of an academic question for now; I've 
thought more about what I'm trying to do and redesigned the schema.)

cheers
-- vbi

 
 Kevin Horn
 

-- 
Do you understand now why we attacked Iraq?  Because war is good for the
economy, which means war is good for America.   Also, since God is on
America's side, anyone who opposes war is a godless un-American Communist.
-- excerpt from one of those 'joke' mails floating around.




[sqlalchemy] Re: Database with audit trail table

2009-08-22 Thread Adrian von Bidder
On Saturday 22 August 2009 01.08:05 David Bolen wrote:
 Adrian von Bidder avbid...@fortytwo.ch writes:
  Ideas  comments?

 For what it's worth, I'd think that the best sort of audit would be
 something done in the database itself, since it would audit any
 changes whether done through any interface.

Yes and no...

I see your point (and since I know pg better than sqla/python I'd probably 
even be quicker doing it in SQL), but the application has more knowledge 
about what's going on.

My audit table should not just version the db content, but I plan to 
record additional stuff that comes from the application (like: who 
authorized the change? etc.) which is not readily available at the db level.

(And then there's the fact that I'm writing this toy project to learn about 
sqlalchemy, so going ahead and writing an SQL audit trail framework misses 
this goal completely :-)

cheers
-- vbi
(Off to play around with SessionExtension some more ...)


-- 
featured link: http://www.pool.ntp.org




[sqlalchemy] Database with audit trail table

2009-08-21 Thread Adrian von Bidder
Heyho!

Instead of creating changeby / changed fields on all my tables, I'm 
planning to write some model classes where changes would be recorded in a 
separate audit trail table (the obvious benefit beyond not requiring the 
additional fields is that I can preserve the history as far back as I want)

So, like a table audit ( timestamp, dbchange, info, ...) where dbchange 
would be some kind of machine interpretable description and info would be a 
textual description of the event (created user blah, removed product x 
from order y, ...)

Where do I start to get this automated?

Session has the information about what needs to be done (insert, delete, 
update), and the model classes know what should be recorded into the audit 
records.  So where do I hook into so that I automatically can add these 
audit records at flush time?  (this would need to be after flush, actually, 
since autogenerated values need to be available.  But as long as I get into 
the same transaction I'm just fine.)
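
One possible shape for that hook, sketched with the 0.5-era
SessionExtension API (audit_table and the record contents are assumptions):

    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.orm.interfaces import SessionExtension

    class AuditExtension(SessionExtension):
        def after_flush(self, session, flush_context):
            # runs inside the flush's transaction, after autogenerated
            # ids and defaults exist; the new/dirty/deleted collections
            # still reflect what this flush just did
            for obj in session.new:
                session.execute(audit_table.insert().values(
                    dbchange='insert', info=repr(obj)))
            for obj in session.deleted:
                session.execute(audit_table.insert().values(
                    dbchange='delete', info=repr(obj)))

    Session = sessionmaker(extension=AuditExtension())

Audit rows written with session.execute() join the same transaction, so
they disappear with it on rollback, as noted below.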

Perhaps this has even been done before?

(Obviously, this kind of audit trail would be lost upon rollback.  For a 
first variant this is fine with me; later I' guess an option to use a 
separate session for the audit stuff would be nice, but that'd need to log 
which audit records were part of a transaction that was later rolled 
back...)

Ideas  comments?

cheers
-- vbi


-- 
Bill Dickey is learning me his experience.
-- Yogi Berra in his rookie season.




[sqlalchemy] Re: Two enhancement proposals for orm.Query

2009-06-18 Thread Adrian von Bidder
On Wednesday 17 June 2009 19.08:10 klaus wrote:
 ... whether a query will yield any result ...

 The best approximation
 seems to be
  query.first() is not None
 which can select a lot of columns. I often see
  query.count() > 0
 which can become quite expensive on a DBMS like PostgreSQL.

Just a side note that pg tends to compute first() efficiently if (and I 
assume sa does this) the implementation uses LIMIT to tell the db that 
really only the first row is interesting.  I don't know about other 
databases.
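
So a cheap existence test can look like this sketch (the model and the
filter are placeholders):

    def any_rows(query):
        # first() applies LIMIT 1, so the database can stop at the
        # first matching row instead of counting them all
        return query.first() is not None

    # selecting one small column avoids dragging wide rows over the wire
    have_things = any_rows(session.query(Thing.id).filter(Thing.a == 42))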

cheers
-- vbi


-- 
this email is protected by a digital signature: http://fortytwo.ch/gpg





[sqlalchemy] Re: union with two different orders

2009-06-07 Thread Adrian von Bidder
On Saturday 06 June 2009 17.39:20 naktinis wrote:
 I think this was not the case, since I didn't expect the merged result
 to be ordered.

 To be more precise, the query looks like:
 q1 = Thing.query().filter(...).order_by(Thing.a.desc()).limit(1)
 q2 = Thing.query().filter(...).order_by(Thing.a.asc()).limit(1)
 q = q1.union(q2).order_by(Thing.id).all()

 The q1 returns first filtered element with largest 'a' column, q2 -
 first with smallest 'a'.

 So, I guess my question is still valid.

You didn't mention limit in your first post, so I misunderstood what you 
were trying to do, sorry.

Yes, as Michael said, subqueries are the way to go.  I'm quite new to sa, so 
I can't help you there.

cheers
-- vbi


-- 
featured link: http://www.pool.ntp.org





[sqlalchemy] Re: union with two different orders

2009-06-06 Thread Adrian von Bidder

On Saturday 06 June 2009 14.18:33 naktinis wrote:
 I want to use union on two queries, which have different order:
 q1 = Thing.query().order_by(Thing.a)
 q2 = Thing.query().order_by(Thing.b)
 q = q1.union(q2).all()

SQL doesn't work as you think it does here.

A UNION does not concatenate the results of the two queries, but is allowed 
to return the result in any order.  ORDER BY can *then* be applied to the 
end result of your union.  So even if you use subqueries, the order by in 
the subqueries might just be ignored.

This is to allow the SQL query planner to be clever while building the union 
(perhaps a large union over two queries over the same table: if both queries 
require a table scan over the large table, the planner might decide to build 
the union by scanning the table only once while running both queries in 
parallel, so the table is loaded from disk once instead of twice.  The UNION 
would then contain the resulting rows in more or less random order.)

But I digress.

What you want to do is something like:

SELECT 1 as COL1, ... FROM ...
UNION
SELECT 2 as COL1, ... FROM ...
ORDER BY COL1, ...
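
In SQLAlchemy that pattern can be expressed at the Core level roughly like
this (a hypothetical thing table stands in for the real one):

    from sqlalchemy import literal_column, select, union

    s1 = select([literal_column('1').label('col1'), thing.c.id, thing.c.a])
    s2 = select([literal_column('2').label('col1'), thing.c.id, thing.c.a])

    # ORDER BY applies to the union as a whole, via the tag column
    rows = engine.execute(union(s1, s2).order_by('col1')).fetchall()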

cheers
-- vbi

 But after this query I get the MySQL error message "Incorrect usage of
 UNION and ORDER BY".

 I guess that this could be because having SELECT ... UNION SELECT ...
 ORDER BY B, it is not clear whether the second subquery or both
 queries should be ordered using B criteria. I think this can be solved
 by adding brackets to each of the subquery: (SELECT ...) UNION
 (SELECT ...).

 Is there any way to create this query using SQLAlchemy ORM?

 I am using SQLAlchemy 0.5.4.
 
-- 
Trust is good.  A lawyer is awesome.





[sqlalchemy] Re: querying into objects from a large flat table or have multiple processed tables mapped to objects?

2009-05-25 Thread Adrian von Bidder
[web logs -> db]

On Tuesday 26 May 2009 00.27:03 Michael Bayer wrote:
 the best thing to do would be to experiment with some various schemas  
 and see what works best

Also, it's extremely important to keep in mind that SQL databases can only 
work well with big tables if you create the right indices.  What kind of 
index to create depends on the database you'll use and on which queries 
you'll run.  (And: don't use a small example data set to decide which 
indices to build.  A database often will change its query plan, sometimes 
drastically, depending on how much data is in a table.)  So: create your 
schema, fill it with a few million log entries and then look at which 
queries might need which indices.

In the case of web logs, you'll probably want an index on every base field 
(timestamp, user, IP, URL string), but depending on your exact queries, 
combined indices on multiple columns, or indices on functions of fields 
(like, perhaps, an index on the ending of the URL to quickly filter jpg/png 
requests vs. html/php requests vs. requests on directories ending in '/') 
might speed up your queries dramatically.
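
A sketch of what that looks like in SQLAlchemy (log_table and the column
names are assumptions; expression indexes needed raw DDL at the time,
PostgreSQL syntax shown):

    from sqlalchemy import Index

    # single-column indexes on the hot filter fields; these are emitted
    # by metadata.create_all() or index.create(engine)
    Index('ix_log_timestamp', log_table.c.timestamp)
    Index('ix_log_ip', log_table.c.ip)

    # expression index on the URL's file extension
    engine.execute(
        "CREATE INDEX ix_log_url_ext ON log "
        "((substring(url from '\\.[a-z]+$')))")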

cheers
-- vbi

-- 
Even though I use zsh, holding down the TAB key just doesn’t write my
code for me, so I have to use a text editor.
-- Scott James Remnant





[sqlalchemy] Re: Q: fetch value of autoincrement column

2009-05-23 Thread Adrian von Bidder
On Friday 22 May 2009 23.00:05 Werner F. Bruhin wrote:
 What do you want to do with the autoincrement column?  Often these are
 used for primary keys, which in turn get used as foreign keys.

I want to use the id as filename; the table will cache some info that comes 
from the file.  Using it as a filename directly has several advantages:
 * I don't have to care about a unique filename, since the db already takes 
care of that.
 * I don't have to create an additional column with the filename.

Obviously, I can just first create the file with a temp filename, but I'd 
like to avoid this additional step if it's possible.  Less code == fewer 
bugs.  And I hoped that using the id from the beginning would be easily 
possible.

Right now, I'm undecided if I should just drop portability and fetch the 
next value from the mytable_id_seq that postgres has generated for me with 
the autoincrement columns (from what I understand, even if I declare the 
column autoincrement, if I set the id manually it will Just Work(tm) like it 
should), or if I should go the temporary filename route.  I'm leaning towards 
the former because I probably will drop portability at some point in any 
case, since PostgreSQL just has too many cool features waiting to be used 
:-)
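
The non-portable version is a one-liner; a sketch (the sequence name and
the file-writing helper are assumptions):

    # grab the id up front, use it for both the filename and the row
    next_id = engine.execute("SELECT nextval('mytable_id_seq')").scalar()
    write_file('%d.dat' % next_id)
    session.add(MyThing(id=next_id))
    session.commit()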

(And no, I'm not going to use blobs and store the file as a large object in 
the db.  Relational databases are not made for this, and besides it makes 
the db harder to maintain because db dumps get huge or the blobs have to be 
backed up separately, and debugging is harder because I can't use shell 
tools to get at the file data.)

(So you see, I'm not trying to build foreign keys by hand or any such thing.  
I actually know relational databases quite well, it's just my knowledge of 
SQLAlchemy and Python which is lacking.)

thanks
-- vbi

-- 
All activity is stronger at the center than toward the periphery.
Space between Mars and Jupiter.
	-- Goethe, Maximen und Reflektionen, Nr. 1336





[sqlalchemy] Re: Small note on reading SA docs

2009-05-22 Thread Adrian von Bidder
On Friday 22 May 2009 01.59:13 Michael Bayer wrote:
 otherwise if you have any advice on how to get 0.4/0.3  
 delisted from such a prominent place on Google, that would be  
 appreciated.

Since removing them entirely is an option for you, perhaps just completely 
remove them from search engines via robots.txt?  (And have a page linking to 
and describing the 0.3/0.4 docs in a way so that people searching for them 
explicitly still find them.)
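Something like this in the site's robots.txt would do it (the doc paths 
are hypothetical):

User-agent: *
Disallow: /docs/03/
Disallow: /docs/04/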

cheers
-- vbi

-- 
The most interesting [DNS weirdness] is that when I visit the Asus
website two Asus IPs (one in the US, one in Taiwan) will query my
nameserver for the . record for an entire week.
-- Koos van den Hout





[sqlalchemy] Re: Q: fetch value of autoincrement column

2009-05-22 Thread Adrian von Bidder
On Friday 22 May 2009 08.43:09 Alexandre Conrad wrote:
 Don't you want that non-null column to be a foreign key ?

Would that make a difference?

cheers
-- vbi


 2009/5/21 Adrian von Bidder avbid...@fortytwo.ch:
  Hi,
 
  Is it possible to fetch the values of an autoincrement field without
  flushing the object to the DB?
 
  (In postgres, I obviously can manually fetch nextval of the
  automatically generated sequence, but I lose the portability that way
  ...)
 
  Why?
 
  Because I need the id to generate data that will be filled into some
  (non- null) columns of the table row.  So I can't flush since I'll get
  an IntegrityError about non-null columns, and I can't fill those
  columns without knowing the id that's going to be assigned.
 
  (Yes, I can use dummy values, then flush(), and then update the row
  before committing.  But that's not exactly elegant...)
 
  cheers
  -- vbi
 
  --
  Fnord.


-- 
Faith moves mountains; doubt climbs them.
	-- Friedrich Georg Jünger





[sqlalchemy] Re: Creating something with a unique, random key

2009-05-22 Thread Adrian von Bidder
On Friday 22 May 2009 12.01:05 Iwan wrote:
 Naïvely, I thought you'd create an X, flush it, and then catch any
 IntegrityError's thrown.  [...]

I know that PostgreSQL can't continue in a transaction after an error; you 
have to roll back the transaction.  I don't know what the SQL standard says 
on this, but that's how postgres has always behaved.

I guess you just have to query for your string to see if it's unique.  
Performance-wise it shouldn't make a difference, and in Python, I usually 
find a simple if even nicer than a try-except block.
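A minimal sketch of that check-first approach (Thing, make_key() and the 
session are hypothetical stand-ins; the key column is assumed UNIQUE):

key = make_key()
while session.query(Thing).filter_by(key=key).count():
    key = make_key()  # collision: roll a new key and check again
session.add(Thing(key=key))
session.commit()

There is still a tiny race window between the check and the commit, so the 
UNIQUE constraint stays as the safety net.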

cheers
-- vbi


-- 
Every human being is unique.
	-- Benedetto Croce





[sqlalchemy] Re: Q: fetch value of autoincrement column

2009-05-22 Thread Adrian von Bidder
On Friday 22 May 2009 13.58:34 Alexandre Conrad wrote:
 Hello Adrian,

 2009/5/22 Adrian von Bidder avbid...@fortytwo.ch:
  On Friday 22 May 2009 08.43:09 Alexandre Conrad wrote:
  Don't you want that non-null column to be a foreign key ?
 
  Would that make a difference?

 That's what a foreign key is used for:
 http://en.wikipedia.org/wiki/Foreign_key

Oh, thanks a lot.  Not really helpful, though.

I was quite simply asking if it's possible to fetch the value of an 
autoincrement column (in a portable way, if possible) without causing a 
flush, because at the time when I want to use the value, the row is not 
complete yet.

Oh, well...

cheers
-- vbi





-- 
featured link: http://www.pool.ntp.org





[sqlalchemy] Q: fetch value of autoincrement column

2009-05-21 Thread Adrian von Bidder
Hi,

Is it possible to fetch the values of an autoincrement field without 
flushing the object to the DB?

(In postgres, I obviously can manually fetch nextval of the automatically 
generated sequence, but I lose the portability that way ...)

Why?

Because I need the id to generate data that will be filled into some (non-
null) columns of the table row.  So I can't flush since I'll get an 
IntegrityError about non-null columns, and I can't fill those columns 
without knowing the id that's going to be assigned.

(Yes, I can use dummy values, then flush(), and then update the row before 
committing.  But that's not exactly elegant...)

cheers
-- vbi

-- 
Fnord.





[sqlalchemy] Re: XX and YY are both of the same direction symbol 'ONETOMANY error

2009-04-09 Thread Adrian

I have exactly the same problem with 0.5.3. On one machine the mapping
works fine with 0.5.2; on another, with 0.5.3, I get the error you
mentioned.

On Apr 2, 3:36 pm, Andreas Jung li...@zopyx.com wrote:

 I am getting the following error after upgrading from a post-0.5.2 version
 (SVN) to 0.5.3:

 ==
 ERROR: testToolLookupVisitedBy (tool.tests.test_tool_lookup.TestToolLookup)
 - --
 Traceback (most recent call last):
   File /HRS2/local/lib/python2.4/unittest.py, line 260, in run
     testMethod()
   File
 /local/HRS2/Devel/junga/tb-dev/parts/modules-svn/toolbox/tool/tests/test_tool_lookup.py,
 line 47, in testToolLookupVisitedBy
     rows = TL.toolsVisitedBy('ajung')
   File
 /local/HRS2/Devel/junga/tb-dev/parts/modules-svn/toolbox/tool/__init__.py,
 line 70, in toolsVisitedBy
     visited_tools = session.query(VT).filter_by(benutzer=username.lower())
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/session.py,
 line 895, in query
     return self._query_cls(entities, self, **kwargs)
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/query.py,
 line 91, in __init__
     self._set_entities(entities)
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/query.py,
 line 100, in _set_entities
     self.__setup_aliasizers(self._entities)
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/query.py,
 line 114, in __setup_aliasizers
     mapper, selectable, is_aliased_class = _entity_info(entity)
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/util.py,
 line 492, in _entity_info
     mapper = class_mapper(entity, compile)
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/util.py,
 line 567, in class_mapper
     mapper = mapper.compile()
   File
 /local/HRS2/Devel/junga/tb-dev/eggs/SQLAlchemy-0.5.3-py2.4.egg/sqlalchemy/orm/mapper.py,
 line 653, in compile
     raise sa_exc.InvalidRequestError(One or more mappers failed to
 compile.  Exception was probably 
 InvalidRequestError: One or more mappers failed to compile.  Exception
 was probably suppressed within a hasattr() call. Message was: One or
 more mappers failed to compile.  Exception was probably suppressed
 within a hasattr() call. Message was: One or more mappers failed to
 compile.  Exception was probably suppressed within a hasattr() call.
 Message was: One or more mappers failed to compile.  Exception was
 probably suppressed within a hasattr() call. Message was:
 Hierarchies.subscribed_by and back-reference Hierarchies.subscriber are
 both of the same direction symbol 'ONETOMANY'.  Did you mean to set
 remote_side on the many-to-one side?

 The related code is:

     class Hierarchies(Base, AsDictMixin):
         __tablename__ = 'hierarchies'
         __table_args__ = ( { 'autoload' : True, })
         __mapper_args__ = ({'extension' : HierachiesDeletionLogger()})

         id = Column(Integer, Sequence('hierarchies_seq'), primary_key=True)
         parent_id = Column(Integer, ForeignKey('hierarchies.id'))
         hierarchyshare_id = Column(Integer, ForeignKey('hierarchies.id'))
         pos = Column(Integer)

     Hierarchies.subscribed_by = relation('Hierarchies',
                          primaryjoin=Hierarchies.hierarchyshare_id==Hierarchies.id,
                          backref=backref('subscriber',
                                          remote_side=Hierarchies.hierarchyshare_id),
                          remote_side=Hierarchies.hierarchyshare_id,
                          uselist=True,
                          )

 Is there anything I am missing, or did something change over the last two
 weeks?

 Andreas

 --
 ZOPYX Ltd. & Co. KG - Charlottenstr. 37/1 - 72070 Tübingen - Germany
 Web: www.zopyx.com - Email: i...@zopyx.com - Phone +49 - 7071 - 793376
 Registergericht: Amtsgericht Stuttgart, Handelsregister A 381535
 Geschäftsführer/Gesellschafter: ZOPYX Limited, Birmingham, UK
 --
 E-Publishing, Python, Zope & Plone development, Consulting


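The hint at the end of that error message is usually the whole story: the 
scalar (many-to-one) side of a self-referential relation needs remote_side 
pointing at the primary key; putting hierarchyshare_id on both sides makes 
both directions look like one-to-many.  A minimal sketch using the column 
names from the quoted code (everything else is assumed):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relation

Base = declarative_base()

class Hierarchies(Base):
    __tablename__ = 'hierarchies'
    id = Column(Integer, primary_key=True)
    hierarchyshare_id = Column(Integer, ForeignKey('hierarchies.id'))

# collection of rows whose hierarchyshare_id points at this row's id;
# the many-to-one backref carries remote_side=[id], which is what
# disambiguates the two directions
Hierarchies.subscribed_by = relation(
    Hierarchies,
    primaryjoin=Hierarchies.hierarchyshare_id == Hierarchies.id,
    backref=backref('subscriber', remote_side=[Hierarchies.id]),
)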

[sqlalchemy] where clause for calculated property with alias

2008-04-23 Thread Adrian

I guess the solution to my problem is simple, although I did not
manage to find it.

The problem is as follows: I calculate the Bray-Curtis distance
between an input and the rows in my table and give the value an alias
('brayCurtis'). What I want is to order the result set by brayCurtis
and return only those rows where the value exceeds a given threshold.
Ordering is pretty simple:

query = query.order_by(query.c.brayCurtis.desc())

The problem occurs if I want to add an additional where clause like
this:

query = query.where(query.c.brayCurtis >= cutoff)

Suddenly the whole statement is duplicated. Maybe there is a more
concise and elegant solution for this problem.



[sqlalchemy] Re: where clause for calculated property with alias

2008-04-23 Thread Adrian

I should have mentioned that - cutoff is simply a float, e.g.

query = query.where(query.c.brayCurtis >= 0.8)

that's why I think there is a trivial solution. If I do the above, the
whole query is added as a subquery and the WHERE and ORDER BY clauses
are duplicated. It works fine for the order_by statement, though.

Is there a recommended strategy for adding (WHERE) clauses to
calculated columns? Actually, what I do is this:

abs(ar1-ar2).sum() / float( abs(ar1+ar2).sum() )   # NumPy notation

The corresponding SQL query is quite verbose; that's why I used a
column label.

On Apr 23, 2:01 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 23, 2008, at 6:55 AM, Adrian wrote:





  I guess the solution to my problem is simple, although I did not
  manage to find it.

  The problem is as follows: I calculate the Bray-Curtis distance
  between an input and the rows in my table and give the value an alias
  ('brayCurtis'). What I want is to order the result set by brayCurtis
  and return only those rows where the value exceeds a given threshold.
  Ordering is pretty simple:

  query = query.order_by(query.c.brayCurtis.desc())

  The problem occurs if I want to add an additional where clause like
  this:

  query = query.where(query.c.brayCurtis >= cutoff)

  Suddenly the whole statement is duplicated. Maybe there is a more
  concise and elegant solution for this problem.

 I'm not sure what cutoff is, but if it's a SELECT statement, try using
 cutoff.as_scalar().  This would solve the issue of extra elements
 being added to the FROM clause, if that's what's happening.

 But also, if cutoff is in fact a subquery, and the issue is that a
 labeled column from the columns clause is not appearing as that
 label in the WHERE clause, the subquery needs to be rendered at that
 point.  SQL doesn't officially allow ORDER BY or WHERE criteria based
 on labels in the columns clause of the select statement - it will fail
 on some databases, so SQLA currently doesn't place labels (i.e. column
 AS foobar) anywhere outside the columns clause.  A good optimizer
 should figure out that the columns clause and WHERE clause are the
 same thing.
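In other words: either repeat the full expression in the WHERE clause, or
select from an aliased subquery so the label becomes a real column.  A
sketch of the latter, with a made-up two-column table standing in for the
real Bray-Curtis formula:

from sqlalchemy import MetaData, Table, Column, Integer, Float, func, select

metadata = MetaData()
samples = Table('samples', metadata,
    Column('id', Integer, primary_key=True),
    Column('a', Float),
    Column('b', Float),
)

# label the verbose expression once, inside an inner select
dist = (func.abs(samples.c.a - samples.c.b) /
        func.abs(samples.c.a + samples.c.b)).label('brayCurtis')
inner = select([samples.c.id, dist]).alias('scored')

# filter and order on the derived table's column; the big expression is
# rendered only once, inside the subquery
query = (select([inner.c.id, inner.c.brayCurtis])
         .where(inner.c.brayCurtis >= 0.8)
         .order_by(inner.c.brayCurtis.desc()))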



[sqlalchemy] Using MySQLdb type conversion as engine connection argument

2008-01-23 Thread Adrian

I am trying to change the default column type mapping in sqlalchemy.
Analogous to the description in the MySQLdb User's Guide (http://mysql-
python.sourceforge.net/MySQLdb.html) I tried the following.

from MySQLdb.constants import FIELD_TYPE
my_conv = { FIELD_TYPE.DECIMAL: float }
ENGINE = create_engine('mysql://%s:%s@%s:3306/%s',
                       connect_args={'conv': my_conv})

This works if I use MySQLdb directly, but not with SQLAlchemy. I
suspect the syntax I used is wrong. It will create an engine and a
connection, but as soon as a query is issued a TypeError is raised. Is
there an error in my connect_args dictionary, or do I have to change
the mapping behaviour somewhere else?
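One likely culprit (an assumption, since the traceback isn't shown):
MySQLdb's conv argument replaces the entire default conversion table
rather than updating it, so with a one-entry dict every other type loses
its converter.  Starting from a copy of the defaults avoids that
(credentials below are placeholders):

import MySQLdb.converters
from MySQLdb.constants import FIELD_TYPE
from sqlalchemy import create_engine

# copy the default converters, then override just DECIMAL
my_conv = MySQLdb.converters.conversions.copy()
my_conv[FIELD_TYPE.DECIMAL] = float

engine = create_engine('mysql://user:password@localhost:3306/mydb',
                       connect_args={'conv': my_conv})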



[sqlalchemy] Query object behavior for methods all() and one()

2008-01-14 Thread Adrian

I am a bit confused by the behavior of the methods all() and one() when
the Query returns an empty result set. In the case of all() it returns an
empty list, whereas one() throws an exception
(sqlalchemy.exceptions.InvalidRequestError). I am sure there was a reason
to implement it as it is now, but wouldn't it be more convenient to simply
return None (or an empty string) and throw an exception only if more than
one row is returned? An empty result set as such is valid and shouldn't be
treated as an error.
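For reference, Query.first() already has the asked-for semantics: it
returns None on an empty result and never raises for multiple rows (it
simply takes the first one).  A minimal sketch (Thing is a hypothetical
mapped class):

obj = session.query(Thing).filter_by(name='x').first()
if obj is None:
    obj = Thing(name='x')  # e.g. create the missing row on the fly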